Using JConsole in combination with SNMP

I have created a simple agent (using the version 5 JDK) to handle an MBean. Within this agent I have created two adapters: an HTML and an SNMP adapter. The HTML adapter is working fine via the browser (I can execute the operations on the MBean).
Question is: can I use JConsole to connect to the SNMP adapter? If not, how then?
Code:
public void addSnmpAdapter(int port) throws Exception {
    // Register and start the SNMP adaptor in the agent's MBeanServer
    // ("mbs" is the MBeanServer field of this agent class)
    SnmpAdaptorServer snmpAdapter = new SnmpAdaptorServer();
    ObjectName snmpAdapterName = new ObjectName(
            "MBeanAgent:name=snmpadapter,type=SNMP,port=" + port);
    snmpAdapter.setPort(port);
    mbs.registerMBean(snmpAdapter, snmpAdapterName);
    snmpAdapter.start();
    System.out.println("SnmpAdapter on port " + port);
}

Start command:
java -Dcom.sun.management.jmxremote -Dcom.sun.management.snmp.acl=false -Dcom.sun.management.snmp.port=162 MBeanAgent

Hi Willem,
No, you cannot use JConsole to "connect to the SNMP adapter".
JConsole is a JMX Console, not an SNMP console.
What are you trying to do with that SNMP Adaptor anyway?
If you use Java DMK to start an SNMP adapter, and do nothing more,
the adaptor will show you nothing. It is empty. You still need to
generate, implement, and add at least one SNMP MIB to the adaptor.
For more information on this, see the Java DMK documentation.
http://java.sun.com/products/jdmk/jdmk_docs.html
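For illustration only, here is a rough sketch of that extra step, assuming you have already run the Java DMK mibgen tool on your MIB definition and it generated a class called MY_APP_MIB (a hypothetical name; mibgen derives it from your MIB module). It would go at the end of your addSnmpAdapter method, with an import of com.sun.management.snmp.agent.SnmpMib:

// Instantiate the mibgen-generated MIB (MY_APP_MIB is a placeholder name)
SnmpMib mib = new MY_APP_MIB();

// Registering the MIB as an MBean is optional, but it then also shows up
// in the HTML adaptor and in JConsole
ObjectName mibName = new ObjectName("MBeanAgent:name=myAppMib,type=MIB");
mbs.registerMBean(mib, mibName);

// Bind the MIB to the running SNMP adaptor so its OIDs become reachable
mib.setSnmpAdaptor(snmpAdapter);

Only once a MIB is bound like this does the adaptor have anything to answer with.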
Java SE 5.0 has a built-in SNMP agent that can be started using System
properties on the Java command line. This agent implements a single MIB,
the JVM-MANAGEMENT-MIB, that exposes the JVM Management and Monitoring
data through SNMP.
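To actually query either agent you need an SNMP manager, not JConsole. For example, with the Net-SNMP command-line tools, something like this should walk the JVM-MANAGEMENT-MIB exposed by the built-in agent started with your command above (1.3.6.1.4.1.42 is Sun's enterprise OID subtree; the "public" community is just an assumption here, and your -Dcom.sun.management.snmp.acl=false setting disables the ACL check):
snmpwalk -v 2c -c public localhost:162 1.3.6.1.4.1.42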
See here for more info on the JVM SNMP Agent:
What is the JVM SNMP Agent?
http://blogs.sun.com/jmxetc/entry/what_is_the_jvm_snmp
See also here for a discussion of using SNMP versus JMX to monitor your JVMs.
JVM Monitoring: JMX or SNMP?
http://blogs.sun.com/jmxetc/entry/jmx_vs_snmp
Hope this helps,
-- daniel
JMX, SNMP, Java, etc...
http://blogs.sun.com/jmxetc

Similar Messages

  • Using mod_alias in combination with dispatcher

    Hi all,
    In the example dispatcher configuration there is a setting called DispatcherDeclineRoot which has a comment saying: if turned to 1, request to / are not handled by the dispatcher, use the mod_alias then for the correct mapping.  Can anyone explain how to use mod_alias in combination with the dispatcher module?  For example, how can I configure it to serve static content for a specific alias and go to CQ for other requests?  Can someone provide an example configuration?
    Many thanks,
    Jan

    Hi,
    It still doesn't quite work the way I want it to...  Indeed, when you deny it in the dispatcher config, apache is trying to serve it.  However, apache is looking for the file under /var/www/html which is the docroot configured in the dispatcher config.  But my files are somewhere else, and there's an Alias defined for this in my apache config file.  For some reason apache is not using this alias definition, the dispatcher docroot setting seems to be taking precedence.  I tried several of the dispatcher settings (like DispatcherDeclineRoot and DispatcherUseProcessedURL) but to no avail.  The reason why I'm not just putting my static files under /var/www/html is because I don't want them to get mixed up with the CQ cache.  I want to be able to clear everything in the cache without touching these files I want to serve statically. 
    Any thoughts?
    Regards,
    Jan

  • Using servlet in combination with public_html subfolders

    Hi,
    I'm fairly new to servlet and JSP technology. Up to this point I have only really used PHP for developing dynamic web content. But since I have finally been working more with Java, I want to start using Java for my web needs as well. So I've started with servlets and am currently exploring servlet filters. Things started out pretty smoothly, but now I've run into some problems.
    What I'm building now is a servlet filter that will check if the user already has a session. If the user doesn't have one, the user should be redirected to a login page. Getting this logic to work hasn't been the problem. My problem started when I wanted to place .jsp files in a subfolder.
    First of all, I wasn't able to create a subfolder from JDev 10.1.3, so I had to do this through the file system (added a subfolder in the public_html folder).
    Then when I tried to implement the filter I had a hard time getting the JSP pages and the servlets to work together. This is my setup:
    I have a loginservlet that handles the login post request and sets the session if all is ok.
    I have both the welcome.jsp and login.jsp in a subfolder called "app".
    Then I have a servlet filter.
    When trying to access the welcome.jsp, the filter successfully redirects the request to the login.jsp. But from here things go wrong. First, the login page has problems retrieving the CSS stylesheet. And when it submits the form, it doesn't seem to process correctly, since it doesn't move on to the welcome.jsp.
    Can someone point me in the direction of some sort of guide on how to incorporate subfolders for JSP pages in combination with servlets? And is creating subfolders in any way possible from JDeveloper?
    Message was edited by:
    Deddiekoel

    What kind of URL do you use to access the css files etc. from your jsp? Your URL needs to follow the directory structure, so you may need e.g. "../css/style.css"
    Also make sure all your files are imported into jdev, since files that aren't won't be copied into the server instance.
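    For what it's worth, here is a minimal sketch of the kind of filter described above, under the assumption that login.jsp lives in the "app" subfolder and the login servlet is mapped to /loginservlet (the servlet mapping is a guess, adjust it to your own web.xml). It deliberately lets the login page, the login servlet and static resources such as .css files pass through; if the filter also intercepts the stylesheet request, that alone can explain the missing CSS:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical login-check filter: requires a session for everything except
    // the login page, the login servlet and static resources.
    public class LoginFilter implements Filter {

        public void init(FilterConfig config) throws ServletException {
        }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String path = request.getServletPath();
            boolean loginRelated = path.equals("/app/login.jsp") || path.equals("/loginservlet");
            boolean staticResource = path.endsWith(".css") || path.endsWith(".gif") || path.endsWith(".png");

            // A real application would typically also check a user attribute set by the login servlet
            HttpSession session = request.getSession(false);
            if (session == null && !loginRelated && !staticResource) {
                // Context-relative redirect, so it works regardless of the current subfolder
                response.sendRedirect(request.getContextPath() + "/app/login.jsp");
                return;
            }
            chain.doFilter(req, res);
        }

        public void destroy() {
        }
    }

    The filter itself would be mapped in web.xml with a filter-mapping whose url-pattern covers the pages you want to protect (for example /app/*).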

  • Using oracle in combination with the mean-stack

    Hi all,
    I have been exploring the MEAN stack and saw the oracle-nodejs driver passing by.
    I am wondering what the best way is to replace/combine the MongoDB with an Oracle DB (11g).
    At this moment I have tried using Mongoose for schemas and a JSON to XML parser to retrieve an XML representation of the object I want to save.
    To do some DML in the DB I created some procedures which use the XMLType in combination with XQuery to update/insert.
    Although it is working, I am wondering if this is the way to go. What are your experiences?
    Regards,
    Romano

    The way to go is to use node-oracledb with Oracle 12.1.0.2 JSON datatype support: http://docs.oracle.com/database/121/ADXDB/json.htm
    However, since you are still on 11g, using stored procedures to encapsulate logic seems like a good start.
    You'll lose some performance using any ORM: do you really need one?

  • How do I use CFTHREAD in combination with CFFILE Upload

    I've been reading how CFTHREAD is supposed to release the client so that long-lasting operations like a CFFILE upload can continue to do their business while the client does something else. How does this work exactly? I can't find any concrete references.
    For example, I have a regular file upload form. A user uploads a large file, say 100Mb, and I want to use CFTHREAD (or something) to handle the upload process, which I know may take hours depending on the connection, and immediately return the page for the client saying "Your file is being processed. Please check back later."
    I tried wrapping some CFFILE code with CFTHREAD, but when I ran it the page just sat there working while the file was being uploaded, so I did not witness any change whatsoever in how it's being handled. What am I missing?
    <cfthread action="run" name="uploadVideo">
        <cffile
            action="upload"
            destination="/var/www/video1/assetsIN"
            nameconflict="overwrite"
            filefield="video" />
    </cfthread>
    Thanks for any help.
    UPDATE: I asked Ben Forta this in an email yesterday and here was his response:
    What you are doing is correct. The actual file processing (getting the uploaded file) will not happen in a separate thread. Were you expecting a separate thread for the upload itself? CF can't do that for you as CF does not have the file until it is uploaded. Actually, CF is not even called until the form is submitted (which includes the file being uploaded). It sounds like you want the browser to do an upload in the background. This may be doable using JavaScript and Ajax type controls. Or maybe using browser tabs or something. -Ben

    With file uploads the time consumption is between the client PC and the web server, not the web server and the CF server (which are usually on the same machine). All <cffile action="upload"> does is copy the uploaded file from the web server's temp upload dir to [wherever you tell it to go]: it's a local process, and should be fairly quick; certainly compared to the process of getting it to the web server in the first place.
    Even if the data transmission was between the client PC and the CF server, you can't expect <cfthread> to somehow increase the bandwidth and speed up the file upload between the two machines.
    <cfthread> could come into its own if you had some file processing to perform on the file *after* it's uploaded: unzipping it, doing some image manipulation, parsing a CSV file into a DB, that sort of thing. Not for the initial upload.
    Adam

  • Using VPD in combination with a user table?

    I'm very new to VPDs. In fact, I don't know a thing about them yet (I know the philosophy and the principle behind them, but not the practical implementation). My question: are VPDs always based on database users? Our applications have a user table now, where the access rights to applications are stored. Once a user is present in that table and has the necessary rights, he can log in to the application. So we don't have an actual database user for each "real-life" user, just an entry in a table.
    Is it possible to use the system of VPDs (and maybe Oracle Label Security) with users stored in a table, instead of actual database users?

    TomVD wrote:
    My question: are VPDs always based on database users?
    No, they are not. You could for example put VPD policies on tables that restrict access after a certain time of day (not caring which user attempts to access the data, using only SYSDATE and a given cut-off access time).
    TomVD wrote:
    Is it possible to use the system of VPDs (and maybe Oracle Label Security) with users stored in a table, instead of actual database users?
    Yes, you can.
    VPD allows you to construct a predicate as you would like based on your requirements (you are basically appending a WHERE clause to every query based on the logic you dictate on the objects and accesses you determine necessary).
    Typically, if you're running through a connection pool (as it sounds like you are) you would use an application context to set a specific value (the logged-in user) and then validate that against your Users table in whatever fashion tickles your fancy.
    [Some Tutorials|http://www.google.ca/#hl=en&source=hp&q=oracle+vpd+tutorial&btnG=Google+Search&meta=&aq=0&oq=oracle+vpd+&fp=8e6c6930b7d53e73] may also be helpful
    and of course .. [The Documentation|http://download.oracle.com/docs/cd/E11882_01/network.112/e10574/vpd.htm]

  • Using formulas in combination with dates

    I'm a salesman and I track my daily sales in Numbers. I have 5 columns: date, sales number, sales amount, then other stuff. For another table, I need a formula to track the sales for a certain two-week pay period: one formula that sums the total $ amount of the sales and one that just counts the number of sales. Any help would be greatly appreciated.
    Thanks

    To get the count, I had to use the COUNTIF and CONCATENATE functions together.
    If my compare date is in A1 and my information is in Table 2 (Column A is dates and B is dollars; modify to fit your final data), it would look like this:
    =countif(Table 2::A,concatenate(">=",A1))
    And the sum of the Column B in table 2 for the same condition would be:
    =Sumif(Table 2::A,concatenate(">=",A1),Table 2::B)
    Jason

  • Using ADF in combination with ESB

    I can invoke my ESB Service from my ADF application but I would like to show these results in the ADF application as well. Is there a sample or how-to on how to accomplish this?
    I have a synchronous service that will perform a search on my db, getting the criteria from my ADF application and I would like to show the results returned by the ESB back into my ADF Application.

    Hi All,
    I have the same problem in calling BPEL or ESB from ADF. Is there any document or example on this issue?
    Any help would be highly appreciated.
    Thank you in advance,
    Alireza

  • How can I use the DAQMX in combination with a FIREWIRE MIO?

    We had problems using DAQmx in combination with an NI FireWire MIO acquisition card (according to an explanation from an NI consultant). Is there a way to solve that problem?

    NI's manual claims there will be DAQmx support for FireWire in the future. I hope so too.
    Best,
    Davy

  • Can Oracle be forced to use the spatial index for sdo_filter in combination with an or clause? Difference between Enterprise and SE?

    We’re seeing the following issue: sql - Can Oracle be forced to use the spatial index for sdo_filter in combination with an or clause? - Stack Overflow (posted by a colleague of mine) and are curious to know if this behaviour is due to a difference between Standard and Enterprise, or whether we could be doing something else wrong in our DB config.
    We have also reproduced the issue on the following stacks:
    Oracle SE One 11.2.0.3 (with Spatial enabled)
    Redhat Linux 2.6.32-358.6.2.el6.x86_64 #1 SMP Thu May 16 20:59:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
    11.2.0.3.0 Standard Edition and 11.2.0.4.0 Standard Edition (both with Spatial enabled)
    Microsoft Windows Server 2003R2 Standard x64 Edition
    However, the SQL works fine if we try it on Oracle 11.2.0.3.0 *Enterprise* Edition.
    Any help or advice would be much appreciated.
    Kindest Regards,
    Kevin

    In my experience sdo_filter ALWAYS uses the spatial index, so that's not the problem. Since you did not provide the explain plans, we can't say for sure but I think yhu is right: Standard Edition can't use the bitmap operations, and thus it'll take longer to combine the results of the two queries (because the optimizer will surely split this OR up in two parts, then combine them).
    BTW: when asking questions about queries here, it would be nice if you posted the queries here as well, so that we do not have to check another website in order to see what you are doing. Plus it will probably get you more answers, because not everyone can be bothered to click on that link. It would also have been nice if you had posted your own answer on the other post here as well, because my recommendation would have been to use union all - but since you already found that out for yourself my recommendation would have been a little late.

  • Use of business partner functionality in combination with dual control

    Is there a generally agreed upon procedure to use the business partner functionality in combination with dual control? The problem is that when you block a business partner, the customer/ vendor master data aren't blocked automatically. You can then still use them in transactions, which leads to problems.
    So is there a way to make this work or a procedure to do this?
    Niels Vanwingh

    Thanks Masa,
    I did notice the 'Add external supplier from' in create supplier or bidder option. However there is a small catch and your experience may help.
    Let me explain the requirement and scenario here in SRM 7.
    We are implementing the Registration of Supplier scenario; both ROS and SRM are in the same client. When a potential supplier registers themselves in the registration system, a BP number is created (an internal number range is defined for this). After accepting the potential supplier in the pre-select screen, the purchaser has two options to transfer the potential supplier from the ROS system to SRM:
    Option 1: He can select the accepted potential supplier from the supplier directory option and transfer the business partner to SRM. In this case the business partner number of the potential supplier is retained in SRM and a business partner with supplier and bidder tag is created. However the purchaser does not have any option to select which type of business partner he would like to create like supplier or bidder.
    Option 2: The purchaser can go to the create supplier or bidder option and choose the 'Add external supplier form' from the ROS system and create the business partner. The ROS business partner details are copied to the create supplier screen, but the purchaser has to provide an external business partner number for the supplier. This is because we have defined an external number range for business partners for the vendors that are replicated from ERP to SRM.
    The objective is that the ROS business partner should be retained in SRM, with the option to create it as supplier and bidder, and then to manually create the ERP supplier with the same SRM BP number and map it against the SRM supplier.
    Is there any way we can achieve this?
    In SRM 5.5, with the Manage business partner functionality, we could achieve this, as the system gave us the option of which business partner type we would like to create, as well as retaining the ROS BP number in SRM.
    Regards
    Sandeep

  • Kernel debugging using git bisection method in combination with abs.

    Is there any way to use the git bisection method for kernel debugging in combination with ABS, building the kernel via makepkg?

    Yes, but ABS for the kernel uses some patches in order for the kernel to be built successfully.
    ftp://ftp.archlinux.org/other/kernel26/
    What I wanted to know is whether there is any way to determine which particular patch to use at each bisection point.

  • Using own SQLite DB in combination with Data Management

    Hi,
    Currently, on a huge project, we are using LCDS 3.1 in combination with an AIR 2.5 client.
    I've been reading "Using Adobe LiveCycle Data Services ES2 version 3.1" and I have a question. In the chapter "Building an offline-enabled application" it says on the very first line:
    "You can use an offline adapter with an AIR SQLite database to perform offline fills when a desktop client is disconnected from the LiveCycle Data Services server. An offline adapter contains the SQL queries for AIR SQLite for retrieving cached items like an assembler on the server retrieves items from the data source."
    However, in my experience that AIR SQLite database is not just any DB, but one that Data Management designs and generates itself, based on the DTOs the Data Management destination is managing. The offline adapter doesn't work like an assembler at all, because the documentation says you can only override the methods pertaining to constructing the WHERE and ORDER BY parts of the queries, not the SELECT, CREATE, FROM,... parts.
    In our case, we have a database on the server, constructed according to a very specific ERD, and we have a SQLite database on the client, also constructed according to a very specific ERD. What we want to do is execute every fill, create, update, delete against the offline cache and only synchronize with the backend when we want it to synchronize (technically possible by playing with the autoMerge, autoSaveCache, autoConnect,... properties). So what part of Data Management can we customize to use our DB instead of a generated one?
    Thx in advance!

    You are correct in noting that Data Management does not allow you to use your own database to store offline data.  This data is exclusively managed by the LCDS library for the developer.  The intent is that the local cache is a reflection of the server data, not an independent copy.
    If you have an existing database in AIR, then you will have much more direct control over the querying and updating of that data by using the SQLite APIs directly.
    That being said, you can in essence replicate the data stored on the server, managed by Data Management, in the offline cache.  In an upcoming release (winter 2011) we will have a few features ('briefcases' and a 'changes-only' fill) that will make this story even more compelling for your use cases.  But even with the 3.1 functionality, you can do something like the following:
    Perform a fill() to collect the data you want to have available on the client, save this in the offline cache
    Construct an Offline Adapter Actionscript class that implements the fills you want to perform on the local data
    Use the DataService.localFill() API to perform all of the client application fills, turn off autoCommit.
    When the client is online, call commit() to store client changes and call fill() to refresh the cached data.
    This should give you some ideas on how you could go about constructing your app to leverage the offline features of Data Services.
    Tom

  • Can I keep using Bridge CS3 in combination with Photoshop CS5

    Can I keep using Bridge CS3 in combination with Photoshop CS5?

    I have a question sort of relevant to this discussion. Had my 8800GT start to fail on me in my MacPro1,1 - in desperation I disassembled the card, cleaned it, applied fresh thermal compound/padding, and stuck it back in. To my amazement the card actually began working again! Before failing a few days later ; ;.
    I ordered a replacement 8800GT on ebay for a considerably low cost, but before it arrived my computer began behaving normally again. From what I'm understanding from this thread, there is absolutely no benefit to installing both cards in my computer at the same time? If that's the case, I'm going to tuck the backup card away and keep riding my self-repaired card until it crashes again.

  • Use of action links in combination with hierarchy object in analysis

    Hi All,
    From the OBI 11g release, hierarchy objects are available in the presentation layer of OBI. We often make use of these hierarchy objects. We also make use of action links to drill down to an analysis with more detailed information. This more detailed analysis should inherit the values of the higher-level analysis. In combination with the hierarchy objects this can cause problems.
    An example:
    Let's say we have the following attributes:
    A hierarchy object date (year, quarter, month), a product column, and a '# of units shipped' measure. On the column '# of units shipped' we have defined an action link to a more detailed analysis.
    Now, let's say the user clicks on the year 2011 in the hierarchy object. The quarters for 2011 are shown. The 'normal' product column contains 'Product A'. The user makes use of the action link that is available by clicking on the measure value of '# of units shipped'. He clicks on the value that is based on quarter 2 of 2011 and 'Product A'. The detailed analysis is set to filter on the product and date dimension. In this example the analysis is filtered on 'Product A' but not on the value Quarter 2, 2011. Hierarchy object values are not passed through when clicking on an action link.
    Is there any workaround for this issue? We want to make use of the hierarchy object, but users expect the hierarchy object value to also be passed to the underlying detailed analysis.

    I am facing the same issue...
    First check it out:
    http://prasadmadhasi.com/2011/12/15/hierarchicalnavigationinobiee11g/
    http://www.rittmanmead.com/2010/10/oracle-bi-ee-11g-navigation-passing-parameters-using-hierarchical-columns/
    I solved it by using a Go URL link. I pass the value of the hierarchy level (whichever level it is). For Year it is 2012, for Month: 2012-03.
    Now the ugly part: in the URL I pass 2 parameters that refer to the same value: "Time"."Year"="2012", "Time"."Month"="2012"
    In the target report I apply the filter: "Time"."Year" "Is Prompted" OR "Time"."Month" "Is Prompted"
    This way, in the target report only one of the filters will work, depending on which level you navigated from in the source report.
    If you decide to use this approach, be careful with the URL syntax, remember about double quotes etc. In my case it was:
    Parameter : value
    PortalGo : [LEAVE EMPTY]
    path : %2Fusers%2Fweblogic%2FMIS%20EVAL%2FT_Target
    options : dr
    Action : Navigate
    col1 : "Time"."Year"
    val1 : [SELECT TIME HIERARCHY COLUMN]
    col2 : "Time"."Month"
    val2 : [SELECT TIME HIERARCHY COLUMN]
    Remember to:
    Remove “=” after PortalGo
    Surround value attribute with double quotes - e.g. @val1=”@{6}”
    After you "adjust" your URL text manually (unfortunatelly it won't be done automatically) it should look like:
    http://10.10.10.100:7001/analytics/saw.dll?PortalGo@{1}&path=@{2}&options=@{3}&Action=@{4}&col1=@{5}&val1=”@{6}”&col2=@{7}&val2=”@{8}”
