Best practice to manage a SessionBean with JMX

Hi,
I have a SessionBean and a user-specific MBean. The SessionBean holds a static
boolean which should be controlled by the MBean.
Now, should the MBean tell the SessionBean when the state changes, or should the
SessionBean set up a ChangeNotificationListener and react to the notification?
What's best practice here?

Since JMX handles notifications, it would be better if the session bean
listens for any attribute change notifications on the user-specific
MBean and reacts accordingly.
-satya
Tomy wrote:
Hi,
I have a SessionBean and a user-specific MBean. The SessionBean holds a static
boolean which should be controlled by the MBean.
Now, should the MBean tell the SessionBean when the state changes, or should the
SessionBean set up a ChangeNotificationListener and react to the notification?
What's best practice here?
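
For reference, here is a minimal sketch of the approach satya describes. It assumes the user MBean is a standard MBean already registered under the ObjectName "example:type=UserSettings" (a placeholder), that the attribute is called "featureEnabled", and that each type lives in its own source file; adjust the names to your own setup.

import java.lang.management.ManagementFactory;
import javax.management.AttributeChangeNotification;
import javax.management.AttributeChangeNotificationFilter;
import javax.management.MBeanServer;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationListener;
import javax.management.ObjectName;

// --- UserSettingsMBean.java (placeholder names) ---
public interface UserSettingsMBean {
    boolean isFeatureEnabled();
    void setFeatureEnabled(boolean featureEnabled);
}

// --- UserSettings.java ---
// The user-specific MBean broadcasts an AttributeChangeNotification whenever
// the flag is changed through JMX.
public class UserSettings extends NotificationBroadcasterSupport implements UserSettingsMBean {

    private boolean featureEnabled;
    private long sequence;

    public synchronized boolean isFeatureEnabled() {
        return featureEnabled;
    }

    public synchronized void setFeatureEnabled(boolean newValue) {
        boolean oldValue = featureEnabled;
        featureEnabled = newValue;
        sendNotification(new AttributeChangeNotification(this, ++sequence,
                System.currentTimeMillis(), "featureEnabled changed",
                "featureEnabled", "boolean", oldValue, newValue));
    }
}

// --- UserSettingsListener.java ---
// Registered by (or on behalf of) the session bean, e.g. from ejbCreate() or a
// startup class; it simply mirrors the attribute into the static flag.
public class UserSettingsListener implements NotificationListener {

    // the flag the session bean consults; volatile so every caller sees updates
    public static volatile boolean featureEnabled;

    public void handleNotification(Notification notification, Object handback) {
        if (notification instanceof AttributeChangeNotification) {
            AttributeChangeNotification acn = (AttributeChangeNotification) notification;
            if ("featureEnabled".equals(acn.getAttributeName())) {
                featureEnabled = Boolean.TRUE.equals(acn.getNewValue());
            }
        }
    }

    public static void register() throws Exception {
        // On an application server you may need the container's MBeanServer
        // instead, e.g. via MBeanServerFactory.findMBeanServer(null).
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=UserSettings");
        AttributeChangeNotificationFilter filter = new AttributeChangeNotificationFilter();
        filter.enableAttribute("featureEnabled");
        server.addNotificationListener(name, new UserSettingsListener(), filter, null);
    }
}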

Similar Messages

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS. I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment. This brings to light several questions in my mind:
    1. Which department(s) in a company should "own" the BPC application?
    2. If both, what's SAP's recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP's "Best Practice"?
    We are currently evaluating the best way to manage the system across multiple departments; however, there is no real business ownership of the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.

  • What's the best practice to manage the page file?

    We have one Hyper-V server running Windows 2012 R2 with 128 GB RAM and two drives (C and D). It is set up with "Automatically manage paging file size for all drives". What's the best practice to manage the page file?
    Bob Lin, MCSE & CNE Networking, Internet, Routing, VPN Networking, Internet, Routing, VPN Troubleshooting on http://www.ChicagoTech.net How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen. 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make
    sense on a Hyper-V system to set aside enough space to capture all the memory on the system because only a very small portion of that memory is used by the parent partition. Most of the memory is under control of the individual VMs.
    Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it.  A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file.  Made it so I couldn't
    patch my systems.  Went back in and set the page file to 1-4 GB and have not had any issues since.
    . : | : . : | : . tim

  • General Discussion - Best practice to manage Process order

    Hi Experts,
    Which is the best practice to manage process orders?
    1. Quantity Change - I can make quantity adjustments in R3 and APO.
    2. Source Change - I can make a version change from the order header. Also I can make a source change in APO by selecting a different PPM. Which is the best option?
    3. Re-Read Master Data - Is the best practice to re-read master data from R3 or from APO?
    I feel that for all the above scenarios process orders should always be managed in R3, but I am still wondering why we have the same flexibility in APO too.
    Can

    Hello,
    we are just migrating from 4.6c to ECC 6.0 and I have a couple of workflows to adopt.
    For background steps I defined in the corresponding BOR methods an exception to be fired when no result is available (e.g. no mail address available). Normally, I defined them as temporary errors.
    I activated in the WI outcome section the line for this exception and so the workflow processed this branch when the exception appeared. It worked fine.
    Now, in ECC 6.0, the same workflow gets stuck in the WI. The exception is fired (I can see it in the log as an "Error message"), but the WI is still in status "in process". It doesn't continue with the error outcome branch.
    Is this new logic in ECC 6.0? Do you have any idea what to do? I have used this logic some dozen times in different methods and workflows, and it gives me a headache if I have to change everything ...
    Thank you!
    Best regards,
    Thomas

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1 TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or Tunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised) each library can access items at the same relative path from the library folder so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • Best practices for managing Movies (iPhoto, iMovie) to IPhone

    I am looking for some basic recommendations / best practices on managing the syncing of movies to my iPhone. Most of my movies come either from a digital camcorder into iMovie or from a digital camera into iPhoto.
    Issues:
    1. If I do an export or a share from iPhoto, iMovie, or QuickTime, what format should I select? I've seen 3gp and m4v.
    2. When I add a movie to iTunes, where is it stored? I've seen some folder locations like iMovie Sharing/iTunes. Can I copy them directly there, or should I always add to the library in iTunes?
    3. If I want to get a DVD I own into a format for the iPhone, how might I do that?
    Any other recommendations on best practices are welcome.
    Thanks
    mek

    1. If you type "iphone" or "ipod" into the help feature in iMovie it will tell you how.
    "If you want to download and view one of your iMovie projects to your iPod or iPhone, you first need to send it to iTunes. When you send your project to iTunes, iMovie allows you to create one or more movies of different sizes, depending on the size of the original media that’s in your project. The medium-sized movie is best for viewing on your iPod or iPhone."
    2. Mine appear under "Movies", which is where iMovie puts them automatically.
    3. If you mean movies purchased on DVD, then copying them is illegal and cannot be discussed here.
    From the terms of use of this forum:
    "Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store."

  • Best practice for integrating oracle atg with external web service

    Hi All
    What is the best practice for integrating Oracle ATG with an external web service? Is it using the integration repository, or calling the web service directly from a Java class using a WS client?
    With Thanks & Regards
    Abhishek

    Using the Integration Repository might cause performance overhead depending on the operation you are doing. I have never used the Integration Repository for third-party integration, therefore I am not able to make any comment on this.
    Calling the web service directly as a Java client is an easy approach, and you can use the ATG component framework to support that by making the endpoint, security credentials, etc. configurable properties.
    Cheers
    R
    Edited by: Rajeev_R on Apr 29, 2013 3:49 AM
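    For what it's worth, here is a minimal sketch of the second approach: a plain JAX-WS client wrapped in a component whose endpoint and credentials are injected from configuration (for example an ATG .properties file) rather than hard-coded. The service name, namespace and the OrderServicePort interface are made-up placeholders; in practice you would use the port interface generated from the partner's WSDL.

    import java.net.URL;
    import javax.jws.WebService;
    import javax.xml.namespace.QName;
    import javax.xml.ws.BindingProvider;
    import javax.xml.ws.Service;

    // Placeholder for the JAX-WS port interface generated from the real WSDL.
    @WebService
    interface OrderServicePort {
        String submitOrder(String orderXml);
    }

    public class ExternalOrderServiceClient {

        // All of these would be set from a .properties file per environment.
        private String wsdlUrl;      // e.g. https://partner.example.com/orders?wsdl (placeholder)
        private String endpointUrl;  // runtime endpoint, overridable per environment
        private String username;
        private String password;

        public void setWsdlUrl(String wsdlUrl)         { this.wsdlUrl = wsdlUrl; }
        public void setEndpointUrl(String endpointUrl) { this.endpointUrl = endpointUrl; }
        public void setUsername(String username)       { this.username = username; }
        public void setPassword(String password)       { this.password = password; }

        // Builds a port whose endpoint and HTTP credentials come from configuration.
        public OrderServicePort createPort() throws Exception {
            Service service = Service.create(new URL(wsdlUrl),
                    new QName("http://partner.example.com/orders", "OrderService"));
            OrderServicePort port = service.getPort(OrderServicePort.class);
            BindingProvider bp = (BindingProvider) port;
            bp.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpointUrl);
            bp.getRequestContext().put(BindingProvider.USERNAME_PROPERTY, username);
            bp.getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, password);
            return port;
        }
    }

    Switching between environments (dev/QA/prod) then only means changing the properties files, not the code.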

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
    versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
    each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select what platform the app is meant for. I've done that, but since we have a template for the vbscript wrapper and it's a standard process, I believe this way is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TSes are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best Practice in V7.0 : Issues with Sales Planning and Reporting

    I am trying to install the SAP Best Practices for BPC 5.1 on SAP BPC 7.0 SP04. I have done this as I cannot find any Best Practice documents for version 7 as yet.
    I have managed to get through the Administration setup and most of the BPC Administration Configuration Guide, however I am having a problem with 7.4 Running a Data Management Package - Import on page 32 of 36. This step involves uploading a data file Demo_Revenue_Data.txt into BPC.
    The upload fails with the error "Invalid dimension ACCOUNT in lookup".
    I believe that this error may be driven by a previous step, 6.4 Creating Script Logic, where the logic for the BP_Sales application was required.
    My question is twofold in that I need to determine:
    1. Has anyone else tried the Best Practices for BPC 5.x in BPC 7.0?
    2. Does anyone know how to overcome the error when uploading the Demo Revenue data into BPC?
    Edited by: Kevin West on Jul 8, 2009 2:03 PM

    Hi,
    The BPC best practices documents for version 5 also work for 7.0, because 7.0 is just an update of 5.x.
    Running an Import involves logic only if you run the package with the Run Default Logic option enabled.
    Your issue seems to be related to mapping, which means you have to check your Transformation and Conversion files.
    In any case, the best practices documents will not give you information about how to build Transformation and Conversion files.
    You should follow an SAP BPC training; it will help you build your application more easily and quickly.
    Regards
    Sorin Radulescu

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
              I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at least once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
              The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
              I'm using Weblogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
              For more discussion, see my post today on topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa 8.1 migration white-paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
              Tom
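    For anyone on a newer stack, here is a minimal EJB 3-style sketch of one of the durable topic subscriber MDBs. The activation-config property names follow JMS 2.0 / EJB 3.2; on WebLogic 8.1 the equivalent settings go into ejb-jar.xml and weblogic-ejb-jar.xml instead of annotations, and the destination JNDI name, client ID and subscription name below are placeholders.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "jms/BusinessEventTopic"),
        @ActivationConfigProperty(propertyName = "subscriptionDurability",
                                  propertyValue = "Durable"),
        @ActivationConfigProperty(propertyName = "clientId",
                                  propertyValue = "clusterB-subscriber"),
        @ActivationConfigProperty(propertyName = "subscriptionName",
                                  propertyValue = "businessEvents")
    })
    public class BusinessEventMDB implements MessageListener {

        public void onMessage(Message message) {
            // Persist the event on this cluster's side.
            // With at-least-once delivery duplicates are possible, so keep this
            // handler idempotent (e.g. key the insert on a business/message ID).
        }
    }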

  • Best practice for management of Portal personalisation

    Hi
    I've been working in our sandpit client figuring out all the portal personalisation and backend/frontend customisation for ESS (with some great help from SDN). However, I'm not clear on what best practice is when doing this for 'real'.
    I've read in places that objects to be personalised should be copied first. Elsewhere I've read about creating a new portal desktop for changes to the default framework page, etc.
    What I'm not clear on is the strategy for how you manage and control all the personalisation changes. For example, if you copy an iView and personalise it, where do you copy it to? Do you create "Z" versions of all the folders of the standard content, or something else?
    Do you copy everything at the lowest possible object level, or at the highest?
    What are the implications of applying support or enhancement packages?
    We're going pretty vanilla to begin with, in that there will just be a single ESS role, so things should be fairly simple from that point of view, but I've trawled all around SDN without being able to find clear guidance on the overall best practice process for personalisation.
    Can anyone help?
    Regards
    Phil
    Edited by: Phil Martin on Feb 15, 2010 6:47 PM

    Hi Phil,
    To keep things organized I would create a new folder (so don't use the desktop one). The new folder should sit under Portal Content. In that folder create a folder for ESS, then inside that one for iViews (e.g. COMPANY_XYZ >>> ESS >>> iViews), then copy via delta link the iViews you want from the standard SAP folders.
    Then you will have to create another folder for your own roles (e.g. ESS >>> Roles). Inside this folder create a new role and add your iViews (again via delta link). Or you could just copy one of the standard SAP roles into this folder and not bother with copying the iViews. What you do depends on how many changes you intend to make; sometimes, if the changes are only small, it is easier to copy the entire SAP role (via delta link) and make your changes.
    You should end up with something like this in your PCD:
    Portal Content
         COMPANY_XYZ
              ESS
                   iViews
                        iView1
                        iView2 etc..
                   Roles
                        Role1
                        Role2 etc...
    Then don't forget to assign your new roles to your users or groups.
    Hope that makes sense.
    BRgds,
    Simon
    Edited by: Simon Kemp on Feb 16, 2010 3:56 PM

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware and RAC before.
    2 Space problems
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4Gb used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually) and the recommended and safest practice going forward (adrci maybe, though I'm not sure how it works with scan listeners).
    I'd welcome advice and commands that could be used to safely clean these up and put a robust mechanism in place for log file management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n-days. Once you remove an active file, it should recreate the file and continue without interruption.

  • What is the best practice to manage versions in XI?

    Hi!
    Are there any good "best practice" ways to manage versions in XI?
    I have a challenging scenario with many legacy systems and many interfaces per legacy system.
    Should I put all the different interfaces for one legacy system under one namespace in the Integration Repository, or should I make a separate namespace for each interface?
    Or are there other approaches I should consider to get an environment that is "easily" maintained?
    br.samuli

    Hi,
    In our project we have defined our own naming conventions/namespaces.
    For instance, we have agreed that we will group all interfaces (from all offices worldwide and different projects) into one single product version.
    This global custom product will in turn be divided into different SWC (Software Components) and SWCV (Software Component versions).
    So for each business scenario we will create separate SWC's and when necessary create new versions of these SWC's.
    This means that each SWC should contain all the required objects to support a complete integration scenario, i.e. inbound/outbound interfaces, data types, message types, business scenarios, etc.
    Regards,
    Rob.

  • Best Practice for managing variables for multiple environments

    I am very new to Java WebDynPro and have a question
    concerning our deployments to Sandbox, Development, QA,
    and Production environments.
    What is the 'best practice' that people use so that if
    you have information specific to each environment you
    don't hard-code it in your Java WebDynPro code.
    I could put the value in a properties file, but how do
    I make that vary per environment? Otherwise I'd still
    have to change the property file for each environment
    and re-deploy. I know there are some configurations on
    the Portal but am not sure if they will work in my case.
    For example, I have a URL that varies based on my
    environment.  I don't want to hard-code and re-compile
    for each environment.  I'd prefer to get that
    information on the fly by knowing which environment I'm
    running in and load the appropriate URL.
    So far the only thing I've found that comes close to
    telling me where I'm running is a Parameter Map, but the
    'key' in the map is the URL rather than a value, and I
    suspect there's a cleaner way to get something like that.
    I used Eclipse's autosense in Netweaver to discover some
    of the things available in my web context.
    Here's the code I used to get that map:
    TaskBinder.getCurrentTask().getWebContextAdapter().getRequestParameterMap();
    In the forum there is an example that gets the IP address
    of the site you're serving from. It sounds like it is
    going to be, or has been, deprecated (it works on my
    system right now), and I would really rather have
    something like the DNS name, not an IP that could change.
    Here's that code:
    String remoteHost = TaskBinder.getCurrentTask().getWebContextAdapter().getHttpServletRequest().getRemoteHost();
    Thanks in advance for any clues you can throw my way -
    Greg

    Hi Greg:
    I suggest you check the "Software Change Management Guide"; in this guide you can find an explanation of the best practices for working with a development infrastructure.
    This is the link:
    http://help.sap.com/saphelp_erp2005/helpdata/en/83/74c4ce0ed93b4abc6144aafaa1130f/frameset.htm
    Now, to get the IP of your server or the name of your site, you can do the following:
    HttpServletRequest request = ((IWebContextAdapter) WDWebContextAdapter.getWebContextAdapter()).getHttpServletRequest();
    String server_name = request.getServerName();
    String remote_address = request.getRemoteAddr();
    String remote_host = request.getRemoteHost();
    You only need to add servlet.jar under your project properties > Build Path > Libraries.
    Good Luck
    Josué Cruz
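    Building on that, here is one hedged sketch of how the server name could be used to pick up environment-specific values from a properties file on the classpath (the file names, keys and host-name prefixes are placeholders; adjust them to your own naming conventions):

    import java.io.InputStream;
    import java.util.Properties;

    public class EnvironmentConfig {

        // Map the host name (e.g. from request.getServerName()) to an environment id.
        // The prefixes below are placeholders for your own naming convention.
        private static String environmentFor(String serverName) {
            if (serverName.startsWith("prd")) return "production";
            if (serverName.startsWith("qa"))  return "qa";
            if (serverName.startsWith("dev")) return "development";
            return "sandbox";
        }

        // Loads e.g. /config-production.properties from the classpath.
        public static Properties load(String serverName) throws Exception {
            Properties props = new Properties();
            String resource = "/config-" + environmentFor(serverName) + ".properties";
            InputStream in = EnvironmentConfig.class.getResourceAsStream(resource);
            if (in != null) {
                try {
                    props.load(in);
                } finally {
                    in.close();
                }
            }
            return props;
        }
    }

    Then something like EnvironmentConfig.load(request.getServerName()).getProperty("service.url") gives you the right URL at runtime; only the properties files differ between Sandbox, Development, QA and Production, so no recompile is needed.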

  • Best Practice for VPC Domain failover with One M2 per N7K switch and 2 sups

    I have been testing some failover scenarios with 4 Nexus 7000 switches with an M2 and an F2 card in each. Each Nexus has two supervisor modules.
    I have 3 VDCs: Admin, F2 and M2.
    All ports on the M2 are in the M2 VDC and all ports on the F2 are in the F2 VDC.
    All vPCs are connected on the M2 cards, configured in the M2 VDC.
    We have 2 Nexuses representing each "site".
    In one site we have a vPC domain "100"
    The vPC Peer link is connected on ports E1/3 and E1/4 in Port channel 100
    The peer-keepalive is configured to use the management ports. This is patched on both sups into our 3750s (this will eventually be on an out-of-band management switch).
    Please see the diagram.
    There are 2 vPCs, 1 & 2, connected at each site, which represent the virtual port channels that connect back to a pair of 3750Xs (the layer 2 switch icons in the diagram).
    There is also a third vPC that connects the 4 Nexuses together (po172).
    We are stretching vlan 900 across the "sites" and would like to keep spanning tree out of this as much as we can, and minimise outages based on link failures, module failures, switch failures, sup failures etc..
    ONLY the management VLANs (100, 101) are allowed on the port-channel between the 3750s, so VLAN 900 spanning tree shouldn't have to make this decision.
    We are only concerned about layer two for this part of the testing.
    As we are connecting the vPC peer link to only one module in each switch (a single M2), we have configured object tracking as follows:
    n7k-1(config)#track 1 interface ethernet 1/1 line-protocol
    n7k-1(config)#track 2 interface ethernet 1/2 line-protocol
    n7k-1(config)#track 5 interface ethernet 1/5 line-protocol
    n7k-1(config)#track 101 list boolean OR
    n7k-1(config-track)# object 1
    n7k-1(config-track)# object 2
    n7k-1(config-track)# object 5
    n7k-1(config-track)# end
    n7k-1(config)# vpc domain 101
    n7k-1(config-vpc-domain)# track 101
    The other site is the same, just 100 instead of 101.
    We are not tracking port-channel 101, nor the member interfaces of this port channel, as this is the peer link, and apparently tracking upstream interfaces and the peer link is only necessary when you have ONE link and one module per switch.
    As the interfaces we are tracking are member ports of a vPC, is this a chicken-and-egg scenario when checking if these 3 interfaces are up? Or is line-protocol purely layer 1, so that the vPC isn't downing these member ports at layer 2 when it sees a local vPC domain failure, causing the track to fail?
    I see most people are monitoring upstream layer 3 ports that connect back to a core. What about what we are doing: monitoring upstream (the 3750s) and downstream layer 2 (the other site) interfaces that are part of the very vPC we are trying to protect?
    We wanted all 3 of these to be down; for example, if the local M2 card failed, the keepalive would send the message to the remote peer to take over.
    What are the best practices here? Which objects should we be tracking? Should we also track the peer-link port-channel 101?
    We saw minimal outages using this design: when reloading the M2 modules, usually 1-3 pings were lost between the laptops in the different sites across the stretched VLAN. Obviously there were no outages when breaking any link in a vPC.
    Any wisdom would be greatly appreciated.
    Nick

    Nick,
    I was not talking about the mgmt0 interface. The VLAN that you are testing will have a link blocked on the port-channel between the two 3750s if the root is on the Nexus vPC pair.
    Logically your topology is like this:
        |                         |
        |       Nexus Pair        |
    3750-1-------------------3750-2
    Since you have this triangle setup one of the links will be in blocking state for any vlan configured on these devices.
    When you are talking about vPC and L3, are you talking about L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer-link and forming an adjacency with an upstream router over L2 links is not recommended. The following link should give you an idea of what I am talking about here:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna
