Best practice to handle class definitions among storage-enabled nodes

We have a common set of cache servers that are shared among various applications. A common problem we face on deployment is a missing class definition for a class newly introduced by one of the application nodes. Are there any practical approaches / best practices to address this problem?

Is it the cache servers themselves or your application servers that are having problems loading the classes?
In order to dynamically add classes (in our case scripts that compile to Java byte code) we are considering using a class loader that picks up classes from a Coherence cache. I am, however, not so sure how/if this would work for the cache servers themselves, if that is your problem.
Anyhow, a simplistic cache class loader may look something like this:
import com.tangosol.net.CacheFactory;

/**
 * This trivial class loader searches a specified Coherence cache for classes to load. The classes are assumed
 * to be stored as arrays of bytes keyed with the "binary name" of the class (com.zzz.xxx).
 * It is probably a good idea to decide on some convention for how binary names are structured when stored in the
 * cache. For example the first three parts of the binary name (com.zzz.xxx in the example) could be the
 * "application name" and this could be used by a partitioning strategy to ensure that all classes associated with
 * a specific application are stored in the same partition and this way can be updated atomically by a processor or
 * transaction! This kind of partitioning policy also turns class loading into a "scalable" query since each
 * application will only involve one cache node!
 */
public class CacheClassLoader extends ClassLoader {
    public static final String DEFAULT_CLASS_CACHE_NAME = "ClassCache";
    private final String classCacheName;

    public CacheClassLoader() {
        this(DEFAULT_CLASS_CACHE_NAME);
    }

    public CacheClassLoader(String classCacheName) {
        this.classCacheName = classCacheName;
    }

    public CacheClassLoader(ClassLoader parent, String classCacheName) {
        super(parent);
        this.classCacheName = classCacheName;
    }

    // Note: a more robust implementation would override findClass() instead,
    // leaving the usual parent-first delegation in loadClass() intact.
    @Override
    public Class<?> loadClass(String className) throws ClassNotFoundException {
        // Reuse the class if this loader has already defined it
        Class<?> loaded = findLoadedClass(className);
        if (loaded != null) {
            return loaded;
        }
        byte[] bytes = (byte[]) CacheFactory.getCache(classCacheName).get(className);
        if (bytes == null) {
            // Not in the cache - delegate to the parent class loader
            return super.loadClass(className);
        }
        return defineClass(className, bytes, 0, bytes.length);
    }
}
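To sketch the partitioning idea from the class comment above: a custom strategy could hash only the "application name" prefix of the key, so that all classes belonging to one application land in the same partition. This is only a sketch - it assumes Coherence's KeyPartitioningStrategy SPI (3.x-era signatures, which may differ in other releases), and the strategy would still have to be configured on the cache service:

import com.tangosol.net.PartitionedService;
import com.tangosol.net.partition.KeyPartitioningStrategy;

/**
 * Sketch: partition class-cache keys by their "application name" prefix
 * (the first three parts of the binary name) so that all classes belonging
 * to one application end up in the same partition.
 */
public class ApplicationNamePartitioningStrategy implements KeyPartitioningStrategy {
    private PartitionedService service;

    public void init(PartitionedService service) {
        this.service = service;
    }

    public int getKeyPartition(Object key) {
        String binaryName = (String) key;
        String[] parts = binaryName.split("\\.");
        StringBuilder appName = new StringBuilder();
        // Use at most the first three parts of the binary name as the "application name"
        for (int i = 0; i < Math.min(3, parts.length); i++) {
            appName.append(parts[i]).append('.');
        }
        int count = service.getPartitionCount();
        int hash = appName.toString().hashCode();
        return (hash % count + count) % count; // non-negative partition index
    }
}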
And a simple "loader" that puts the classes in a JAR file into the cache may look like this:
import com.tangosol.net.CacheFactory;

import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

/**
 * This class loads the classes in a JAR file into a code cache.
 */
public class JarToCacheLoader {
    private final String classCacheName;

    public JarToCacheLoader(String classCacheName) {
        this.classCacheName = classCacheName;
    }

    public JarToCacheLoader() {
        this(CacheClassLoader.DEFAULT_CLASS_CACHE_NAME);
    }

    public void loadClassFiles(String jarFileName) throws IOException {
        JarFile jarFile = new JarFile(jarFileName);
        System.out.println("Cache size = " + CacheFactory.getCache(classCacheName).size());
        for (Enumeration<JarEntry> entries = jarFile.entries(); entries.hasMoreElements();) {
            final JarEntry entry = entries.nextElement();
            if (!entry.isDirectory() && entry.getName().endsWith(".class")) {
                final InputStream inputStream = jarFile.getInputStream(entry);
                final long size = entry.getSize();
                int totalRead = 0;
                int read;
                byte[] bytes = new byte[(int) size];
                do {
                    read = inputStream.read(bytes, totalRead, bytes.length - totalRead);
                    if (read > 0) {
                        totalRead += read;
                    }
                } while (read > 0 && totalRead < size);
                if (totalRead != size) {
                    System.out.println(entry.getName() + " failed to load completely, " + size + ", " + totalRead);
                } else {
                    // Convert the entry name (com/zzz/xxx/Foo.class) into the
                    // binary class name (com.zzz.xxx.Foo) used as the cache key
                    final String className = entry.getName()
                            .substring(0, entry.getName().length() - ".class".length())
                            .replace('/', '.');
                    System.out.println(className);
                    CacheFactory.getCache(classCacheName).put(className, bytes);
                }
                inputStream.close();
            }
        }
    }

    public static void main(String[] args) {
        JarToCacheLoader loader = new JarToCacheLoader();
        for (String jarFileName : args) {
            try {
                loader.loadClassFiles(jarFileName);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
Standard disclaimer - this is prototype code, use at your own risk :-)
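Putting the two pieces together, an end-to-end run might look roughly like this (again only a sketch - the JAR file and class names below are placeholders):

public class ClassCacheDemo {
    public static void main(String[] args) throws Exception {
        // Publisher side: push the classes in a JAR into the cache
        // (the JAR file name is a placeholder).
        new JarToCacheLoader().loadClassFiles("my-app-classes.jar");

        // Consumer side: load and instantiate one of those classes by its
        // binary name (the class name is a placeholder).
        Class<?> clazz = new CacheClassLoader().loadClass("com.zzz.xxx.SomeScript");
        Object instance = clazz.newInstance();
        System.out.println("Loaded " + instance.getClass().getName() + " from the cache");
    }
}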
/Magnus

Similar Messages

  • What is the best practice to handle JPA methods in JSF app?

I am building a JSF-JPA web app (no EJB).
I have several methods that have JPA QL inside.
Because of that I have to put those methods inside JSF beans to inject the EntityManagerFactory (am I right about this?).
And I do want to separate those methods from the regular JSF beans which are used by page authors.
And I may need to use them in different JSF managed beans.
My question is: what is the best practice to handle that?
I. Write one or a few separate JSF beans and inject them into the regular beans?
II. Write one or a few separate JSF beans and access them from the regular beans using FacesContext?
III. Something else?
Waiting to hear your opinions.

You can create named queries on your entities themselves and then just call entityMgr.createNamedQuery("nameOfQuery").
    Normally, we put these named queries in the class of the entity which will be returned. This allows for all information pertaining to a given entity and all ways of accessing that entity (except em.find() and stuff, of course) to be in one place. As long as the entity is defined in your persistence.xml file, any named queries which reside on that entity will be available through the EntityManager.
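For example, a named query on an entity might look like this (just a sketch - the entity, query name, and field are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;

// Hypothetical entity that carries its own named query.
@Entity
@NamedQuery(name = "Customer.findByName",
            query = "SELECT c FROM Customer c WHERE c.name = :name")
public class Customer {
    @Id
    private Long id;
    private String name;
}

Elsewhere, given an EntityManager em, you would call:

em.createNamedQuery("Customer.findByName").setParameter("name", "Smith").getResultList();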
    As for the EntityManagerFactory, we normally create an application scope bean which holds the factory itself (because this is a heavy-weight object) and then just get all EntityManager instances from that by injecting this bean into whatever needs it. For example, I might have:
//emfBB is the injected app scope bean which holds the entity manager factory.
private EmfBB emfBB;

private void lookupSomeData() {
    EntityManager em = this.getEmfBB().getEmf().createEntityManager();
    // ... run queries with em ...
}
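The EmfBB bean itself (not shown above) might be as simple as this sketch - the class is hypothetical and the persistence unit name is a placeholder:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Hypothetical application scope bean holding the heavy-weight factory.
public class EmfBB {
    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("myPersistenceUnit");

    public EntityManagerFactory getEmf() {
        return emf;
    }
}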
I hope this answers your question.
    ~Zack

  • Database Log File becomes very big, What's the best practice to handle it?

The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice to handle this issue?
    Should I Shrink the Database?
I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it only gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get the transaction log file back into normal shape:
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
      The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
3.) Schedule log backups every 15 minutes.
Finke Xie wrote:
> Should I Shrink the Database?
"NEVER SHRINK DATA FILES" - shrink only the log file.
    Thanks
    Mush

  • Best practice to develop the news web part to retrieve news data across the farms

    Hi,
We have developed a News web part. It pulls news from other SharePoint farms and displays it in the site. Currently we are using the Secure Store Service to connect to each farm to retrieve the news from it.
The issue with this approach is that every time a user hits the page it hits the Secure Store Service. There are almost 80K users who will be using this web part. Moreover, this web part connects to multiple sites for news from different divisions. What is the best practice to develop such a web part without consuming server resources by hitting the server repeatedly, given that the news does not change very often?
    Regards
    Rajaniesh

    Hi,
According to your description, my understanding is that you want to know the best way to handle retrieving data across SharePoint farms at scale.
If you are developing a custom web part to retrieve data from the other farm, I suggest you first create a custom timer job that fetches the data hourly in the backend and stores it in a list. Then you can create a web part that links to the list to display the data.
For cross-farm data access, I suggest you create a custom web service.
Also, you can use AJAX to display the web part data asynchronously. It will improve performance and reduce the load on the server.
    Here are some detailed articles for your reference:
    Create and Deploy Custom Timer Job Definition in SharePoint Programatically
    Creating a Custom ASP.NET Web Service
    Create asynchronous web parts for Sharepoint
    Thanks
    Best Regards
    Forum Support
    Jerry Guo
    TechNet Community Support

  • Best practices for making the end result web help printable

    Hi all, using TCS3 Win 7 64 bit.  All patched and up to date.
    I was wondering what the best practices are for the following scenario:
    I am authoring in Frame, link by reference into RH.
    I use Frame to generate PDFs and RH to generate webhelp.
    I have tons of conditional text which ultimately produce four separate versions of PDFs as well as online help - I handle these codes in FM and pull them into RH.
    I use a css on all pages of my RH to make it 'look' right.
We now need to add the ability for end users to print the webhelp - outside of just CTRL+P, because a) that cuts off the larger images and b) it doesn't show the header, footer, logo, date, etc. (the stuff that is in the master pages in FM).
    My thought is doing the following:
    Adding four sentences (one for each condition) in the FM book on the first page. Each one would be coded for audience A, B, C, or D (each of which require separate PDFs) as well as coded with ONLINE so that they don't show up in my printed PDFs that I generate out of Frame. Once the PDFs are generated, I would add a hyperlink in RH (manually) to each sentence and link the associated PDF (this seems to add the PDF file to the baggage files in RH). Then when I generate my RH webhelp, it would show the link, with the PDF, correctly based on the condition of the user looking at the help.
    My questions are as follows:
    1- This seems more complicated than it needs to be. Is it?
2- I would have to manually update every single hyperlink each time I update my FM book, because I am single-sourcing out of Frame and I am unable (as far as I can tell) to link a PDF within the Frame doc. I update the entire book (over 1500 pages) once every 6 weeks, so while this wouldn't be a common occurrence, it would happen regularly and would be manual (as far as I can tell)?
    3- Eventually, I would have countless PDFs inside RH. I assume this will eventually impact performance. So this also doesn't seem ideal?
    If anyone has thoughts/suggestions on a simpler way or better way to do this, I'd certainly appreciate it. I have watched the Adobe TV tutorial on adding a master page but that seems to remove the ability to use a css across all my topics and it also requires the manual addition of a manual hyperlink to the PDF file, so that is what I am proposing above, anyway (not sure the benefit, therefore).
    Thanks in advance,
    Adriana

Anything other than CTRL + P is going to create a lot of work, so perhaps I can comment on what you see as the drawbacks to that.
    a)that cuts off the larger images and b)it doesn't show header, footer,
    logo, date, etc. (stuff that is in the master pages in FM).
    Larger images.
I simply make a point of keeping my image sizes down to a size that works. It's not a problem for me, but that doesn't mean it will work for you. All I am doing here is suggesting you review how big a problem that would be.
    Master Page Details
    I have to preface this with the statement that I don't work with FM. The details you refer to print when they are in RoboHelp master pages. Perhaps one of the FM users here can comment on how to get FM master pages to come through.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best practice on extending the SIEBEL data model

Can anyone point me to a reference document or provide, from experience, simple best practices for extending the Siebel data model for business-unique data? Basically I am looking for some simple rules - based on either use-case characteristics (need to sort and filter by, need to update frequently, ...) or data characteristics (transient, changes frequently, ...) - to tell me whether I should extend the tables, leverage the 'X' tables, or do something else.
    Preferably they would be prescriptive and tell me the limits of the different options from a use perspective.
    Thanks

    Accepting the given that Siebel's vanilla data model will always work best, here are some things to keep in mind if you need to add something to meet a process that the business is unwilling to adapt:
    1) Avoid re-using existing business component fields and table columns that you don't need for their original purpose. This is a dangerous practice that is likely to haunt you at upgrade time, or (worse yet) might be linked to some mysterious out-of-the-box automation that you don't know about because it is hidden in class-specific user properties.
    2) Be aware that X tables add a join to your queries, so if you are mapping one business component field to ATTRIB_01 and adding it to your list applets, you are potentially putting an unnecessary load on your database. X tables are best used for fields that are going to be displayed in only one or two places, so the join would not normally be included in your queries.
    3) Always use a prefix (usually X_ ) to denote extension columns when you do create them.
    4) Don't forget to map EIM extensions to the extension columns you create. You do not want to have to go through a schema change and release cycle just because the business wants you to import some data to your extension column.
    5) Consider whether you need a conversion to populate the new column in existing database records, especially if you are configuring a default value in your extension column.
    6) During upgrades, take the time to re-evalute your need for the extension column, taking into account the inevitable enhancements to the vanilla data model. For example, you may find, as we did, that the new version of the S_ADDR_ORG table had an ADDR_LINE_3 column, and our X_ADDR_ADDR3 column was no longer necessary. (Of course, re-configuring all your business components to use the new vanilla column can also be quite an ordeal.)
    Good luck!
    Jim

  • Best Practice for Handling Timeout

    Hi all,
    What is the best practice for handling timeout in a web container?
    By the way, we have been using weblogic 10.3
    thanks

    Are you asking about this specific to web services, or just the JEE container in general for a web application?
    If a web application, Frank Nimphius recently blogged a JEE filter example that you could use too:
    http://thepeninsulasedge.com/frank_nimphius/2007/08/22/adf-faces-detecting-and-handling-user-session-expiry/
Ignore the tailor-fitted ADF options; this should work for any app.
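If you want a framework-neutral starting point along those lines, a minimal session-expiry filter might look roughly like this (a sketch using the standard Servlet API; the redirect target is a placeholder):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch of a session-expiry filter: if the browser sent a session id
// but the container no longer considers it valid, the session has timed out.
public class SessionTimeoutFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if (request.getRequestedSessionId() != null
                && !request.isRequestedSessionIdValid()) {
            // Session timed out - redirect to a (placeholder) timeout page
            response.sendRedirect(request.getContextPath() + "/sessionExpired.jsp");
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() {
    }
}

You would then map the filter to your protected URL patterns in web.xml as usual.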
    CM.

  • What is the best practice for using the Calendar control with the Dispatcher?

    It seems as if the Dispatcher is restricting access to the Query Builder (/bin/querybuilder.json) as a best practice regarding security.  However, the Calendar relies on this endpoint to build the events for the calendar.  On Author / Publish this works fine but once we place the Dispatcher in front, the Calendar no longer works.  We've noticed the same behavior on the Geometrixx site.
    What is the best practice for using the Calendar control with Dispatcher?
    Thanks in advance.
    Scott

    Not sure what exactly you are asking but Muse handles the different orientations nicely without having to do anything.
    Example: http://www.cariboowoodshop.com/wood-shop.html

  • Best practice to handle contents greater than 1 TB

    Hello All,
I am using SharePoint 2010 and I need to know the best practice to handle content greater than 1 TB.
Specifics:
1) The content will be a collection of images (JPEG format), and collectively the sizes can go above 1 TB, up to 10 TB or more.
2) Images will be uploaded to SharePoint through a web service.
So are any of the options below suitable? If not, is there another option?
    - Document Library
    - Document Center
    - Record Center
    - Asset Library
    - Picture Library
    Thanks in advance ...

There are several aspects to this.
    Large lists:
    http://technet.microsoft.com/en-gb/library/cc262813%28v=office.14%29.aspx
A blog summarising large databases is here:
    http://blogs.msdn.com/b/pandrew/archive/2011/07/08/articles-about-scaling-sharepoint-to-large-content-database-capacity.aspx
    Boundaries and limits:
    http://technet.microsoft.com/en-us/library/cc262787%28v=office.14%29.aspx#ContentDB
If at all possible, make your web service clever enough to split content over multiple site collections, to allow you to have smaller individual databases.
It can be done, but you need to do a lot of reading to do it well. You'll also need a good DBA team to maintain the environment.

• Best practice to integrate external (ERP or database etc.) eCommerce data into CQ

    Hi Guys,
I am referring to the Geometrixx-Outdoors project for building eCommerce functionality in our project.
Currently we are integrating with an ERP system to fetch the product details.
Now I need to store all the product data from the ERP system in our CRX repository under the etc/commerce/products/<myproject> folder structure.
Do I need to create a CSV file structure as explained in the geometrixx-outdoors project and place it exactly the way it is described in the documentation? Will the CSV importer then import the data into CRX and create the sling:Folder and nt:unstructured nodes in CRX?
Please guide me: what is the best practice to integrate external eCommerce data into a CQ system for building eCommerce projects?
    Are there any other best practices ?
    Your help in this regard is really appreciated.
    Thanks

    Hi Kresten,
    Thanks for your reply.
    I went through the eCommerce framework link which you sent.
Can you give me a few of the steps to utilise the eCommerce framework to pull all the product information into our CRX repository, and also explain how to synchronise between the ERP system and the CRX data? Is there a scheduling mechanism to pull the data from our ERP system and sync it with the CRX repository?
    Thanks

  • Best Practice for enhancing the SAP delivered standard WD ABAP application

    Hi,
I am new to Web Dynpro ABAP.
I need to enhance an SAP-delivered standard Web Dynpro component (a complex component with business objects & POWL).
Kindly let me know the best practice for enhancing standard WD ABAP - option 1 or 2 below:
1) Copy the component to create a "Z" version and make the changes there, or
2) Enhance the standard component directly, without making a "Z" copy.
    Regards,
    NS

    Hi NS,
If it is a standard component, it is better to enhance the component rather than copying it into a Z component.
If there is any issue within a standard component, SAP supports it through notes and OSS messages; if it is a Z component, SAP doesn't support it.
If there is any upgrade of business packages, changes will be applied to the standard component but not to Z components, where we could miss them.
Further, since a standard component might be used in many places, getting changes to be reflected everywhere might be difficult with a Z component.
    Regards,
    Harsha

  • What's the best practice to manage the page file?

We have one Hyper-V server running Windows 2012 R2 with 128 GB RAM and two drives (C and D). It is set to "Automatically manage paging file size for all drives". What's the best practice to manage the page file?
Bob Lin, MCSE & CNE - Networking, Internet, Routing, VPN troubleshooting on http://www.ChicagoTech.net - How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

For Hyper-V systems, my general recommendation is to set the page file to 1-4 GB. This allows for a mini-dump should something happen. 99.99% of the time, Microsoft will be able to figure out the cause of the problem from the mini-dump. It does not make sense on a Hyper-V system to set aside enough space to capture all the memory on the system, because only a very small portion of that memory is used by the parent partition; most of the memory is under the control of the individual VMs.
Yes, I had one of the Hyper-V product group tell me that I should let Windows manage it. A couple of times I saw space on my system disk disappear because the algorithm decided it wanted all the space for the page file, making it so I couldn't patch my systems. I went back in and set the page file to 1-4 GB and have not had any issues since.
    . : | : . : | : . tim

  • BEST PRACTICE TO PARTITION THE HARD DISK

Can someone please guide me on THE BEST PRACTICE TO PARTITION THE HARD DISK FOR 10G R2 on operating system HP-UX 11?
    Thanks,
    Amol

    I/O speed is a basic function of number of disk controllers available to read and write, physical speed of the disks, size of the I/O pipe(s) between SAN and server, and the size of the SAN cache, and so on.
    Oracle recommends SAME - Stripe And Mirror Everything. This comes in RAID10 and RAID01 flavours. Ideally you want multiple fibre channels between the server and the SAN. Ideally you want these LUNs from the SAN to be seen as raw devices by the server and use these raw devices as ASM devices - running ASM as the volume manager. Etc.
Performance is not achieved by just partitioning. Or just more memory. Or just a faster CPU. Performance planning and scalability encompass the complete system. All parts. Not just a single aspect like partitioning.
Especially not partitioning, as a partition is simply a "logical reference" to a "piece" of the disk. I/O performance has very little to do with how many pieces you split a single disk into - that is the management part. It is far more important how you stripe and whether you use RAID5 instead of a RAID1 flavour, etc.
    So I'm not sure why you are all uppercase about partitioning....

• Best practices to FAX the BI Publisher reports

    What are the best practices to FAX the BI publisher reports in Oracle Applications?
Also please let me know the advantages and disadvantages of using a CUPS printer/RightFax server.
    Thanks for your help.
    Liyakath

You can use any of the methods you have listed to create XML data. Oracle seeded code uses PL/SQL to generate XML rather than Reports, since Reports technology will not be used in the next version of Applications (aka Fusion).
    Srini

  • Best practices for using the knowledge directory

    Anyone know when it is best to store docs in the Knowledge Directory versus Collab? They are both searchable, but I guess you can publish from the Publisher to the KD. Anyone have any best practices for using the KD or setting up taxonomies in the KD?

    Hi Richard,
    If you need to configure dynamic pricing that may vary by tenant and/or if you want to set up cost drivers that are service item attributes, you should configure Billing Tables in the Demand Management module in 10.0. 
The cost detail functionality in 9.4 will likely be changed to merge with the new pricing feature in 10.0. The current plan is not to bring cost detail into the Service Catalog module.
