Question on Implementation of Cache.

Hi everybody,
I would like to ask the Java community: is there something similar to the Cache object in .NET available in Java?
I know there is the Hashtable collection, which can be used as a cache, but suppose I have stored 500 objects in a Hashtable and 100 of them are no longer referenced by any program; those objects should then be removed from the Hashtable automatically. As far as I know, it currently does not work like that.
Will the Sun people, or anybody else, reply?

It is possible to track references and respond to their lifecycle events - have a look at java.lang.ref for the reference classes. These allow you to bind things to (e.g.) HashMaps whilst still leaving them eligible for garbage collection. For this requirement, though, I don't think you'd need to go overboard using a ReferenceQueue.
In fact, there's an implementation called WeakHashMap that already does something a bit like this - only in reverse, since it is the keys, not the values, that are weakly referenced.
Your cache could use normal objects for keys and map them to WeakReferences; the get method would check each Reference for a null referent and, if found, remove the corresponding mapping. I doubt it would take much effort to implement something straightforward - see the sketch below.
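A minimal sketch of that idea (the class name and generics are illustrative, not from any library):

    import java.lang.ref.WeakReference;
    import java.util.HashMap;
    import java.util.Map;

    // A cache whose values become eligible for GC as soon as no one else
    // holds a strong reference to them.
    public class WeakValueCache<K, V> {
        private final Map<K, WeakReference<V>> map = new HashMap<K, WeakReference<V>>();

        public void put(K key, V value) {
            map.put(key, new WeakReference<V>(value));
        }

        public V get(K key) {
            WeakReference<V> ref = map.get(key);
            if (ref == null) {
                return null;
            }
            V value = ref.get();
            if (value == null) {
                // the referent was garbage-collected; drop the stale mapping
                map.remove(key);
            }
            return value;
        }
    }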
Hope this helps.

Similar Messages

  • Implementing Automatic cache purging

    Hi All,
    I want to implement automatic cache purging using an event polling table in OBIEE.
    I have followed one site, where they asked me to create a table in the database with the following columns:
    1. update_type
    2. update_date
    3. databasename
    4. catalogname
    5. schemaname
    6. tablename
    Here I have one doubt: in my RPD I have two tables which are used in 4 catalogs. So my doubt is, how do I know which particular catalog a table access came from, so that I can populate the catalog names in the backend table?
    If anyone knows, please let me know.
    Thanks
    Sree

    Hi,
    The links below should help you:
    http://obiee101.blogspot.com/2008/03/obiee-manage-cache-part-1.html
    and
    http://oraclebizint.wordpress.com/2007/12/18/oracle-bi-ee-101332-scheduling-cache-purging/
    To purge the cache automatically you have to set the cache persistence time on the tables in the physical layer. There you can specify the time after which you want the cache purged. The steps are provided below:
    1. Double-click the table in the physical layer.
    2. Select the General tab.
    3. Select the Cacheable option.
    4. Select the Cache persistence time option.
    5. Specify the time interval at which you need the cache to be refreshed.
    You have to do the same for all tables whose cache you want to purge.
    Thanks
    Deva

  • Best practice question for implementing a custom component

    I'm implementing a custom component which renders multiple <input type="text" .../> controls as part of it. The examples I've seen that do something similar use the ResponseWriter to generate the markup "by hand" like:
         writer.startElement("input", component);
         writer.writeAttribute("type", "text", null);
         writer.writeAttribute("id", "foo", null);
         writer.writeAttribute("name", "foo", null);
         writer.writeAttribute("value", "hello", null);
         writer.writeAttribute("size", "20", null);
         writer.endElement("input");
    I don't know about anyone else, but I HATE having to write code that manufactures this stuff - seems to me that there are already classes that do this, so why not just use those? For example, the above could be replaced with:
         HtmlInputText textField = new HtmlInputText();
         textField.setId("foo");
         textField.setValue("hello");
         textField.setSize(20);
         // just to be safe, invoke both encodeBegin() and encodeEnd(),
         // though it seems like encodeEnd() actually does the work in this case,
         // but who knows if they might change it at some point
         textField.encodeBegin(context);
         textField.encodeEnd(context);
    So my question is, why does everyone seem to favor the former over the latter? Why not leverage objects that already do the (encoding) work for you?

    Got it!
    Your JSP should have this:
    <h:panelGroup styleClass="jspPanel" id="jspPanel1"></h:panelGroup>
    And your code page ValueChangeListener/ActionListner should have this:
              if (findComponent(getForm1(), "myOutputText") == null) {
                   FacesContext facesCtx = FacesContext.getCurrentInstance();
                   System.out.println("Adding component");
                   HtmlOutputText output =
                        (HtmlOutputText) facesCtx.getApplication().createComponent(
                             HtmlOutputText.COMPONENT_TYPE);
                   output.setId("myOutputText");
                   output.setValue("It works");
                   getJspPanel1().getChildren().add(output);
                   System.out.println("Done");
                   DebugUtil.printTree(FacesContext.getCurrentInstance().getViewRoot(), System.out);
              } else {
                   System.out.println("component already added");
              }
    I just have to figure out this IOException on the closed stream - it probably has to do with [immediate="true"].
    Thanks.
    [9/15/04 13:05:53:505 EDT] 6e436e43 SystemErr R java.io.IOException: Stream closed
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.Throwable.<init>(Throwable.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.Throwable.<init>(Throwable.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.ensureOpen(JspWriterImpl.java:294)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:424)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at org.apache.jasper.runtime.JspWriterImpl.write(JspWriterImpl.java:452)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.faces.component.UIJspPanel$ChildrenListEx.add(UIJspPanel.java:114)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at pagecode.admin.Test.handleListbox1ValueChange(Test.java)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.AccessibleObject.invokeImpl(Native Method)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.AccessibleObject.invokeV(AccessibleObject.java:199)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at java.lang.reflect.Method.invoke(Method.java:252)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.el.MethodBindingImpl.invoke(MethodBindingImpl.java:126)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIInput.broadcast(UIInput.java:492)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:284)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.component.UIViewRoot.processDecodes(UIViewRoot.java:342)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.ApplyRequestValuesPhase.execute(ApplyRequestValuesPhase.java:79)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:200)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:90)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at javax.faces.webapp.FacesServlet.service(FacesServlet.java:197)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictServletInstance.doService(StrictServletInstance.java:110)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet._service(StrictLifecycleServlet.java:174)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.IdleServletState.service(StrictLifecycleServlet.java:313)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.StrictLifecycleServlet.service(StrictLifecycleServlet.java:116)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ServletInstance.service(ServletInstance.java:283)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ValidServletReferenceState.dispatch(ValidServletReferenceState.java:42)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.servlet.ServletInstanceReference.dispatch(ServletInstanceReference.java:40)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.handleWebAppDispatch(WebAppRequestDispatcher.java:948)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.dispatch(WebAppRequestDispatcher.java:530)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:176)
    [9/15/04 13:05:53:536 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srt.WebAppInvoker.doForward(WebAppInvoker.java:79)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srt.WebAppInvoker.handleInvocationHook(WebAppInvoker.java:201)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.cache.invocation.CachedInvocation.handleInvocation(CachedInvocation.java:71)
    [9/15/04 13:05:53:552 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.srp.ServletRequestProcessor.dispatchByURI(ServletRequestProcessor.java:182)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.oselistener.OSEListenerDispatcher.service(OSEListener.java:334)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.webcontainer.http.HttpConnection.handleRequest(HttpConnection.java:56)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.http.HttpConnection.readAndHandleRequest(HttpConnection.java:610)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.http.HttpConnection.run(HttpConnection.java:435)
    [9/15/04 13:05:53:567 EDT] 6e436e43 SystemErr R      at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:593)
    [9/15/04 13:05:56:146 EDT] 6e436e43 SystemOut O Done

  • How to use Java NIO to implement disk cache for serialized java objects

    Hi,
    I have a cache (implemented as a Hashtable, etc.) that contains Java objects (mostly strings) and swaps objects between runtime memory and the disk based on some algorithms. Currently, the reading and writing from the disk is implemented using the java.io.* package, i.e., FileInputStream and FileOutputStream. Essentially, I serialize the Java object and write it to the disk, then deserialize it and give it back to the Hashtable cache.
    The performance of swapping from disk to memory is kinda slow. I have read that memory mapping would improve the performance.
    My idea is to do the following:
    Have one big file mapped to memory. I write the serialized objects to different portions of the file and then read those portions when needed. I can use the MappedByteBuffer for that but then I have the following questions. I will not store objects in the hashtable anymore.
    1. How do I delete things from the cache in the above design i.e. how do I delete portions of a mapped file?
    2. How do I serialize objects using ByteBuffers and then deserialize them? I guess this shouldn't be hard but just want to confirm.
    Do you think this is the right design or should I change? Right now using the old io package, I have a separate file for each object. When using the NIO package, I want to store all objects in a single file in different portions of the file, is that the right way to go?
    As you can see, I am beginner in memory mapped io and need help.

    > Have one big file mapped to memory. I write the serialized objects to different portions of the file and then read those portions when needed. I can use the MappedByteBuffer
    This is a good idea, one that I have worked on. It involves quite a bit of manipulation with temporary buffers and a deep working knowledge of object serialization.
    > 1. How do I delete things from the cache in the above design i.e. how do I delete portions of a mapped file?
    The best way to handle this is a two-step process, cutting the file into two pieces and gluing it back together where the original one is...
    > 2. How do I serialize objects using ByteBuffers and then deserialize them? I guess this shouldn't be hard but just want to confirm.
    It is hard. Wrapping the streams and making the IO work properly is not the challenge, however. The hard part comes in hacking the object streams. The object input/output streams use a ClassDescriptor object which only gets written once / read once. This shouldn't be a problem if you will read/write the entire file at once, but it will bring you grief if you want random access to your objects. You will also need an indexing mechanism to support random access (see the sketch below).
    > Do you think this is the right design or should I change? Right now using the old io package, I have a separate file for each object. When using the NIO package, I want to store all objects in a single file in different portions of the file, is that the right way to go?
    I guess it depends on your needs. Do you require random access to objects? NIO provides some performance gains, but mostly for very large amounts of data (>10M in my experience).
    You can always write all your objects into the same file using normal io techniques and you can still generate an index and achieve random access. It might be easier...
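    To make the per-record point above concrete, here is a minimal sketch that gives each record its own object stream (so every record carries its own class descriptors and can be read in isolation); the class, the fixed-size region, and the (offset, length) index entries are illustrative assumptions, not a complete design:

         import java.io.*;
         import java.nio.ByteBuffer;
         import java.nio.MappedByteBuffer;
         import java.nio.channels.FileChannel;

         public class MappedObjectStore {
             private final MappedByteBuffer buffer;
             private int writePos = 0;

             public MappedObjectStore(File file, int size) throws IOException {
                 RandomAccessFile raf = new RandomAccessFile(file, "rw");
                 buffer = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
             }

             // Serialize with a fresh ObjectOutputStream per record, then copy
             // the bytes into the mapped region at the current write position.
             public int[] write(Object obj) throws IOException {
                 ByteArrayOutputStream bos = new ByteArrayOutputStream();
                 ObjectOutputStream oos = new ObjectOutputStream(bos);
                 oos.writeObject(obj);
                 oos.close();
                 byte[] bytes = bos.toByteArray();
                 int offset = writePos;
                 buffer.position(offset);
                 buffer.put(bytes);
                 writePos += bytes.length;
                 return new int[] { offset, bytes.length }; // the caller's index entry
             }

             public Object read(int offset, int length)
                     throws IOException, ClassNotFoundException {
                 byte[] bytes = new byte[length];
                 // duplicate() so readers don't disturb the writer's position
                 ByteBuffer view = buffer.duplicate();
                 view.position(offset);
                 view.get(bytes);
                 ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bytes));
                 return ois.readObject();
             }
         }

    Deleting a record then just means dropping its index entry; reclaiming the space still needs the compaction step described above.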
    Good luck

  • 5 Dataguard questions on implementation and maintenance

    I have created Oracle10g Dataguard Physical standby. Everything seems to working properly. I switch a log@primary and i see it applied at standby. In OEM the Primary instance is shown as "Primary" and Dataguard Normal.
    Questions
    =======
    1. I used a cold backup to create the physical standby, where I copied just the datafiles. I DID NOT copy the online logfiles from the primary DB. Is this OK?
    The reason I am asking is that I am seeing the following in the alert log (note the "No standby redo logfiles created" line below). Can this message in the alert log be ignored?
    Fri Mar 14 16:11:55 2008
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[5]: Assigned to RFS process 9652
    RFS[5]: Identified database type as 'physical standby'
    Fri Mar 14 16:11:57 2008
    Media Recovery Waiting for thread 1 sequence 140 (in transit)
    Fri Mar 14 16:12:09 2008
    RFS[4]: Archived Log: '/b03/archive/PRI/arch1_140_649240729.arc'
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[4]: No standby redo logfiles created
    Fri Mar 14 16:12:12 2008
    Media Recovery Log /b03/archive/PRI/arch1_140_649240729.arc
    Media Recovery Waiting for thread 1 sequence 141 (in transit)
    Fri Mar 14 16:12:58 2008
    2. The physical standby was created in the default mode "MAX PERFORMANCE", but I followed the manual and I realize I created standby redo logs at the primary DB.
    Is it true that we do not need standby redo logs for MAX PERFORMANCE mode?
    3. What is the quickest way to create a physical standby, taking into consideration the downtime of the primary database?
    4. Can we set up Dataguard for a live running production database (assuming prod is all set with the prepare steps as laid out in the manual) without downtime?
    5. I hear Dataguard management involves a lot of shell scripting for log shipping, failover, switchover, etc. Looking at the 10g manual, all these appear to be handled automatically if configured. Is it true that all the Dataguard functionality is automatic and does not need shell scripting?
    These are the things that came to mind. Please list any others you can think of for a Dataguard implementation. Any help greatly appreciated.
    Thanks and have a great time.
    S~

    > My question is, I haven't created either online logfiles or standby logfiles on the secondary database, so how is the RFS process applying the logs to the standby database?
    If you do not create standby log files on the secondary database, then the RFS process will apply logs from the primary DB's archived redo logs; refer to 5.1 Introduction to Redo Transport Services (http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm).
    > Also I see entries in v$log@standby; since I haven't created log files @standby, where is this info coming from?
    It will be through the entry in the standby DB parameter file:
    LOG_FILE_NAME_CONVERT='/arch1/chicago/','/arch1/boston/','/arch2/chicago/','/arch2/boston/'
    Thanks

  • Question on implementing a Berkeley DB Java Edition solution

    Hi,
    I am trying to assess whether Berkeley DB JE is the solution to my problem. I have a couple of RDBMS tables that store user security Groups and Resource relationships recursively in a tree data structure.
    I have an online security API for online applications that supports read-only functionality, and an admin security API for business system applications with write/update functionality. Currently, the online API has to recursively query the database to build the complete group and resource tree structure for a user trying to log in, and the performance is horrible for a user with large Group sets, since it queries the database repeatedly, or when several concurrent users try to log in.
    I was thinking of loading the two tables into memory using Berkeley DB JE during application start-up to remove the overhead of connecting to an RDBMS. But I want to know: what can I do to keep Berkeley DB in sync with the RDBMS if any updates are made to the RDBMS by the admin API?
    I'm using Oracle 10g, and querying using 'connect by' didn't help either.
    Any advice, suggestion or alternate solution would be helpful
    Mo

    Mo,
    > In my case a single application is launched on different app servers and I'm planning on using JE as a RDBMS cache, but since JE log files will be created on each app server, over time they will be out of sync as the app on each server will be updating their individual JE log file.
    Yes, in this scenario, there are multiple JE read/write environments (which would be called databases, in Oracle terms), and they are independent. There is no built-in way to keep multiple read/write environments in sync. I do wonder why that's an issue, because it sounds like these JE environments are all fed from the common RDBMS server, and therefore would hold the same data, but that's in your application realm.
    JE allows only a single process to write to the environment. In a
    JavaEE environment we recommend implementing a singleton service that
    is used to access the JE environment, for example, a stateless session
    bean that provides your application specific "data service". Other
    processes (or other J2EE components) should access JE via this data
    service. The data service should provide high level application
    operations, not individual get/put operations, to reduce communication
    overhead. In your case, this doesn't sound sufficient because
    you're using multiple app servers.
    It is possible for other processes to open the JE environment
    read-only, but these are very limited in capability. Changes made by
    the writer process are not visible in the reader process, unless the
    reader process closes and re-opens the Environment, which is
    expensive. In addition, if the processes are on different machines
    you'll need to use a network file system of some kind to access the JE
    environment directory. You can use a network file system for the JE
    environment, but you'll get much better performance by putting it on a
    local disk where the singleton service is running. See the caveats in
    our FAQ if you use a network file system.
    A High Availability (HA) version of JE, i.e., replication, is planned
    for a future release. This will allow multiple reader processes to
    use JE that do not have the limitations I described above, but still
    only a single writer process. Fail over and load balancing (for
    scalability) can be implemented using the HA version.
    > ... then maybe I could have a new table in the RDBMS that keeps track of the set of record keys that have been added/updated by the admin API, and have the JE application query that table from time to time to see if something changed and update its JE store using the DbLoad functionality ... so this is more of a hybrid solution.
    Yes, this is a possibility. DbLoad is a bulk load though, and in your case you may need to think about how to add only the recent changes, and whether you're inserting new data, or also updating existing records.
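    As a rough illustration of the singleton data service described above (the class name, database name, and the high-level lookup operation are hypothetical, and other lifecycle details are omitted):

         import com.sleepycat.je.*;
         import java.io.File;

         // A process-wide singleton that owns the JE Environment; all other
         // components go through this service instead of opening JE themselves.
         public final class GroupDataService {
             private static GroupDataService instance;
             private final Environment env;
             private final Database groups;

             private GroupDataService(File envDir) throws DatabaseException {
                 EnvironmentConfig envConfig = new EnvironmentConfig();
                 envConfig.setAllowCreate(true);
                 envConfig.setTransactional(true);
                 env = new Environment(envDir, envConfig);

                 DatabaseConfig dbConfig = new DatabaseConfig();
                 dbConfig.setAllowCreate(true);
                 dbConfig.setTransactional(true);
                 groups = env.openDatabase(null, "groups", dbConfig);
             }

             public static synchronized GroupDataService getInstance(File envDir)
                     throws DatabaseException {
                 if (instance == null) {
                     instance = new GroupDataService(envDir);
                 }
                 return instance;
             }

             // A high-level application operation, not a raw get/put,
             // to keep communication overhead down as recommended above.
             public byte[] lookupGroupTree(String userId) throws DatabaseException {
                 DatabaseEntry key = new DatabaseEntry(userId.getBytes());
                 DatabaseEntry data = new DatabaseEntry();
                 OperationStatus status = groups.get(null, key, data, LockMode.DEFAULT);
                 return status == OperationStatus.SUCCESS ? data.getData() : null;
             }
         }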
    Regards,
    Linda

  • Question of Berkeley DB "cache size"

    quote:
    Set the size of the shared memory buffer pool, that is, the size of the cache.
    The cache should be the size of the normal working data set of the application, with some small amount of additional memory for unusual situations. (Note: the working set is not the same as the number of pages accessed simultaneously, and is usually much larger.)
    The default cache size is 256KB, and may not be specified as less than 20KB. Any cache size less than 500MB is automatically increased by 25% to account for buffer pool overhead; cache sizes larger than 500MB are used as specified. The current maximum size of a single cache is 4GB. (All sizes are in powers-of-two, that is, 256KB is 2^18 not 256,000.)
    The database environment's cache size may also be set using the environment's DB_CONFIG file. The syntax of the entry in that file is a single line with the string "set_cachesize", one or more whitespace characters, and the cache size specified in three parts: the gigabytes of cache, the additional bytes of cache, and the number of caches, also separated by whitespace characters. For example, "set_cachesize 2 524288000 3" would create a 2.5GB logical cache, split between three physical caches. Because the DB_CONFIG file is read when the database environment is opened, it will silently overrule configuration done before that time.
    This method configures a database environment, including all threads of control accessing the database environment, not only the operations performed using a specified Environment handle.
    This method may not be called after the environment has been opened. If joining an existing database environment, any information specified to this method will be ignored.
    This method may be called at any time during the life of the application.
    Parameters:
    cacheSize The size of the shared memory buffer pool, that is, the size of the cache.
    The question:
    I have a host whose total memory is 16GB.
    I don't know what this document means.
    What is the maximum cache size that can be set?
    4GB? 16GB?
    Or cacheCount (4) * 4GB = 16GB?
    My Email: [email protected]

    What version of Berkeley DB are you using?
    I'm a little confused about what you are quoting. Most of your quote seems to be from DB_ENV->set_cachesize(), but set_cachesize does not have a parameter named cacheSize. The parameters for set_cachesize are gbytes, bytes and ncache.
    You use set_cachesize to specify the logical cache that you can optionally split into more than one physical region. The maximum size of the logical cache is 4GB and there is only one logical cache. You specify the total size of the logical cache with the gbytes and bytes parameters. If you set ncache to a value greater than 1, you split this logical cache into separate physical regions. So, for example, if you specify (gbytes=2, bytes=0, ncache=2) you will have a logical cache of 2GB that internally is split into 2 separate physical regions of 1GB each.
    You can read more about the memory pool cache in the Reference Guide sections "Selecting a cache size" and "Configuring the memory pool".
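    If you happen to be using the Java API, the same settings are exposed on EnvironmentConfig; a minimal sketch, mirroring the 2GB/2-region example above (the environment path is illustrative):

         import com.sleepycat.db.*;
         import java.io.File;

         public class CacheSizeExample {
             public static void main(String[] args) throws Exception {
                 EnvironmentConfig config = new EnvironmentConfig();
                 config.setAllowCreate(true);
                 config.setInitializeCache(true);
                 // a 2GB logical cache, split into 2 physical regions of 1GB each
                 config.setCacheSize(2L * 1024 * 1024 * 1024);
                 config.setCacheCount(2);
                 Environment env = new Environment(new File("/tmp/dbenv"), config);
                 // ... use the environment ...
                 env.close();
             }
         }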
    If you have other Berkeley DB questions that are not specific to replication, you should direct them to the general Berkeley DB forum where you will have the benefit of a wider set of Berkeley DB experts:
    Berkeley DB
    Paula Bingham
    Oracle

  • Question about implementation of a webservice

    Hi
    I have successfully implemented a webservice call in my Web Dynpro application, but I am not sure if I have done this the right way.
    Scenario: I have a view which displays data for a business partner. These data can be accessed over a webservice. The webservice requires a parameter with the ID of the business partner.
    My solution: I have implemented a method in the component controller (*comp.java).
    public void getBusinessPartnerInfo( java.lang.String businessPartnerCode )  {
        //@@begin getBusinessPartnerInfo()
        service = new BPService();
        Request_GetBPInfo request = new Request_GetBPInfo(service);
        GetBPInfo getBPInfo = new GetBPInfo(service);
        getBPInfo.setCardCode(businessPartnerCode);
        request.setGetBPInfo(getBPInfo);
        try {
            request.execute();
        } catch (WDWSModelExecuteException ex) {
            // handle/log the failed webservice call here
        }
        wdContext.nodeRequest_GetBPInfo().bind(request);
        //@@end
    }
    and I invoke this method from the wdInit method of the same controller.
    public void wdDoInit() {
        //@@begin wdDoInit()
        this.getBusinessPartnerInfo("c1000");
        //@@end
    }
    Therefore the data is available when the view is loaded. But I am asking myself if this is the right way to do it. Wouldn't it be better to call this webservice invocation from the init method of the view?
    Additionally, I do not know at the moment how to access the environment of the portal for information. E.g., the business partner code c1000 is hard-coded at the moment; I would like to read this code from the environment, e.g. the portal. Through which API can I access data from the portal, or better said, from the UME (where I can map a field to the business partner code)?
    Thanks for your answers,
    Thierry

    Hi,
    As far as your first question is concerned, you can access the controller method from the view linked with the controller as follows:
    wdThis.wdGet<Controller Name>Controller().yourMethod();
    I don't know whether you can store generic information such as a BP code specific to a user in EP. Let's see.
    thanks & regards,
    Manoj

  • Question regarding implementation of Portal Service

    Hello,
    I want to create a portal service that calls our R/3 system and comes back with customer master data. For that, I have to hand the user ID over to the portal service.
    The interface looks like this:
    import com.sapportals.portal.prt.service.IService;
    import com.lgs.model.CustomerDataBean;

    public interface IR3CustDataService extends IService {
        public static final String KEY = "R3CustDataService";

        // returns an object with all customer master data from R/3
        public CustomerDataBean getCustomerData(String userid);
    }
    The implementation of the method in the corresponding class is:
    CustomerDataBean cdb = new CustomerDataBean();

    public CustomerDataBean getCustomerData(String userid) {
        return cdb;
    }
    Now I would implement the program logic (accessing R/3 and filling the CustomerDataBean) in the init method of the portal service class:

    public void init(IServiceContext serviceContext) {
        mm_serviceContext = serviceContext;
        // implementation of the program logic; usage of userid necessary
    }
    My question is now: how can I use the String userid in the init method? How can I hand the userid over to the init method so that I can use it?
    Any hint is really appreciated!
    Thanks a lot.
    Arno

    Hi,
    My level of understanding of your problem is: from your application you are sending the username and password to the portal and getting the required data back via beans.
    If so, check this; it may be useful for you:
    Integrating External Application Services without Web service
    bvr
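    For what it's worth: since init(IServiceContext) has a fixed signature and runs once at service startup, before any user is known, one option is to keep init() for one-time setup and do the R/3 access inside getCustomerData(), where the userid is available. A sketch under that assumption, with the R/3 access left as a hypothetical helper and the other IService lifecycle methods omitted:

         public class R3CustDataService implements IR3CustDataService {
             private IServiceContext mm_serviceContext;

             public void init(IServiceContext serviceContext) {
                 // one-time setup only; no user is known at this point
                 mm_serviceContext = serviceContext;
             }

             public CustomerDataBean getCustomerData(String userid) {
                 CustomerDataBean cdb = new CustomerDataBean();
                 fillFromR3(cdb, userid); // hypothetical helper doing the R/3 lookup
                 return cdb;
             }

             private void fillFromR3(CustomerDataBean cdb, String userid) {
                 // connect to R/3 (e.g. via JCo) and populate the bean for this user
             }
         }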

  • Three questions regarding DB_KEEP_CACHE_SIZE and caching tables.

    Folks,
    In my Oracle 10g DB, which I inherited, the init.ora parameter DB_KEEP_CACHE_SIZE is configured to 4GB in size.
    Also, there are a bunch of tables that were created with CACHE turned on.
    By querying the dba_tables view with CACHE='Y', I can see the names of these tables.
    Over time, some of these tables have grown in size (number of rows), and some of them no longer need to be cached.
    So here is my first question:
    1) Is there a query I can run to find out what tables are currently in the KEEP buffer pool?
    2) Also, how can I find out whether my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased, as some of these tables have grown in size?
    Third question:
    3) I know for a fact that there are 2 tables that no longer need to be cached.
    So how do I make sure they do not occupy space in the KEEP buffer pool?
    I tried an alter table <table_name> nocache; statement.
    Now the CACHE column value for these tables in dba_tables is 'N', but if I query the dba_segments view, the BUFFER_POOL column for them still has the value 'KEEP'.
    After altering these tables to nocache, I did bounce my database.
    Again: how do I make sure these tables, which no longer need to be cached, do not occupy space in the KEEP buffer pool?
    Would very much appreciate your help.
    Regards
    Ashish

    Hello,
    > 1) Is there a query I can run to find out what tables are currently in the KEEP buffer pool?
    You may try this query:
    select owner, segment_name, segment_type, buffer_pool
    from dba_segments
    where buffer_pool = 'KEEP'
    order by owner, segment_name;
    > 2) Also how can I find out whether my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased, as some of these tables have grown in size?
    You may try to get the total size of the segments using the KEEP buffer:
    select sum(bytes)/(1024*1024) "Mo"
    from dba_segments
    where buffer_pool = 'KEEP';
    To be sure that the blocks of these segments (table/index) won't often be aged out of the KEEP buffer, the total size given by the above query should be less than the size of your KEEP buffer.
    > I know for a fact that there are 2 tables that do not need to be cached any longer. So how do I make sure they do not occupy space in the KEEP buffer pool?
    You just have to execute the following statement:
    ALTER TABLE <owner>.<table> STORAGE (BUFFER_POOL DEFAULT);
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Servlet side Cache implementation - need cache (updateable, portable) in Servlet

    Hi guys,
    Any ideas on implementing an updateable cache in a servlet? The problem is that it must be updateable (I must be able to tell it to update itself whenever a well-defined event occurs), it must work in a clustered environment, and I would prefer it to be J2EE portable.
    I know I can poll from the servlet, but this isn't the most elegant approach.
    My servlet needs to do some really fast security/auditing, but I don't want it to always do something like an EJB lookup. I can quite easily cache what I need in the HashMap; the problem is updating it and also having it work in a clustered environment.
    Any other ideas appreciated,
    Jon

    ejp wrote:
    > Of course it is. That's how any Map behaves.
    "When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently than other Map implementations." I meant that the mapping doesn't prevent the key from getting discarded, because it is a WeakReference.
    > No: it's out of your control, but it's within the garbage collector's control. That is the purpose of the class.
    You aren't talking about the situation described in this thread. So if it is out of your control, your suggestion about WeakHashMap isn't going to work. Out of control in the sense that it doesn't treat a key which was recently accessed any differently from one which wasn't accessed for a long time.
    > Because by choosing a key that will get garbage-collected at the time of interest to you, you ensure that the WeakHashMap will drop the corresponding value at the same time as the key is GC'd.
    Not related to the problem above. In summary, WeakHashMap is no good for the above scenario.
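    If the objection is that GC-driven eviction ignores access recency, an access-ordered LinkedHashMap gives explicit, recency-based eviction instead; a minimal sketch (the class name and capacity handling are illustrative):

         import java.util.LinkedHashMap;
         import java.util.Map;

         // An LRU cache: the map is access-ordered, and the eldest (least
         // recently used) entry is dropped once capacity is exceeded.
         public class LruCache<K, V> extends LinkedHashMap<K, V> {
             private final int capacity;

             public LruCache(int capacity) {
                 super(16, 0.75f, true); // true = access order, not insertion order
                 this.capacity = capacity;
             }

             protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                 return size() > capacity;
             }
         }

    Note that this still does nothing for the cluster-wide update problem; each node's cache would have to be invalidated when the well-defined event fires.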

  • How to implement content caching for jsp page ?

    Hello everyone,
    I am reading an article, <Servlets and JSP Best Practices>, at
    http://developer.java.sun.com/developer/technicalArticles/javaserverpages/servlets_jsp/#author, and one section says:
    "Cache content: You should never dynamically regenerate content that doesn't
    change between requests. You can cache content on the client-side, proxy-side, or server-side."
    Now I am working on a project. For every user, some of the content the servlet generates will stay the same for at least a week. I am thinking that if I implement caching for these JSP pages, it would increase performance a lot.
    But I have no idea how to implement it on either the client side or the server side. Can someone give me a hint?
    Thanks,
    Rachel

    > You mean actually you are caching the response stream, and the key to distinguish between different response streams is made of the user's different request parameters. And the filter's function is to intercept the request to see if this request parameter combination already exists in the HashMap, then either use the cached response or forward to the servlet... really interesting... Do I get it right?
    Yes, that's it in a nutshell.
    > Then how do you build those response streams in advance? Did you do it manually, or do you have some mechanism to build them automatically?
    The data gets cached the first time somebody visits the page.
    Find some examples on Filters, and take a look at the HttpServletResponseWrapper class. You need to cache response headers as well as the body. Another pitfall that you might run into is handling an If-Modified-Since header on the request; don't cache the results of those requests. (See the sketch below.)
    -Jonathan
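    A rough sketch of that filter idea, assuming character (not binary) responses and ignoring header caching, If-Modified-Since handling, and cache invalidation for brevity (the class names are illustrative):

         import java.io.*;
         import java.util.Map;
         import java.util.concurrent.ConcurrentHashMap;
         import javax.servlet.*;
         import javax.servlet.http.*;

         public class CacheFilter implements Filter {
             // cache key = request URI + query string; value = rendered page body
             private final Map<String, String> cache =
                 new ConcurrentHashMap<String, String>();

             public void init(FilterConfig config) {}
             public void destroy() {}

             public void doFilter(ServletRequest req, ServletResponse res,
                     FilterChain chain) throws IOException, ServletException {
                 HttpServletRequest request = (HttpServletRequest) req;
                 String key = request.getRequestURI() + "?" + request.getQueryString();

                 String body = cache.get(key);
                 if (body == null) {
                     // capture the servlet/JSP output instead of sending it directly
                     CharResponseWrapper wrapper =
                         new CharResponseWrapper((HttpServletResponse) res);
                     chain.doFilter(req, wrapper);
                     body = wrapper.toString();
                     cache.put(key, body);
                 }
                 res.getWriter().write(body);
             }
         }

         // Wrapper that buffers everything written to the response writer
         class CharResponseWrapper extends HttpServletResponseWrapper {
             private final CharArrayWriter buffer = new CharArrayWriter();

             public CharResponseWrapper(HttpServletResponse response) {
                 super(response);
             }

             public PrintWriter getWriter() {
                 return new PrintWriter(buffer);
             }

             public String toString() {
                 return buffer.toString();
             }
         }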
    Thanks again!
    Rachel

  • Questions relate to table "cache"??

    We plan to "cache" some tables in the SGA or the KEEP cache (DB_KEEP_CACHE_SIZE). Some questions need clarifying:
    1. What is the difference between:
    alter table user.table_name cache;
    alter table user.table_name storage (buffer_pool keep);
    2. If I use either way to cache a table, how do I check that the table is already "cached"?
    Thanks.

    ef8454 wrote:
    > We plan to "cache" some tables in the SGA or the KEEP cache. 1. What is the difference between: alter table user.table_name cache; and alter table user.table_name storage (buffer_pool keep); 2. If I use either way to cache a table, how do I check that the table is already "cached"?
    In Oracle, starting with (I suppose) Release 8, there are three types of data buffer pool; or say they have divided the data buffer pool into three parts:
    1) Default pool -- This is the normal data pool.
    2) Keep pool -- This is a data pool where you want your table/object to remain in memory for a longer time.
    3) Recycle pool -- This is a data pool where you want your table/object to remain only as long as it is needed, i.e., a very short duration.
    So when you use the second command, with the storage option, you are telling Oracle to keep that data in the Keep pool.
    Now, in all three pools, data gets aged out based on an algorithm known as LRU (least recently used), i.e., data stays at the front of the queue if it is accessed.
    When you say CACHE using your first statement, you are telling Oracle to keep the data at the front of the queue even when a full table scan is done. It can be in any buffer pool (Default/Keep/Recycle).
    Regards
    Anurag

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How does the "On demand" cache group policy work, exactly? I know that an online cache group makes direct requests to the backend from the device without storing any data in the CDB, that DCN is based on updates pushed from the backend, and that scheduled is based on a time period, but I don't understand how "on demand" exactly works, and why it has a time period too.
    2 - Is it possible to query the cache database table to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in the SUP Apps project not too long ago, and Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.

  • Questions about entity bean caching/pooling

    We have a large J2EE app running on WebLogic 6.1 SP4. We are using entity beans
    with CMP/CMR. We have about 200 entity beans, accessed quite heavily. We are
    struggling with the right settings for max-beans-in-cache and idle-timeout-seconds.
    The current max heap setting is 2GB. With the current setting (a default of
    max-beans-in-cache of 1000, with a few exceptions to take care of CacheFullExceptions)
    we run into extended GC happening after about 4 hours. The memory freed gradually
    reduces with time and lurks around the 30% mark after about 4 hours of running at
    the expected load. In relation to this we had the following questions:
    1. What does caching mean?
    a. If a bean with primary key 100 exists in the cache, what is expected when the following is done:
    i. findByPrimaryKey(100)
    ii. findBySomeOtherKey(xyz), which results in loading up the bean with primary key 100
    iii. CMR access to the bean with primary key 100
    Is the instance in the cache reused at all between transactions?
    If there is minimal reuse of the beans in cache, is it fair to assume that caching
    can only help loading of beans within a transaction? If this is the case, is there
    any driver to increase max-beans-in-cache other than to avoid CacheFullExceptions?
    In other words, is it wrong to say that max-beans-in-cache should be set to the
    minimum value that avoids CacheFullExceptions?
    2. Again, what is the driver for setting idle-timeout-seconds to a value? (We currently
    have it at 30 secs.) Part of the answer to this question would again go back to
    how much reuse is done from the cache. Is it right to say that it should be
    set to a very low value? (Why is the default 10 min?)
    3. Can you provide us any documentation that explains how all this works
    in more detail, particularly in relation to entity beans? We have already read
    the documentation from WebLogic as it is. Anything that gives more explicit detail?
    Any tools that can be of use?
    4. What is the right parameter (from among the things that the WebLogic console
    throws up) to look at for optimizing?
    Thanks in advance for your help
    Cheers
    Arun

    The behaviour changes according to these descriptor settings: concurrency-strategy,
    db-is-shared and include-updates.
    1. If concurrency-strategy is Database, then the database is used to provide locking
    and db-is-shared is ignored. A bean's ejbLoad() is called once per transaction,
    and the 'cache' is really a per-transaction pool. A findByPrimaryKey() always
    initially hits the db, but can use the cache if called again in the same txn (although
    you'd simply just pass a reference around). A findByAnythingElse() always hits
    the db.
    2. If concurrency-strategy is ReadOnly then the cache is longer-term: ejbLoad()
    is only called when the bean is activated; thereafter, the number of times ejbLoad()
    is called is influenced by the setting of read-timeout-seconds. A findByPrimaryKey()
    can use the cache. A findByAnythingElse() can't.
    3. If concurrency-strategy is Exclusive then db-is-shared influences how many
    times ejbLoad() is called. If db-is-shared is false (i.e. the container has exclusive
    use of the underlying table), then the ejbLoad() behaviour is more like ReadOnly
    (2. above), and the cache is longer-term. If db-is-shared is true, then the ejbLoad()
    behaviour is like Database (1. above).
    Exclusive concurrency reduces ejbLoads(), increases the effectiveness of the cache,
    but can reduce app concurrency as only one instance of an entity bean can exist
    inside the server, and access to it is serialised at the txn level.
    You can't use db-is-shared = false in a cluster. So Exclusive mode is less useful.
    That's when you think long and hard about Tangosol Coherence (http://www.tangosol.com)
    4. If include-updates is true, then the cache is flushed to the db before every
    non-findByPrimaryKey() finder call so the finder (which always hits the db) will
    get the latest bean values. This overrides a true setting of delay-updates-until-end-of-tx.
    The max-beans-in-cache setting refers to the maximum number of active beans (really
    beans that have been returned by a finder in a txn that hasn't committed). This
    wasn't checked in SP2 (we have an app that accidentally loads 30,000 beans in a
    txn with a max-beans-in-cache of 3,000. Slow, but it works, showing 3,000 active
    beans, and 27,000 passivated ones...).
    This setting is checked in SP5, but I don't know about SP4. So you do need to
    size appropriately.
    In summary:
    - The cache isn't nearly as useful as you'd like. You get far more db activity
    with entity beans than you'd like (too many ejbLoads()). This is disappointing.
    - findByPrimaryKey() finders can use the cache. How long the cache is kept around
    depends on concurrency-strategy.
    - findByAnythingElse() finders always hit the db.
    WebLogic 8 tidies all this up a bit with a cache-between-transactions setting
    and optimistic locking. But I believe findByAnythingElse() finders still have
    to hit the db - ejbql is never run against the cache, but is always converted
    to SQL and run against the db.
    Hope this is of some help - feel free to email me at simon-dot-spruzen-at-rbos-dot-com
    (you get the idea!)
    simon.
