Caching data

In the application, a row contains four text fields. After a row is added, data can be entered into its text fields. Any number of rows can be added to the page by clicking the "ADD" button; clicking the "SAVE" button saves them to the database.
The problem I have is this: I added some rows, have not saved them to the database yet, and need to delete one of them.
How should this be approached? Do I need to put the entire row(s) in a cache? If so, how do I put them in a cache and delete them? Is there a class I can use for this?
Any help will be appreciated.

I would do it by having a custom class hold the values of the text fields, adding instances to a collection, and saving the collection in the session. For every addition, I would get the collection from the session and add a new row (object) to it. On the final submit, get the collection, persist it to the database, and clean up the session.
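A minimal sketch of that approach in plain Java (class and field names are illustrative; in a servlet container the list would live in the HttpSession, e.g. via session.setAttribute("rows", rows)):

```java
import java.util.ArrayList;
import java.util.List;

// Simple value object holding the four text fields of one row.
// Field names are placeholders; rename them to match your form.
class Row {
    final String field1, field2, field3, field4;
    Row(String f1, String f2, String f3, String f4) {
        this.field1 = f1; this.field2 = f2; this.field3 = f3; this.field4 = f4;
    }
}

// Buffers unsaved rows between requests. A plain list stands in here
// for the session attribute a real web application would use.
class RowBuffer {
    private final List<Row> rows = new ArrayList<>();

    void add(Row r)    { rows.add(r); }     // "ADD" button
    void remove(int i) { rows.remove(i); }  // delete a not-yet-saved row
    int  size()        { return rows.size(); }

    void save() {
        // On "SAVE": iterate over rows and persist each one (JDBC/JPA),
        // then clear the buffer so the session starts fresh.
        rows.clear();
    }
}
```

Deleting an unsaved row is then just removing the object from the collection; nothing ever touches the database until save() runs.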

Similar Messages

  • Some music files do not show up in Google Play Music app library

    Some music files do not show up in the Google Play Music app library. I cleared cache/data and restarted the phone. The music is stored on the SD card, and most of the music in the library is in the same folder on the SD card. I can play a song from the file manager, but it still does not appear in the music library in Play Music.

    Cyndi6858, help is here! We'd be happy to help figure this out. Just to be sure though, the Droid Maxx should not have an SD card. Is this the Droid Razr Maxx? How did you add the music to the device? Are you able to see the files and folders located on the SD card or device when plugged in?
    Thanks,
    MichelleH_VZW
    Follow us on Twitter @VZWSupport

  • Report Using A Stored Procedure Is Caching Data

    Post Author: springerc69
    CA Forum: Data Connectivity and SQL
    I converted a report from a view that worked fine to a stored procedure, to try to improve the performance of the report, but when I publish the report it seems to cache the data. When you change the parameters used to call the report, or simply run the report again with the original parameters, the report doesn't run the sproc and just shows the cached data from the original request.
    If I right-click on the report and select refresh (web-based Crystal report), it prompts for the parameters. If I just close the prompt window and the report window and click on the link for the report again, it returns the correct results based on the new parameters, or a refresh based on the original parameters. I've checked the cache time setting and set it to 0, and if you close the Internet Explorer window that originally called the report, open IE back up, and request the report, it returns the appropriate data. I have also verified that the report is not set up to save data with the report. This is on Crystal XI Server.

    Post Author: synapsevampire
    CA Forum: Data Connectivity and SQL
    Which viewer are you using?
    It might be that your IE settings are caching the report pages, because you're using an HTML viewer.
    Try the Active-X viewer.
    I've forgotten which icon it is that changes the viewer...it's under the preferences options, I think it's the one that looks like a hunk of cheese on the right upper side.
    -k

  • Prevent multiple users from updating coherence cache data at the same time

    Hi,
    I have a web application with a huge amount of data; instead of storing the data in the HTTP session, we store it in Coherence. Multiple groups of users can read or update the same data in Coherence, and there are hundreds of groups with several thousand users in each group. How do I prevent multiple users from updating the cached data at the same time?
    Here is the scenario: a user logs in and checks whether the data is in Coherence; if it is, the user gets it from Coherence and it is displayed on the UI, and if not, it is fetched from the backend (i.e. mainframe systems) and stored in Coherence before being displayed on the screen. Some other user can perform the same function at the same time, not find the data in Coherence, fetch it from the backend, and start saving it to Coherence while the first user is still in the process of saving or updating. How do I prevent this in Coherence? We have to use the same key when storing in Coherence, because the same data is shared across users and we don't want to keep multiple copies of it. Is there something Coherence provides out of the box, or what is the best approach to handle this scenario?
    Thanks

    Hi,
    Actually, I believe that if we are speaking about multiple users, each with their own HttpSession, then when two users access the same session attribute in their own sessions, the cache keys actually used will not be the same.
    On the other hand, this is probably not what you really want; you would presumably like to share that data among sessions.
    You should probably consider using either read-through caching, with a CacheLoader implementation doing the expensive data retrieval (if the data to be cached can be obtained outside of an HTTP container), or side caching using Coherence locks or entry processors for concurrency control on the data-retrieval operations for the same key (take care of retries in this case).
    Best regards,
    Robert
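    To make the side-caching idea above concrete, here is a JDK-only sketch of the "check the cache, then load exactly once per key" behaviour. This deliberately does not use the Coherence API (there you would use key locking or an entry processor); ConcurrentHashMap.computeIfAbsent stands in for the per-key concurrency control, since it guarantees the loader function runs at most once per absent key even under concurrent calls:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Side cache with single-loader-per-key semantics: if two callers ask
// for the same missing key at the same time, only one performs the
// expensive backend fetch; the other blocks and then sees the cached value.
class SideCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    final AtomicInteger backendCalls = new AtomicInteger(); // for demonstration only

    V getOrLoad(K key, Function<K, V> backend) {
        return cache.computeIfAbsent(key, k -> {
            backendCalls.incrementAndGet(); // expensive mainframe call would go here
            return backend.apply(k);
        });
    }
}
```

With Coherence itself the same shape is achieved with cache.lock(key) / re-check / load / unlock, or more idiomatically with an entry processor executed against the key.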

  • Attempt to fetch cache data from Integration Directory failed

    HI,
    while checking cache connectivity testing, the status is:
         green:  Integration Repository
         green:  Integration Directory
         green:  Integration Server - JAVA
         red:    Adapter Engine af.axd.aipid
         yellow: Integration Server - ABAP
    Jun 30, 2007 1:16:08 PM - Cache notification from Integration Directory received successfully
    Attempt to fetch cache data from Integration Directory failed; cache could not be updated
    [Fetch Data]: Unable to find an associated SLD element (source element: SAP_XIIntegrationServer, [CreationClassName, SAP_XIIntegrationServer, string, Name, is.00.aipid, string], target element type: SAP_BusinessSystem)
    [Data Evaluation]: GlobalError
    what to do?
    and there is nothing under integration server and integration engine, but there is a green status under Non-Central Adapter Engines > from this I am doing send message testing from XI to BI,
    send message to: http://aibid:8000/sap/xi/engine?type=entry
    payload:
    <?xml version="1.0" encoding="utf-8"?>
    <ns1:MI_VCNdatatoBI
    xmlns:ns1="http://bi.sap.com"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <DATA>
    <item>
    </BIC/ZG_CWW010>1000<//BIC/ZG_CWW010>
    </BIC/ZVKY_CHK>1<//BIC/ZVKY_CHK>
    </item>
    </DATA>
    </ns1:MI_VCNdatatoBI>
    I can send a message from there (component monitoring > Non-Central Adapter Engines), but I am unable to see it in message monitoring or on the BI side.
    dushyant.

    thanks,
    but I have adapter type XI
    and I am following the steps of this link; according to it there is no need to create a file adapter type, and it is almost done, but while sending a message through the configuration monitor in RWB, the message goes out but does not arrive in message monitoring or on the BI side
    see 4.5 > 3 and 4 topic and 4.6 > 3,4,5
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f027dde5-e16e-2910-97a4-f231046429f2
    now what to do?
    dushyant,

  • In which table is the Live cache data stored?

    Hi experts,
       I am new APO .Can anyone let me know in which database table will the Live cache data be stored?
    Thanks in advance
    regards
    Ashwin.

    Hello Ashwin,
    the idea of the liveCache is to have data permanently and quickly available without having to read/write from/to the database. Therefore, the liveCache is <b>NOT</b> a physical database; it is a program in C++ that <i>simulates</i> a database and holds the data in memory.
    However, it does need so-called liveCache anchors, which are saved on the database and are similar to pointers.
    You can extract data from the liveCache using BAdIs or by creating a datasource on a planning area (for DP and SNP); manipulation can also be done only via BAdIs or sophisticated programming (which basically uses RFCs).
    I hope this answers your question.
    Regards,
    Klaus

  • Exposing cached data as webservice

    Hi all,
    I am planning to put an XML file's data in a cache, i.e. by turning the XML data into a string and putting that in the cache. Now I want to expose this cached data as a web service. How can I do that? I am a newbie, pardon my ignorance.
    Thanks,
    PS

    Hi,
    You can either place your config file in your application classpath.
    http://wiki.tangosol.com/display/COH32UG/Cache+Configuration+Elements
    Or you can set from the command line or maybe from a startup script like catalina.bat using
    the following -D argument.
    -Dtangosol.coherence.cacheconfig=pathtofile
    The following has more information as well.
    http://wiki.tangosol.com/display/COH32UG/Command+Line+Setting+Override+Feature
    Thanks,
    -Dave

  • Query Dimension 1-Cache Data

    Hi,
    I am running an MDX query, and when I checked in Profiler it showed a long list of Query Dimension (Event Class) 1-Cache Data events. What does this mean?
    I think it's not hitting the storage engine but pulling from cache instead, but why so much caching? What does this event class mean?
    Please help!

    Hi Pinu123,
    Create Cache for Analysis Services (AS) was introduced in SP2 of SQL Server 2005. It can be used to make one or more queries run faster by populating the OLAP storage engine cache first. The query results are cached in memory for re-use.
    In your scenario, you said that the results are not hitting the storage engine but are being pulled from cache. In this case, it seems that these results had been queried by other users and cached in memory. For more information about cached data, please refer to the links below.
    How to warm up the Analysis Services data cache using Create Cache statement
    Examining MDX Query performance using Block Computation
    Regards,
    Charlie Liao
    TechNet Community Support

  • Adobe Reader Is Caching Data From OLEDB Connection

    I am trying to pre-populate a form from an oledb connection to an access database. I have an html page where a user can search an id. This id then gets written to a table where the sql query defined in the PDF form can grab it, join it with a table where the user info is stored, then display it.
    My problem is that Adobe Reader seems to be caching data from the first SQL select query that is executed. When I change the id I am loading, I still get data from the first SQL query in Reader. If I open the PDF via Acrobat, the data loads up properly, and it doesn't seem to be cached. I have looked at the following forum for suggestions: http://www.adobeforums.com/webx/.3bc3549c, but their suggestions haven't worked.
    I have tried turning off caching anywhere I can find it (i.e. in LiveCycle, Adobe Reader), but nothing is working. Does anybody have any suggestions?

    It sounds like you need use the adobe schema in MII so the output of your transaction matches it.
    Regards,
    Jamie

  • Cache data in Web Services

    Hi!
    I need to create a Web Service that can cache data between multiple Web Service calls. If I'm not mistaken, stateful sessions keep data but still separate clients from each other. I need all clients to be able to access the same synchronized cached data. The data should be kept in memory for as long as the server is running. What kind of Java technology should I use for that? Is it even possible to create? Using a database is not an option in this case.
    Thanks...

    Hello. I would strongly recommend looking at Apache's Java Caching System (JCS), which is tailored for web applications. Its caching model is based on JSR 107 (Java Caching), and it is probably the best solution out there. We have a similar requirement in our web service application that JCS helped accomplish.
    JCS allows both in-memory caching and caching to disk. The latter requires the data to implement the java.io.Serializable or Externalizable interface.
    http://jakarta.apache.org/jcs/
    Hope this helps!
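    For the bare requirement in the question, a shared synchronized in-memory cache visible to every web service call, even a JVM-wide singleton over a concurrent map will do. A minimal sketch (JCS, as recommended above, adds regions, eviction policies, and disk overflow on top of this basic idea):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One application-scoped cache instance shared by all web service calls:
// thread-safe, kept in memory for the life of the server process.
final class SharedCache {
    private static final SharedCache INSTANCE = new SharedCache();
    private final Map<String, Object> data = new ConcurrentHashMap<>();

    private SharedCache() {}  // prevent outside instantiation
    static SharedCache getInstance() { return INSTANCE; }

    void put(String key, Object value) { data.put(key, value); }
    Object get(String key)             { return data.get(key); }
}
```

    Every endpoint calls SharedCache.getInstance() and sees the same data, regardless of which client made the request; no database is involved.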

  • Coherence Event Listeners on Cache Data Expiry

    Hi,
    I'm working on a Fusion integration project with Coherence as the caching solution. We have a scenario where we need to update cache data (with an updated key value) when it expires in Coherence.
    I came across a topic on listeners for Coherence events, and would like to know whether it is possible to use a MapListener to handle cache expiry.
    I have created a class that implements MapListener, but it doesn't get triggered at cache expiry.
    I would appreciate your thoughts on this. Also, please suggest the best practice/approach for using Coherence cache listeners.
    Thanks
    Dileep Kumar.

    Yes - when you click on the array/group widget in the form editor, there's an 'Event Listener' section with a property called "Method Invoked".
    I think you've figured this out on your own, but if you have trouble seeing a method in the dropdown, create a method that has one argument of type Fuego.Util.GroupEvent. If you look at this class, you can see the various event types that can be received (ADD, INSERT_DOWN, INSERT_UP, REMOVE, and REMOVE_LAST).
    Dan

  • Approach when the used Live cache data area crosses the threshold

    Hi,
    Could any of you please let me know the detailed approach when the used Live cache data area crosses the threshold in APO system?
    The approach I have as of now is :
    1) When it is identified that data cache usage is nearly 100%, check the hit rate for OMS data in the data cache in LC10, because generally the hit rate for OMS data in the data cache should be at least 99.8% and data cache usage should be well below 100%.
    2) To monitor unsuccessful accesses to the data cache, choose refresh and compare the values now and before; unsuccessful accesses result in physical disk I/O and should generally be avoided.
    3) The number of OMS data pages (OMS Data) should be much higher than the number of OMS history pages (History/Undo); a ratio of 4:1 is desirable. If OMS history is nearly the same size as OMS data, use Problem Analysis > Performance > OMS versions to find out whether named consistent views (Versions) have been open for a long time. The maximum age should be 8 hours.
    4) If consumption of OMS heap and data cache is large, one reason may be a long-running transaction simulation that accumulates heap memory and prevents the garbage collector from releasing old object images.
    5) To display existing transactional simulations in LC10, use Problem Analysis > Performance > OMS versions, and use SM04 to find the user of the corresponding transaction; it may be necessary to cancel the session, after contacting the user, if the version has been open for a long time.
    Please help me by providing additional information on the issue.
    Thanks,
    Varada Reddy.

    Hi Mayank, sorry, one basic question - are you using some selection criteria during extraction? If yes, then try extraction without the selection criteria.
    If you maintain selection based on, let's say, material, you need to use the right number of zeros as prefix (based on how you have defined the characteristic for material) otherwise no records would be selected.
    Is this relevant in your case?
    One more option is to try to repair the datasource. In the planning area, go to extraction tools, select the datasource, and then choose the option to repair the datasource.
    If you need more info, pls let me know.
    - Pawan

  • Caching data with Entity Bean

    Hello,
    I am performing some tests concerning the benefit of caching data with Entity Bean.
    Here is the case :
    I have an Entity Bean with a business method getName() to retrieve a name field in the EJB.
    I understand that in order to cache data, I have to set the NOT_SUPPORTED transaction attribute for this method. That way, when this method is called, ejbLoad() is not called and the data is retrieved from the ready EJB instance (and not from the database).
    Is this true, and is it the right way to use the cache mechanism?
    Now, if we consider that this instance is the only one in the ready stage and it is never pooled (it seems so!), what about a modification of the database from another tier (or from another EB instance)? The Entity Bean is not able to see this modification, since it does not call the ejbLoad method.
    Is there a way to force an Entity Bean to be periodically reloaded, in order to recover data from the data store when activated?
    Thanks in advance,
    Thierry

    No, this is the wrong way of doing what you want. Most application servers provide various configuration settings for this, e.g. the caching mechanism, the interval at which to call ejbLoad and ejbStore, and read-only beans. You have to check the documentation for this.
    --Ashwani

  • Accessing the planning cache data (In IP)

    Hello...
    I need help retrieving the live cache data.
    While we are using the transactional cube, every time we need to switch into planning mode, and while loading we need to switch to load mode.
    Is there any other way to access the IP cache data?
    Appreciate any suggestions....
    Regards,
    Pari.

    Hi Parimala,
    Custom Planning Functions can be created using the transaction RSPLF1. Give a technical name and click on create button. It will ask for a class name to be attached with the function type. The class should be created in SE24 transaction. The implementation of the class will be in object oriented ABAP, where you will write the implementation logic in class methods.
    The custom planning function so created is available for usage when you create a planning function attached with an aggregation level. You can see it in the drop-down list of all the function types.
    You can refer to the standard function type for delete (0RSPL_DELETE), which will give you an idea of how the class is implemented.
    The link provided by Marc above is helpful :
    http://help.sap.com/saphelp_nw70/helpdata/en/43/332530c1b64866e10000000a1553f6/frameset.htm
    Also, go through this how to guide:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c0ac03a4-e46e-2910-f69d-ec5fbb050cbf
    Hope this helps you.
    Regards,
    Srinivas Kamireddy.

  • Does clearing cache & data in Google Services Framework really force OTA updates?

    While waiting for Big Red to push the OTA 4.3 update to their Samsung GS3 customers, I read somewhere that clearing the cache + data + selecting Force Stop in the Google Services Framework app [settings>applications manager>all] before checking the device software update status [settings>about phone>software update>check new] would force available OTA updates to the device; however, after repeated attempts, I've been unsuccessful achieving that objective. In fact, after doing all that AND removing both the battery + sim card for 30-60 seconds, the date/time stamp in the update status continues to read the most recent status check.
    Though I've seen several claims of success doing this on different devices, has anyone else in the VZW Community tried this and gotten a successful 'force update' on their GS3?

    No, that will not force an update, especially one that does not exist.

  • Stale Near Cache data when all Cache Server fails and are restarted

    Hi,
    We are currently using Coherence 3.6.1 and are seeing an issue. The scenario is as follows:
    - caching servers and proxy server are all restarted.
    - one of the caches, say "TestCache", is bulk-loaded with some data (e.g. key/value pairs key1/value1, key2/value2, key3/value3) on the caching servers.
    - near cache client connects to the server cluster via the Extend proxy server.
    - near cache client is primed with all data from the cache server "TestCache". Hence, the near cache client now has all key/values locally (i.e. key1/value1, key2/value2, key3/value3).
    - all caching servers in the cluster go down, but the extend proxy server is ok.
    - all cache servers in the cluster come back up.
    - we reload all cache data into "TestCache" on the cache server, but this time it only has the key/value pairs key1/value1 and key2/value2.
    - So the caching server's state for "TestCache" is that it should only have key1/value1 and key2/value2, but the near cache client still thinks it has key1/value1, key2/value2, key3/value3. In effect, it still knows about key3/value3, which no longer exists.
    Is there any way for the near cache client to invalidate key3/value3 automatically? This scenario happens because the extend proxy server is not actually down, but all the caching servers are; the near cache client for some reason doesn't know about this and does not invalidate its near cache data.
    Can anyone help?
    Thanks
    Regards
    Wilson.

    Hi,
    I do have the invalidation strategy set to "ALL". Remember, this cache client is connected via the Extend proxy server, whose connectivity is still ok; it is just the caching servers holding the storage data in the cluster that are all down.
    Please let me know what else we can try.
    Thanks
    Regards
    Wilson.
