Should I load product info into memory or keep in db?

Hi,
I am going to develop a B2C shopping website with the JBoss/Tomcat bundle. Now I must decide whether to load product info into memory or just keep it in a database table. If I load it into memory, browsing and searching performance for customers will be good, but I am worried about memory capacity and about issues like needing to refresh the in-memory objects whenever I update a product price or description. The shop has about 500 products, and we will deploy the application on a Dell server with 1 GB of memory. Can anyone who has developed such an e-commerce website give me some suggestions?
Thanks.
Henry

I usually use XSLT-enabled servlets with Apache Xalan (http://xml.apache.org).
It's as easy as caching the result of a transformation for a particular URL request in a generic servlet, keyed by the URL string, in a HashMap.
The usual architecture is EJB stateless session beans sending XML to a servlet, which styles it and caches the result according to configuration.
Work out cache expiry etc. later, once you get things going.
To answer your question directly: it is on the servlet side that I would do the caching.
You may be able to do it using servlet chaining or filters; I have not tried this myself, but I have seen tutorials about this sort of thing.
The main idea of page caching is to store the whole HTML page in memory in the servlet (or in a Java data structure held by the servlet); whenever a page request matches a cached entry, send the cached HTML straight to the browser instead of expensively generating a new dynamic page.
Consider implementing gzip encoding as well and storing zipped pages in memory; it's much faster. Avoid implementing features that a proxy server would give you out of the box.
I usually make it configurable which URLs are cached and which are not.
If you are using JSP, I am sure someone has implemented caching of JSP content; a product called Tiles, perhaps?
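The HashMap-keyed page cache described above can be sketched in a few lines of Java. This is a minimal sketch, not any real framework's API; the class and method names are illustrative, and a real servlet would wrap the expensive XSLT transformation in the render callback:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal page cache: maps a request URL to its rendered HTML.
// A servlet would consult this before running the expensive
// transformation, and store the result for later requests.
public class PageCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Return the cached page, or render it via the supplier and cache it.
    public String get(String url, Supplier<String> render) {
        return cache.computeIfAbsent(url, k -> render.get());
    }

    // Drop one entry, e.g. after a product price or description changes.
    public void invalidate(String url) {
        cache.remove(url);
    }

    public int size() {
        return cache.size();
    }
}
```

On a product update you would call invalidate() for the affected URLs, which addresses the original poster's worry about stale in-memory data.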
Josh

Similar Messages

  • Remote Panel: how to load a vi into memory without showing it on the screen

    Hi,
    I'm using the remote panel facility of LabVIEW 6.1 to allow any user
    to control my instruments from within a web browser. However, for
    this to work, LabVIEW has to be running and the VI open on the
    server, which I want to avoid.
    Is there a possibility to enable the LabVIEW web server and load a VI
    into memory in the background, without showing it on the screen?
    Thanks

    > Is there a possibility to enable the LabVIEW web server and load a VI
    > into memory in the background, without showing it on the screen?
    >
    Not at the moment. The VI truly is running on the server and simply
    displaying on the remote machine(s). At the moment it also needs to
    have a display on the server. You probably could write a small VI to
    periodically look for open LV windows that have connections to them and
    minimize them. Then run that VI on the server.
    Greg McKaskle

  • Can I load a vi into memory programmatically

    Here's the situation (I'm programming in LV7.1)...
    I've got a program that runs, which uses the DSC.  I want to programmatically shut down and quit the Tag Engine when the program ends.  My problem is that when I use "Engine Shutdown.vi", all VIs that use the Tag Engine have to shut down before the Tag Engine will completely shut down.  If I call this and wait (monitoring engine status for shutdown), then I get a deadlock error.  If I call this and end the program, then I get a pop-up asking if I want to "Stop and close Tag Engine" (and for some reason an error afterwards as well; it doesn't describe the error, just says that a log is being created).  My goal is to have no pop-ups or dialog boxes appear that the user has to interact with when shutting down.
    So the solution that I believe would be appropriate is to close the main program and have a separate VI (independent of the main program structure) call "Engine Shutdown.vi", then quit itself, closing LabVIEW.  So I've been messing around with this...
    "Open VI Reference" with the VI path connected as an input (just a simple VI I made to kill the Tag Engine, wait until it falls, then kill LabVIEW).  This is connected to an "Invoke Node" with the 'Run VI' method selected, the 'Wait until done' property set to False, and 'Auto Dispose Ref' set to True.
    This all works fine on my development machine, but I imagine I'm going to have issues when compiling this and installing it onto its target machine.  How would I compile this?  If I add this VI to the build as a Dynamic VI, will it automatically be loaded into memory when the program runs (on the target computer)?  If not, is there any way I can programmatically load this VI into memory in such a way that it is not a subVI of my program (else it will have the same issues shutting down the Tag Engine)?
    As always I appreciate everybody's help,
    Sean

    Mike, thanks for your response.  Let me run this by you...
    So if I put Main.vi and Kill Engine.vi in the same folder on my development machine, have the attached snippet of code execute when Main.vi is ready to shut down (the very last thing to run, after all loops conclude), build the app (with Kill Engine.vi as a Dynamic VI), and install the app on the target machine, does it sound to you like this will work?  (Kill Engine.vi is the simple VI I made to shut down and quit the Tag Engine, wait for this to conclude, then exit LabVIEW.)
    Thanks,
    Sean
    Attachments:
    snippet.JPG ‏8 KB

  • Fastest way to load an image into memory

    Hi, I've posted before but I was kind of vague and didn't get much of a response, so I'm going into detail this time. I'm making a Java program that is going to control two robots in a soccer competition. I've decided to go all out and use a webcam instead of the usual array of sensors, so the first step is to load an image into memory (I'll work on the webcam once I've got something substantial). Since these robots have to be rather small (21 cm in diameter), I can only use some pretty weak processors. The robots will both be running Gentoo Linux on a 600 MHz processor, so it is absolutely vital that I have a fast method of loading and analyzing images. I need to be able to load the image quickly and, more importantly, analyze it quickly. I've looked at PixelGrabber, which looks good, but I've read several things about Java's image handling being poor. Those articles are a few years old, though, so I'm wondering if there's anything in JAI that will do this for me. Thanks in advance.

    Well, I found out why I was having so much trouble
    installing JAI. I'm now back on Windows and I can't
    stand it, so hopefully the bug will be fixed soon.

    It's not so bad. I mean, that's why we love Java! Once your Linux problem is fixed you can just transfer your code there as is.

    Well, I like the looks of JAI.create(), but I'm not so
    sure how to use it. At this stage I'm trying to load an
    image from a file. I know that to do this with PixelGrabber
    you use getCodeBase(), but I don't know how to do it
    with JAI.create(). Any advice is appreciated. Thanks.

    Here are some example statements for handling a JAI image. There are other ways, I'm sure, but I've no idea which are faster, sorry. But this is quite fast on my machine.
    PlanarImage pi = JAI.create("fileload", imgFilename);
    WritableRaster wr = Raster.createWritableRaster(pi.getSampleModel(), null);
    int width = pi.getWidth(); // in case you need to loop through the image
    int height = pi.getHeight();
    wr = pi.copyData(); // copy data from the planar image to the writable one
    wr.getPixel(w,h,pixel); //  pixel is an int array with three elements.
    int red = pixel[0];     // to find out what's the red component
    int[] otherPixel = {0,0,0};
    wr.setPixel(w,h,otherPixel); // make pixel at location (w,h) black.

    And here's a link with sample code using JAI. I've tried several of the programs there and they work.
    https://jaistuff.dev.java.net/

  • I'm trying to load video files into CS6 and I keep getting an error message: 'Could not complete your request because the DynamicLink Media Server is not available.' No idea what to do next. Help please.

    I'm using CS6 Extended. My OS is Windows 7 Home Premium.
    I can no longer load video files into CS6 for editing; I keep getting this error message:
    'Could not complete your request because the DynamicLink Media Server is not available.'
    Don't know what this means. Until just now I was able to load videos without any problems. Now I'm getting this message. Any thoughts?

    Hi Mylenium,
    upfront:
    I hope I won't be marked as spam, since I am posting on a few relevant discussions on this topic.
    However, I really would like to ask the people who have experienced this problem whether they were able to solve it.
    Now the real deal:
    I posted a question in this discussion: "DynamicLink Media Server won't let me import video for video frames to layers anymore".
    The linked discussion is a lot younger, which is why I posted there first.
    I also put in information on the steps that I have tried and my computer specifications.
    I have been experiencing this problem for a while now and hope you and jones1351 may be able to help out.

  • Loading PC info into an iPod formatted on an Apple Mac

    My brother loaded his iTunes library (Apple Mac) onto my new iPod. Unfortunately I have an HP PC, and my iPod does not show up when I plug it into my computer. The prompt tells me to reformat the iPod and remove all the tunes from it. Is there any way to convert all those tunes to PC format and then restore the iPod in PC format?
    hp pavilion   Windows XP  

    An iPod formatted for a Mac won't run natively on a PC because Windows does not support the HFS Plus file system and therefore will not see the drive. You can reformat your iPod on Windows if you want to use it on both platforms, as Macs can read Windows formatting (I've seen posts on the forum where there have been problems, but these would be the exception). Alternatively, there are third-party programs that will allow you to use a Mac-formatted iPod on Windows; XPlay, for instance, will let you play your songs on a PC and even copy them to the computer: XPlay 2

  • Loading a large number of strings into memory quickly

    Hello,
    I'm working on an iPhone application where I need to load a large number of strings into memory. Currently I'm simply reading from a file where each string is stored in plain text on a single line. I read the file contents into a string using stringWithContentsOfFile, and then I create an NSSet object using [NSSet setWithArray:[string componentsSeparatedByString:@"\n"]];
    This works like a charm but takes around 8 seconds to load on the iPhone. I'm looking for ways to speed this up. I've already tried a few things which weren't any faster:
    1) I used [NSKeyedArchiver archiveRootObject:myList toFile:appFile]; to store the NSSet data structure. Then, instead of reading the plain text file, I read this file using [NSKeyedUnarchiver unarchiveObjectWithFile:appFile];. This was actually very slow and created a strings file about 2x the size of the original plain text.
    2) Instead of using an NSSet, I used an NSDictionary with writeToFile and dictionaryWithContentsOfFile. This was also no faster.
    3) Finally I tried using the NSDictionary to write to a binary file format using NSPropertyListSerialization. This was also not any faster.
    I've been thinking about using SQLite instead of the flat file read, but I haven't had a chance to prototype that out to see if it would be faster. It's important that I can do fast searches for specific strings, which is why I originally used a set.
    Does any one else have any ideas how to load this into memory faster? If all else fails, I'm simply going to load the strings into memory using a separate thread on application launch so I don't prevent the user from getting to the main menu for 8 seconds.
    Thanks,
    -Keith

    I'd need to know more about what you're doing, but from what you're describing I think you should try to change your algorithm.
    For example: Instead of distributing one flat file, split your list of strings into 256 files, based on the first two hex digits of their MD5 hashes*. (Two digits might not be enough--you might need three or four. You may also want to use folders, especially if you need more than two digits.) When testing if a string exists, first calculate its MD5 hash and extract the necessary number of digits, then load that file into memory and scan its list. (You can cache these lists in memory so that you only have to load each file once--just make sure that a didReceiveMemoryWarning message will empty those caches.)
    Properly configured, SQLite may be faster than the eight second load time you talk about, especially if you ensure it indexes the column you store the strings in. But it's probably overkill for this application.
    * A hash is a numeric code calculated from a string; on average, changing a single bit anywhere in the string should change half the bits in the hash, so even very similar strings should generate very different hashes. I suggest using MD5 instead of -[NSString hash] because the hash method is not guaranteed to return the same results on Mac OS and iPhone OS, or on different releases of either OS. You could also use a different algorithm, like a CRC; these are faster, but I'm not as familiar with them. This thread discusses calculating MD5 hashes on iPhone OS: http://discussions.apple.com/thread.jspa?messageID=7362074
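    The sharding rule described above (bucket by the first two hex digits of the MD5 hash) is language-agnostic; here is a minimal sketch of the bucket computation in Java. The class name is illustrative, and the actual iPhone code would of course be Objective-C:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Map a string to one of 256 buckets using the first two hex digits
// of its MD5 hash. Each bucket name would become the file that holds
// all strings sharing that prefix.
public class Md5Sharder {
    public static String bucketFor(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(s.getBytes(StandardCharsets.UTF_8));
            // The first byte of the digest gives the first two hex digits.
            return String.format("%02x", digest[0]);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }
}
```

    To test membership you would hash the candidate string, load only the matching bucket file, and search that much smaller list.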

  • How can I load pattern matching images into memory?

    I discovered this problem by accident when our network went down. I could not run my VI because the .png file used for pattern matching was on a network drive. My concern it the amount of time that is required to read a network file. Is there a way to load the file into memory when the VI is opened?

    Brian,
    Thank you for contacting National Instruments. For most pattern matching programs, the pattern file should only be read from file once and is then copied to a buffer on the local machine. If you examine your code, or an example program for pattern matching, you should see at least two IMAQ Create VIs called somewhere near the front of your code. This VI basically creates a memory location and most likely will be called once for your pattern image and once for the image you are searching.
    Unless you are specifically calling a File Dialog VI, where you are given a dialog box to open a file, or have hard-coded the file path so that it is read on each iteration of your code, your pattern file should only need to be read at the beginning of your application, thus causing only one file read over the network for that image. Therefore your program most likely already loads the image into memory once it is read and should not be accessing the network constantly.
    Again, I would recommend taking a look at your code to make sure you are not causing a file access every time; then you should be ready to go. In case the network does go down, I would recommend keeping a copy of the image locally and, if you are feeling ambitious, you can have your program read the file locally if the network file returns an error.
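    The advice above is about graphical LabVIEW code, but the underlying pattern (read the file once, cache it in memory, and fall back to a local copy when the network share is unreachable) can be sketched in text form. This is a Java sketch with purely illustrative names, not anything from the IMAQ library:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Supplier;

// Sketch of "read the pattern once, with a local fallback":
// choose() prefers the network copy but falls back to a local copy,
// and CachedPattern reads the bytes only on the first access.
public class CachedPattern {
    private byte[] bytes;
    private final Supplier<byte[]> reader;

    public CachedPattern(Supplier<byte[]> reader) {
        this.reader = reader;
    }

    // Prefer the network copy; fall back when it is unreachable.
    public static Path choose(Path networkCopy, Path localCopy) {
        return Files.exists(networkCopy) ? networkCopy : localCopy;
    }

    // Read once, then serve the in-memory copy on every later call.
    public byte[] get() {
        if (bytes == null) {
            bytes = reader.get();
        }
        return bytes;
    }
}
```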
    Good luck with your application and let us know if we can be of any further assistance.
    Regards,
    Michael
    Applications Engineer
    National Instruments

  • Loading time into memory for a large datastore?

    Are there any analyses or statistics on what the loading time for a TimesTen data store should be, according to the size of the data store?
    We have a problem with one of our clients where loading the datastore into memory takes a long time, but only in certain instances. The maximum size for the data store is set to 8GB (64-bit AIX with 45GB physical memory). Could it be something to do with uncommitted transactions?
    Also, is it advisable to have multiple smaller datastores or one single large datastore?

    When a TimesTen datastore is loaded into memory it has to go through the following steps. If the datastore was shut down (unloaded from memory) cleanly, then the recovery steps essentially are no-ops; if not then they may take a considerable time:
    1. Allocate appropriately sized shared memory segment from the O/S (on some O/S this can take a significant time if the segment is large)
    2. Read the most recent checkpoint file into the shared memory segment from disk. The time for this step depends on the size of the checkpoint file and the sustained read performance of the storage subsystem; a large datastore, slow disks or a lot of I/O contention on the disks can all slow down this step.
    3. Replay all outstanding transaction log files from the point corresponding to the checkpoint until the end of the log stream is reached, then roll back any still-open transactions. If there is a very large amount of log data to replay then this can take quite some time. This step is skipped if the datastore was shut down cleanly.
    4. Any indices that would have been modified during the log replay are dropped and rebuilt. If there are many indices, on large tables, that need to be rebuilt then this step can also take some time. This phase can be done in parallel (see the RecoveryThreads DSN attribute).
    Once these 4 steps have been done the datastore is usable, but if recovery had to be done then we will immediately take a checkpoint which will happen in the background.
    As you can see from the above there are several variables and so it is hard to give general metrics. For a clean restart (no recovery) then the time should be very close to size of datastore divided by disk sustained read rate.
    The best ways to minimise restart times are to (a) ensure that checkpoints are occurring frequently enough and (b) ensure that the datastore(s) are always shutdown cleanly before e.g. stopping the TimesTen main daemon or rebooting the machine.
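    The clean-restart rule of thumb above (datastore size divided by sustained disk read rate) is easy to turn into a quick estimate. The numbers in this sketch are illustrative only, not measurements from any real system:

```java
// Back-of-the-envelope clean-restart estimate: checkpoint size
// divided by sustained disk read rate, as described above.
public class RestartEstimate {
    // datastoreGB: checkpoint size in GB; readMBps: sustained read rate in MB/s
    public static double seconds(double datastoreGB, double readMBps) {
        return (datastoreGB * 1024.0) / readMBps;
    }
}
```

    For example, an 8 GB datastore on storage sustaining 100 MB/s would take a little over 80 seconds for a clean restart; recovery replay and index rebuilds come on top of that.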
    As to whether it is better to have multiple smaller stores or one large one - that depends on several factors.
    - A single large datastore may be more convenient for the application (since all the data is in one place). If the data is split across multiple datastores then transactions cannot span the datastores, and if cross-datastore queries/joins are needed they must be coded in the application.
    - Smaller datastores can be loaded/unloaded/recovered faster than larger datastores but the increased number of datastores could make system management more complex and/or error prone.
    - For very intensive workloads (especially write workloads) on large SMP machines overall better throughput and scalability will be seen from multiple small datastores compared to a single large datastore.
    I hope that helps.
    Chris

  • Question to load the entire database into memory.

    I am planning to load the whole database into memory. Suppose mydb is 10G. Then I plan to set Max Memory to 10G, create a named cache of 10G, and bind mydb to this cache. Is this the best way to load the entire db into memory?
    If the whole db can be loaded into memory, are the procedure cache, the cache for tempdb, and all the other parameters no longer important? Or do I still need to follow common practice when configuring memory parameters?

    Hi Kent,
    12-15GB sounds reasonable.
    I recommend always including your version with your initial posting (unless the version simply doesn't apply to the question), particularly when running an unusual version, and 12.5.x has been end-of-lifed long enough to be unusual now.  Are you running SAP Applications on this system?  If not, please post questions to the SAP Adaptive Server Enterprise (SAP ASE) for Custom Applications space instead; it is the space for general-purpose ASE questions.
    Cheers,
    -bret

  • Why does LabVIEW sometimes hang when DLL loads into memory?

    I'm calling a third-party DLL from LabVIEW 2010.  LV occasionally hangs (Not Responding), either when loading the DLL into memory or when closing my main VI.  When it doesn't hang, it communicates with the DLL seamlessly.  When I try to build an application (exe), LV always hangs during the build at the point where it is saving the main VI (the progress bar in the builder moves until it says "Saving main.vi").  Any insight into what needs to be done to the DLL (or VI) to resolve this issue?

    What does the DLL do? One cause of this could be trying to load/unload other DLLs in PROCESS_ATTACH or PROCESS_DETACH of DllMain. Microsoft has said in many places that doing this is highly unsafe and asking for all kinds of trouble, since DLL loading is not fully reentrant.
    Another possibility would be the incorporation of ActiveX components that use some form of RPC mechanism to communicate with out-of-process ActiveX/OLE components. The necessary RPC proxy hooks into the calling process's message loop, and that is a delicate piece of code in LabVIEW. Even when the DLL does not use ActiveX itself, it might employ some message hooking on its own and mess things up in a way that confuses Windows and/or LabVIEW.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Exporting from Motion Loads Into Memory

    I completed a slideshow in Motion, and am trying to export it out for a DVD.
    First I tried exporting from Motion to DV NTSC format, and after watching my memory & CPU, it appears that Motion is loading the project into memory for export.
    So I quit the process, quit Motion, loaded the Motion project file into compressor, and submitted a job to the Batch monitor to compress from my Motion project to DVD.
    The process is taking 5 hours for a 10-minute piece. The CPUs are running at around 10% and my memory is full. I was hoping Compressor would not load the project into memory and would instead use the CPU to render.
    I suspect my slowdown is from the project paging in and out of memory to compress?
    Does anyone know how to force Compressor to use the CPU for rendering and not load a Motion project file into memory?
    Thank you.

    I do slideshows regularly. I render out of Motion to DV. Clean, fast, slick.
    I pull that DV movie file into DVDSP (if I'm going that direction) and let the defaults do their job. Clean, fast, predictable results. Your output efficiency depends on your image sizes, effects, and nesting, so I carefully reduce the size of my stills, preplan nests, and prerender/reimport where possible.
    I don't see where you are having a problem unless you are simply misinformed. You must render out of Motion, or you must render the .motn project file from within another app like FCP, but it's going to be processed in exactly the same way as if you had rendered out of Motion.
    You CAN set up Qmaster to run batches using all of your Mac's cores as separate rendering engines, but that usually gains you nothing in Motion projects, since you only have one graphics card. And Qmaster, despite a few success stories on that forum, remains a cruel joke.
    bogiesan

  • Sample app for loading TopLink Cache into multiple server instances

    We are looking for a solution for a reporting system for large data size (somewhere around 20G per application). We want to load all data into memory and query from there.
    Does anyone have experience on this? A sample application for loading large caches into multiple physical servers would help a lot.
    Thanks for your help in advance.

    Hi pam,
    > How would multiple app servers help... meaning, what's the most beneficial part of having multiple app servers in such a scenario?
    A good question. The benefit is considerable; in simple words, load balancing. Message transfer will be quicker, system resources are better managed, and with multiple app servers you can assign particular adapters to particular interfaces.
    Regards
    Agasthuri Doss

  • JPA - How to load all data in memory

    Is it possible to load all data into memory using JPA, perform many transactions (create, delete, update), and finally commit when I want?
    I am trying to do something like WebLogic does: the user locks the console, performs many transactions (creates services, deletes accounts, updates customers, etc.), and at the end, when he presses a button, all changes are committed.

    Yes, of course. There are tradeoffs. First, if you load all data into memory, you likely have a small database or a huge amount of RAM. :^)
    But I digress. What you are talking about is either conversational or transactional state. The former would be implemented at your view-controller level (e.g., hold onto the results of the user's selections until the work is done, and then submit them as a batch). Sometimes this is simply session state, but generally you are handling this at the web or controller tier. It is solely a decision to enhance user experience.
    Another possibility is to hold onto the transaction for the whole time that the user is modifying data. This is in some ways "easier" than the first method. However, it likely will not scale beyond a non-trivial amount of users. Generally, you want to hold onto a transaction for the shortest possible time.
    So, my recommendation would be to load your objects into memory using JPA. Keep those in session state (or write them to a 'working copy' database table or the filesystem to save memory). Then submit all the requests in one go back to JPA (in one transaction).
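    The "stage edits in session state, then commit everything in one transaction" approach above can be sketched with a small buffer class. This is a generic sketch, not JPA API; the names are illustrative, and the Consumer stands in for whatever actually persists a change (e.g. an EntityManager call inside the final transaction):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Buffer the user's edits in memory ("conversational state") and
// apply them all at once when the user presses the commit button.
public class ChangeSet<T> {
    private final List<T> pending = new ArrayList<>();

    // Record an edit without persisting it yet.
    public void stage(T change) {
        pending.add(change);
    }

    // Apply every pending edit in order, then clear the buffer.
    // Returns the number of changes applied.
    public int commit(Consumer<T> applier) {
        int applied = 0;
        for (T change : pending) {
            applier.accept(change);
            applied++;
        }
        pending.clear();
        return applied;
    }
}
```

    Keeping the buffer in session state (rather than holding a database transaction open) is what lets this scale to more than a trivial number of users.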
    - Saish

  • Help! Help! Help! Why are DPV and LACS not loading into memory?

    Postalsoft Desktop Mailer: "load into memory" is checked, and it does not load into memory.
    I tried it unchecked and the speed is the same.
    How do I fix this problem?

    Tim,
    Please make sure to post any "classic Firstlogic software" questions under the Business Objects Enterprise Information Management (EIM) forum and you will get a much faster response.  Also you can log a case for support by going to the Help and Support tab on the SAP website, then click on Report A Product Error to log a case for support.  Just make sure to choose BOJ-EIM-COM for the component when logging a case for DeskTop Mailer or Business Edition. 
    Steve is correct.... we are aware of the slow speed issue with address correction when using DPV and LACS in DeskTop Mailer and Business Edition 7.90.  Rev 4 should be out by the end of this week.  This will correct the speed issue.  Please make sure to have the auto update option turned on in the software so you get this update.
    Thanks,
    Kendra
