Best practice for keeping a mail session open in web application?

Hello,
We have a webmail-like application where users log in with their IMAP credentials and are then taken to an authenticated area of the site where they can manage various aspects of their email account.
Right now the application opens and closes a mail store connection (via a new javax.mail.Session) on each page load, based on the currently logged-in user's credentials. To me this seems like bad practice; opening and closing a connection on every page load can't be cheap.
Are there any best practices for this situation? It would be nice to open the connection to the mail server on login and keep it open until the user logs out, the session expires, etc.
I could probably put the javax.mail.Session into the HTTP session, but that seems like it would break Tomcat's clustering functionality. It would be fine as long as the machine the user is on didn't fail, but I'd assume that if they failed over to another node, the mail session would be gone. Maybe keeping the mail session in the HTTP session, checking for a connection, and first attempting to reconnect with the logged-in credentials before giving up would be a possibility?
Any pointers would be appreciated

If you keep the connection open across pages, you're going to need to deal with timeouts, both from the HTTP session and from the mail server.
If you don't keep the connection open, you're going to need to "resynchronize" your view of the store/folder with each operation, in case the folder is modified by another session.
The former is easier in the common cases, especially if you don't care how gracefully you handle failures. The latter is more difficult in the common cases, but handles failure better, and in particular handles clustering better. You'll need to measure it to see if it meets your performance and scalability requirements. You may need to mix the two approaches to get acceptable performance.
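
For the first approach, here is a minimal sketch of what caching the connection in the HTTP session might look like, assuming a servlet container; MailSessionHolder and its fields are hypothetical names for illustration, not part of JavaMail. Note that javax.mail objects are not Serializable, so this state won't replicate across a Tomcat cluster; after a failover the holder simply reconnects on first use, which is essentially the reconnect-before-giving-up idea above.

    import java.util.Properties;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Store;

    // Hypothetical holder stored in the HttpSession; reconnects lazily.
    public class MailSessionHolder {
        private final String host;
        private final String user;
        private final String password;
        private transient Store store; // not Serializable, won't survive failover

        public MailSessionHolder(String host, String user, String password) {
            this.host = host;
            this.user = user;
            this.password = password;
        }

        // Returns a connected Store, reconnecting if the server timed us out.
        public synchronized Store getStore() throws MessagingException {
            if (store == null || !store.isConnected()) {
                store = Session.getInstance(new Properties()).getStore("imap");
                store.connect(host, user, password);
            }
            return store;
        }

        // Call this from HttpSessionListener.sessionDestroyed() so that both
        // logout and session expiry release the IMAP connection.
        public synchronized void close() {
            try {
                if (store != null && store.isConnected()) {
                    store.close();
                }
            } catch (MessagingException ignored) {
            }
            store = null;
        }
    }

On login you would do request.getSession().setAttribute("mailHolder", new MailSessionHolder(host, user, password)); each page then calls getStore() instead of building a new javax.mail.Session.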

Similar Messages

  • Best practices for speeding up Mail with large numbers of mail?

    I have over 100,000 mails going back about 7 years in multiple accounts in dozens of folders using up nearly 3GB of disk space.
    Things are starting to drag - particularly when it comes to opening folders.
I suspect the main problem is having large numbers of mails in the folders that are slowest - maybe a few thousand at a time or more.
What are some best practices for dealing with very large amounts of mail?
Are smart mailboxes faster to deal with? I would think they would be slower, because the original emails would tend not to get filed as often, leading to even larger mailboxes. And searching takes a long time, doesn't it?
    Are there utilities for auto-filing messages in large mailboxes to, say, divide them up by month to make the mailboxes smaller? Would that speed things up?
    Or what about moving older messages out of mail to a database where they are still searchable but not weighing down on Mail itself?
    Suggestions are welcome!
    Thanks!
    doug

Smart mailboxes obviously cannot be any faster than real mailboxes, and storing large amounts of mail in a single mailbox is asking for trouble. Rather than organizing mail in mailboxes by month, however, what I like to do is organize it by year, with subfolders by topic for each year. You may also want to take a look at the following article:
    http://www.hawkwings.net/2006/08/21/can-mailapp-cope-with-heavy-loads/
    That said, it could be that you need to re-create the index, which you can do as follows:
    1. Quit Mail if it’s running.
    2. In the Finder, go to ~/Library/Mail/. Make a backup copy of this folder, just in case something goes wrong, e.g. by dragging it to the Desktop while holding the Option (Alt) key down. This is where all your mail is stored.
    3. Locate Envelope Index and move it to the Trash. If you see an Envelope Index-journal file there, delete it as well.
    4. Move any “IMAP-”, “Mac-”, or “Exchange-” account folders to the Trash. Note that you can do this with IMAP-type accounts because they store mail on the server and Mail can easily re-create them. DON’T trash any “POP-” account folders, as that would cause all mail stored there to be lost.
    5. Open Mail. It will tell you that your mail needs to be “imported”. Click Continue and Mail will proceed to re-create Envelope Index -- Mail says it’s “importing”, but it just re-creates the index if the mailboxes are already in Mail 2.x format.
    6. As a side effect of having removed the IMAP account folders, those accounts may be in an “offline” state now. Do Mailbox > Go Online to bring them back online.
Note: For those not familiar with the ~/ notation, it refers to the user’s home folder, i.e. ~/Library is the Library folder within the user’s home folder.

  • Best Practices for Keeping Library Size Minimal

My library file sizes are getting out of control. I have multiple libraries that are now over 3TB in size!
    My question is, what are the best practices in keeping these to a manageable size?
I am using FCPX 10.2. I have three cameras (2x Sony Handycam PJ540 and 1x GoPro Hero 4 Black).
    When I import my PJ540 videos they are imported as compressed mp4 files. I have always chosen to transcode the videos when I import them. Obviously this is why my library sizes are getting out of hand. How do people deal with this? Do they simply import the videos and choose not to transcode them and only transcode them when they need to work on them? Do they leave the files compressed and work with them that way and then transcoding happens when exporting your final project?
    How do you deal with this?
As for getting my existing library sizes down, should I just "show package contents" on my libraries and start deleting the transcoded versions, or is there a safer way to do this within FCPX?
Thank you in advance for your help.

No. Video isn't compressed like compressing a document. When you compress a document you're just packing the data more tightly. When you compress video you do it basically by throwing information away. Once a video file is compressed, and all video files are heavily compressed in the camera, that's not recoverable. That information is gone. The best you can do is convert it into a format that will not deteriorate further as the file is recompressed. Every time you compress a file, especially to heavily compressed formats, more and more information is discarded. The more you do this, the worse the image gets. Transcoding converts the media to a high-resolution, high-data-rate format that can be manipulated considerably without loss and can go through multiple generations without loss. You can't go to second-generation H.264 MPEG-4 without discernible loss in quality.

  • Best Practice for enhancing the SAP delivered standard WD ABAP application

    Hi,
I am new to WebDynpro ABAP.
I need to enhance an SAP-delivered standard WebDynpro component (a complex component with business objects & POWL).
Kindly let me know the best practice for enhancing the standard WD ABAP component, from the two options below:
1) copy & create a "Z" version of the component & make the changes in that, or
2) enhance the same standard component directly, without making a "Z" copy.
    Regards,
    NS

    Hi NS,
If it is a standard component, it is better to enhance the component rather than copy it into a Z component.
If there is any issue within a standard component, SAP supports it through notes and OSS messages. If it is a Z component, SAP doesn't support it.
If there is an upgrade of the business packages, the changes will be applied to the standard component but not to Z components, where we could miss them.
Further, since a standard component may be used in many places, reflecting changes everywhere would be difficult with a Z component.
    Regards,
    Harsha

  • Best Practice for Keeping Imported Music Organized? - Help Please

I download tunes and am looking at the best way to keep iTunes up to date with new downloads (as opposed to CD rips). Am I best to have all files in one overall folder, and then just use iTunes to import the files from whatever folder they reside in (after the download), copying them into iTunes and deleting the originals? To date I've been keeping them scattered across drives and just using iTunes as the pointer to the originals, but now I'm losing track of what has been imported and what hasn't. Perhaps the above is a better way, and if so, is it best to delete iTunes, reinstall with an empty database, import everything, and then delete the files from their original locations? Help and thanks in advance!!
    Porsche216

I import similar to the way you described: import, then delete the original. I keep a folder just for importing. iTunes has a function for consolidating the library; it will copy files from the multiple locations to the iTunes library location. You would then have to delete the originals to free up space.

  • ES2 best practice for how much stuff should be in one application?

I'm wondering if there is a best practice / recommended maximum for the number of forms/processes/etc. that you should have contained within one application in ES2?  I have an application which has about 5 processes and over 300 xdp forms.  When "deploying" the application it takes probably 5 minutes or longer.  It seems to be working fine, but I'm curious whether this will cause any problems and whether there is a recommended threshold?

I don't think there is a limit on the number of processes & forms to be used within an application.
However, there is a recommendation not to have more than 20 variables in a single process.
Each process created within your application will become a service, so it doesn't matter whether you have 500 processes in one application or 50 processes in 10 applications: you will end up with 500 services deployed into the Java runtime.
The form count also doesn't matter, as forms just stay within the repository (not in the Java runtime).
The only issue with enormous resources within an application is the time it takes to deploy to the application server (which you already mentioned here).
So, if you can split your resources into manageable units, that will reduce your check-in/deploy time.
    Nith

  • Best practice for infoview and which folder to save webi or crystal reports

    All,
I was wondering what your thoughts are about the following question. Imagine you have a customer using webi reports and crystal reports against BW at the same time.
The thing is that he is transporting the crystal reports through SAP, using the rsadmin transaction to manage his crystal reports, but also using the SAP transport to move them to PROD. As for webi, he is using the import wizard to move them to PROD.
As you know, the crystal reports will end up in an SAP folder, something such as SAP/(description of the menu role).
Webi reports, meanwhile, end up inside the public folder.
The question is: what would be the best practice?
1 – store all your crystal reports against BW in the SAP menu roles (as they end up there through the SAP transport) and keep the webi reports inside the public folder?
2 – copy your webi reports from the public folder to the SAP/(menu role) folder where my crystal reports are?
3 – copy your crystal reports from the SAP/(menu role) folder to the public folder?
Let me know what your feeling is as best practice.
    Thank you
    Philippe

    Just a hint:
The path SAP/2.0 is not mandatory. You can configure the SAP BW publisher on the BW side (transaction /CRYSTAL/RPTADMIN) so that your reports will be stored in another folder on the BOE side. Please note that the addition of the role name in the path cannot be overridden.
    Regards,
    Stratos

  • Best practice for keeping battery life optimal

    I am in a job role where my usage of my iPhone 4S is best described as:
    Weekdays: long call (60+ minutes) after long call (60+ minutes), etc. but MOSTLY voice calls.
    and
    Evenings/Nights:  Use my phone occasionally, mostly apps/data/entertainment type of use.
    So far, I tend to keep my phone plugged in while I'm on these long conference calls, unplug when I need to step out, or step away from my desk, but then plug back in for the next call, etc.
    I plug in at night to start a fresh, 100% full battery the next day.
Should I do anything differently to get the best life (long term) out of my battery? Should I not keep it plugged in while using it?
    Any suggestions would be appreciated.

    http://www.apple.com/batteries/iphone.html
I have to admit I never gave special attention to the batteries on my 3 iPods/iPhones and 2 MacBook Pros, and the batteries lasted for 3-4 years (even 5 on my first MBP).

  • Best practice for opening/closing JDBC conection

I've written a program that accesses a database, and I'd like to know the best practice for opening and closing that connection. For example, should I use a try {} finally {} block:
try {
    // load driver
    // create connection
    // create statement object
    // sql statement to execute
    // execute statement
} finally {
    // close connection
}
Or should I split the code into separate methods, maybe an init() method that loads the driver and makes the connection, an execute() method that creates the statement and executes it, and finally a cleanUp() method that closes the connection?
    So, which would be the best way to do this?

Hello,
    your idea seems OK to me. However, there are a couple of points to consider:
    1. Do you just want to execute one SQL query? Or will your program execute several? If the latter case is possible, then you do not want to close your connection between queries. Opening and closing a connection to a database takes 'a lot of time', relatively speaking. In this case, it is better to save the connection and reuse it every time that you need it.
2. Do not forget to close the statements and result sets that you create or get back from JDBC. Depending on the database you are using (e.g. Oracle), you can run out of cursors, and your application will stop.
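
To make both points concrete, here is a minimal sketch on a current JDK using try-with-resources, which closes the ResultSet, Statement and Connection in the right order even when an exception is thrown; the URL, credentials and query are placeholders. (Explicit Class.forName() driver loading is no longer needed with JDBC 4+ drivers.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class QueryExample {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder
            // Resources declared here are closed automatically, in reverse
            // order, even if executeQuery() or rs.next() throws.
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }

If you reuse one connection across many queries, as suggested in point 1, keep only the Connection open and still wrap each Statement/ResultSet in its own try-with-resources (or finally block) so cursors are released promptly.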

  • Best Practice for mail connectors

I am setting up connectors for Exchange and GroupWise and am testing auto-provisioning. I am running into an issue where the GroupWise connector attempts to provision before the eDirectory account is established. Is there a best practice for avoiding this? Should I add a hidden OIM field that populates when eDirectory is provisioned correctly and put a rule in place to catch it and then provision GroupWise, or is there an easier way?
    rkimbal45

You can add a task to the email resource's Create User process that validates that the directory account exists. Set the retry counter so that if your email connector runs first, it will not continue; when the retry happens, it will see that the account exists, and then your next tasks can trigger. Just create it as an unconditional task and set it as preceding to your Create User task. This will keep the Create User task in a pending state until your check completes.
    -Kevin

  • Best Practices for Using Photoshop (and Computing in General)

I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop on a computer, and for conscientious computing in general.  I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
Use an Uninterruptible Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned-out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to avoid a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and if/when you have a failure or loss of data you'll be able to restore from it.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best practice for RDGW placement in RDS 2012 R2 deployment

    Hi,
I have been setting up an RDS 2012 R2 farm deployment and the time has come for setting up the RDGW servers. I have a farm with 4 Session Host (SH) servers, 2 Web Access (WA) servers, 2 Connection Broker (CB) servers and 1 licensing server (LS).
    Farm works great for LAN and VPN users.
    Now i want to add two domain joined RDGW servers.
The question is: I've read a lot on TechNet and different sites about how to set the thing up, but no one mentions any best practices for where to place them.
Should I:
- set up WAP in my DMZ with ADFS in the LAN, then place the RDGW in the LAN and reverse proxy inbound traffic
- place the RDGW in the DMZ, opening all the required ports into the LAN
- place the RDGW in the LAN, then port forward port 443 to it from the internet
    Any help is greatly appreciated.
    This posting is provided "AS IS" with no warranties or guarantees and confers no rights

    Hi,
The deployment depends entirely on your company's requirements, as many things need to be taken care of, such as hardware, network, security and other related matters. Personally, to set up an RD Gateway server, I would not recommend the 1st option. As per my research, for best results you can use option 2 (place the RDG server in the DMZ and then allow the required ports), because that way the outside network can't directly connect to your internal servers and it is more difficult for attackers to break into the network. A perimeter network (DMZ) is a small network that is set up separately from an organization's private network and the Internet. In a network, the hosts most vulnerable to attack are those that provide services to users outside of the LAN, such as e-mail, web, RD Gateway, RD Web Access and DNS servers. Because of the increased potential of these hosts being compromised, they are placed into their own sub-network, called a perimeter network, in order to protect the rest of the network if an intruder were to succeed. You can refer to the article beneath for more information.
RD Gateway deployment in a perimeter network & Firewall rules
http://blogs.msdn.com/b/rds/archive/2009/07/31/rd-gateway-deployment-in-a-perimeter-network-firewall-rules.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type. This type includes alarms like NCS_DOWN and "PI database is down". I am trying to understand what the difference is between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Doing "NCS stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged in to the shell as root.
    Thanks guys!

Amihan_Zerrudo wrote:
1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
You should rather look to the functional requirements instead of the costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do it so. If it just needs to be request scoped (e.g. single page form data), then keep it request scoped.
2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it strain my server resources? Right now I am using the Sun Glassfish Server.
It will certainly eat resources. Just supply enough CPU speed and memory to the server. You cannot expect that a webserver running on a Pentium 500MHz with 256MB of memory can flawlessly serve 100 simultaneous users in the same second. But you may expect that it can serve 100 users per 24 hours.
3.) Can you suggest best practice in memory management given the architecture I described above?
Just write code so that it doesn't unnecessarily eat memory. Only allocate memory if your application needs to do so. You should rather let the hardware depend on the application requirements, not let the application depend on the hardware specs.
4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that, or will I have to purchase a powerful server?
Glassfish is just application server software; it is not server hardware. Your concerns are rather hardware related.
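
As a rough illustration of the scope trade-off in point 1, here is a hypothetical servlet-side sketch (the attribute names and values are made up); request attributes die with the request, while session attributes occupy heap until the session times out:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Servlet equivalent of <jsp:useBean scope="request"> vs scope="session".
    public class ScopeExampleServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Request scope: lives for this one page view only (e.g. form
            // data); eligible for garbage collection once the response is sent.
            req.setAttribute("formData", "single page form data");

            // Session scope: lives until logout or session timeout (e.g. the
            // logged-in user); with many concurrent users this memory adds up.
            req.getSession().setAttribute("loggedInUser", "Amihan_Zerrudo");

            resp.getWriter().println("scopes set");
        }
    }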

  • Tips and best practices for translating C into LabVIEW? SERIOUS newbie...

    I need to translate a C function into LabVIEW.  This will be my *first* LabVIEW project.  I've been reading some tutorials, and I'm still struggling to get my brain out of "C/C++ mode" and learn the LabVIEW paradigms.
    Structurally, the function that I need to translate gets called from a while-loop and performs a bunch of mathematical calculations. 
    The basic layout is something like this (this obviously isn't the actual code, it just illustrates the general flow control and techniques that it uses).
    struct Params {
        // About 20 int and float parameters
        int   someParam;
        float someOtherParam;
        /* ... */
    };

    int CalculateMetrics(struct Params *pParams,
                         float input1, float input2 /* [etc] */)
    {
        int errorCode = 0;
        float metric1;
        float metric2;
        float metric3;

        // Do some math like:
        metric1 = input1 * (pParams->someParam - 5);
        metric2 = metric1 + (input2 / pParams->someOtherParam);
        // Tons more simple math
        // A couple of for-loops

        if (metric1 < metric2) {
            // manipulate metric1 somehow
        } else {
            // set some kind of error code
            errorCode = ...;
        }

        if (!errorCode)
            metric3 = metric1 + pow(metric2, 3);

        // More math...
        // etc...
        // update some external global metrics variables
        return errorCode;
    }
    I'm still too green to understand whether or not a function like this can translate cleanly from C to LabVIEW, or whether the LabVIEW version will have significant structural differences. 
    Are there any general tips or "best practices" for this kind of task?
    Here are some more specific questions:
Most of the LabVIEW examples that I've seen (at least at the beginner level) seem to rely heavily on using the front panel controls to provide inputs to functions.  How do I build a VI where the input arguments (input1, input2, etc.) come in as numbers and aren't tied to dials or buttons on the front panel?
    The structure of the C function seems to rely heavily on the use of stack variables like metric1 and metric2 in order to perform calculations.  It seems like creating temporary "stack" variables in LabVIEW is possible, but frowned upon.  Is it possible to keep this general structure in the LabVIEW VI without making the code a mess?
    Thanks guys!

    There's already a couple of good answers, but to add to #1:
    You're clearly looking for a typical C-function. Any VI that doesn't require front panel opening (user interaction) can be such a function.
    If the front panel is never opened the controls are merely used to send data to the VI, much like (identical to) the declaration of a C-function. The indicators can/will be return values.
Choosing which controls and indicators send data in and out of a VI is almost too easy: click the icon of the front panel (top right), show the connector, and click which control/indicator goes where. Done. That's your function's declaration.
Basically one function is one VI, although you might want to split it even further; don't create 3k*3k pixel diagrams.
Depending on the amount of calculation done in your if-thens, they might be sub-VIs of their own.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries it is possible to sync with the same logical library from multiple computers. Each library has an internal ID and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
I have my library cloned to a small 1TB USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch, just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library, and I can update my devices whenever I can connect to the same build of iTunes.
When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2
