What is the best way to optimize database resources in a JSP-centered webapp?

Hi, I am kind of new to JSP, so I make database connections on every page. Assuming I am working on an app where there could be 300 concurrent users, what is the best approach for me to take?
thanks
obinna

java_everywhere wrote:
"I am kind of new to JSP, so I make database connections on every page."
JSP shouldn't have anything to do with database access. In any case, you shouldn't be connecting on every page either; you should be recycling connections via a pool.
"Assuming I am working on an app where there could be 300 concurrent users, what is the best approach for me to take?"
You will have to decide that for yourself, because we don't know what your app does, how much hardware you have, your network latency, etc.
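
For context, the usual shape of the fix is a container-managed connection pool looked up through JNDI, so pages borrow and return connections instead of opening new ones. A minimal sketch, assuming a pool registered as jdbc/MyDataSource and a hypothetical users table (neither comes from this thread):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class UserDao {
        private final DataSource dataSource;

        public UserDao() throws NamingException {
            // Look up the pool the container configured; the name must match
            // the <resource-ref> in web.xml / the server's pool definition.
            InitialContext ctx = new InitialContext();
            this.dataSource = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDataSource");
        }

        public String findUserName(int id) throws SQLException {
            // getConnection() borrows a pooled connection; close() returns it
            // to the pool rather than tearing it down.
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }

With 300 concurrent users, the pool size, not the user count, bounds the number of open connections, which is exactly the resource control the reply above is pointing at.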

Similar Messages

  • What is the best way to optimize a SQL query: call a function or do a join?

    Hi, I want to know what is the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?

    Hi,
    If you're even considering a join, then it will probably be faster.  As Justin said, it depends on lots of factors.
    A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
    You might still choose a user-defined function even though you could get the same result with a join: you accept that the function is slower, but decide that its convenience is more important than the better performance in that particular case.
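
    To make the contrast concrete, here is a rough JDBC sketch of the two formulations; the tables (orders, customers) and the scalar function get_customer_name are invented for illustration:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class JoinVsFunction {
            // Join version: the optimizer sees both tables at once and can
            // pick a hash/merge/nested-loop plan for the whole result set.
            static final String WITH_JOIN =
                "SELECT o.id, c.name "
                + "FROM orders o JOIN customers c ON c.id = o.customer_id";

            // Function version: the lookup inside get_customer_name() runs
            // once per row and is largely opaque to the optimizer.
            static final String WITH_FUNCTION =
                "SELECT o.id, get_customer_name(o.customer_id) FROM orders o";

            static void dump(Connection con, String sql) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(sql);
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
                    }
                }
            }
        }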

  • What is the best way to include an xml file in JSP?

    I have a jsp page that I need to include an xml file. The xml file
    uses an xsl to render the file. What is the best way to include the
    xml file and still maintain the structure of the style sheet?
    Thanks
    Jennifer

    The best way is using the tag lib. If you cannot, but you can use JAXP, you can try
    javax.xml.transform.Transformer.transform(Source xmlSource, Result outputTarget)
    throws TransformerException
    You construct the Transformer with your xsl, use your xml file or DOM to form the xmlSource,
    and use the JspWriter "out" to form the outputTarget (a StreamResult). But if your JSP page
    generates the xml itself, the tag lib is the only way.
    [email protected] (Jennifer) wrote:
    > Or is there a way to parse the xml file with the jsp page to display
    > the information? I cannot use the Java Standard Tag Libraries, as the
    > version of iPlanet we are running does not support the JSTL.
    >
    > Thanks,
    > Jennifer
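
    For reference, a minimal sketch of the Transformer approach described in the reply above, written as a JSP scriptlet; the .xsl and .xml paths are placeholders, not Jennifer's real files:

        <%@ page import="javax.xml.transform.*, javax.xml.transform.stream.*" %>
        <%
            try {
                // Build a Transformer from the stylesheet.
                TransformerFactory factory = TransformerFactory.newInstance();
                Transformer transformer = factory.newTransformer(
                        new StreamSource(application.getRealPath("/WEB-INF/render.xsl")));

                // Transform the XML file straight into this page's JspWriter,
                // wrapped as a StreamResult, so the rendered markup is inlined.
                transformer.transform(
                        new StreamSource(application.getRealPath("/data/page.xml")),
                        new StreamResult(out));
            } catch (TransformerException e) {
                throw new ServletException(e);
            }
        %>

    Because this uses only JAXP, it should work even where the JSTL is unavailable, which matches the iPlanet constraint above.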
              

  • What is the best way to copy a database from overseas?

    Hello All,
    As a customer needs to migrate their databases (200 of them) to our data center, we usually ask the customer to send a full backup by HDD and then send trn files via FTP; we then restore the full backup and apply the latest trn files until cutover. However,
    FTP is not stable, and the trn files are often broken.
    The database sizes are 3 GB to 150 GB and the trn file sizes are 1 MB to 5 GB; the big trn files in particular fail every time. We have set the transfer mode to Binary in the FTP client, but it still fails.
    I have VPN access to their database server, so in this case is there a better way to copy/migrate their databases to our server?
    Customer using Windows 2003 + SqlServer 2008
    We using Windows 2012 R2 + SqlServer 2014
    Appreciate your suggestions/advice.
    Thanks,
    Albert

    You are doing it the best way. I would also ship the tlogs via courier if time permits. Note that the tlogs get very large during maintenance operations. If you continually ship the tlogs via log shipping and stagger the cutover, you may
    be able to do this in waves.
    With this many databases, this sort of migration is difficult without downtime.

  • What's the best way to sample multiple AI's with different sampling rates under one task?

    I'm using a PCI-6221 card and CVI 7.1.
    I have a tri-axis vibration sensor and two other pressure transducers.
    I want to take 10k samples from each vibration axis at 80 kHz. This is
    possible by configuring the scan rate of the "vibration task" to 240
    kHz. (The card maximum is 250 kHz).
    I want to take 1k samples from each pressure transducer at 250 kHz. This happens very infrequently.
    Each measurement is required by a separate task.
    I thought I could do this by setting up three finite tasks (1
    vibration, 2 pressure), but DAQmx won't let me run more than one AI
    task at a time. I've read other posts here, and I realize I have to
    add/remove physical channels on-demand.
    What is the best way to optimize this setup so that I'm not hogging up system resources?
    Should I do the following?
    1. Stop the task
    2. Remove the vibration channels from the task
    3. Add in a pressure channel
    4. Configure the pressure channel
    5. Start the task
    6. Take the pressure samples
    7. Stop the task
    8. Remove the pressure channel from the task
    9. Add in the vibration channels
    10. Configure the vibration channels
    11. Start the task
    Also, the vibration portion is running in finite mode, but I'm looping
    it. Should I switch it to continuous and use "DAQmxReadAnalogF64"
    to sample the latest 10k samples? (If the task is continuous, would I
    pull the latest 10k samples, or would I pull some old buffered samples
    instead?)
    Thank you,
    Nobody

    Hello Nobody,
    If you configure your task timing to acquire a finite number of
    samples, then you can only read the number of samples that you
    specified in your configuration.  Once you try to read more
    samples, you will receive the error you are seeing.
    If you configure your task timing for continuous acquisition, then you
    can read samples indefinitely.  Any given DAQmx Read will read the
    oldest unread samples in the buffer.
    If you are going to be switching between different tasks, you will definitely need to stop one before you start the other one.
    I hope this helps!
    Eric
    DE For Life!

  • What is the best way to copy an Aperture library onto an external hard drive? I am getting a message that says "There was an error opening the database. The library could not be opened because the file system of the library's volume is unsupported".

    What is the best way to copy an Aperture library onto an external hard drive? I am getting a message that says "There was an error opening the database. The library could not be opened because the file system of the library's volume is unsupported". What does that mean? I am trying to drag libraries (with metadata) to an external HD, and I'm wondering what the best way to do that is.

    Kirby Krieger wrote:
    Hi Shane. Not much in the way of thoughts, but FWIW:
    How is the drive attached?
    Can you open large files on the drive with other programs?
    Are you running any drive compression or acceleration programs (some drives arrive with these installed)?
    Can you reformat the drive and try again?
    Hi Kirby,
    I attached the UltraMax Plus with a USB cable. The UltraMax powers the cable, so power is not an issue. I can open other files. Also, there are 500 GB of files on the drive, so I cannot reformat it. I did note that I could import the entire Aperture library, but I do not want to create a duplicate on my machine, because that would defeat the purpose of the external drive.
    Thanks,
    Shane

  • What is the best way to create a database schema from XML

    What is the best way to create a database schema from XML?
    I have a complex XML file that I want to create a database from, and I want to consistently import new XML files of the same schema type. Currently I have started off by mapping the XSD into Excel and using MySQL for Excel to push into MySQL.
    There must be a more .NET/Microsoft-native solution for this, but I cannot locate the topic and tools by searching. What are the best tools, and what is the best way to manage this?
    Taking my C# further

    Hi Saythj,
    When you mention "a database schema from XML", do you mean
    XML Schema Collections? If so, when trying to import XML files of the same schema type, you may take the approach below.
    Create an XML Schema Collection based on your complex XML; you can find
    many generating tools online to do that.
    Create a table with an XML column typed with the schema collection created above, as below.
    CREATE TABLE yourTable( Col1 int, Col2 xml (yourXMLSchemaCollection))
    Load your XML files and insert the XML content into the table above from C# or some other approach. XMLs that cannot pass validation will fail to insert into that table.
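
    If a Java route counts as "some other approach", a rough JDBC sketch of that load step might look like this; the connection details and file name are placeholders, and the table matches the CREATE TABLE above:

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class XmlLoader {
            public static void main(String[] args) throws Exception {
                String xml = new String(
                        Files.readAllBytes(Paths.get("input.xml")), StandardCharsets.UTF_8);
                try (Connection con = DriverManager.getConnection(
                             "jdbc:sqlserver://localhost;databaseName=YourDb;user=u;password=p");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO yourTable (Col1, Col2) VALUES (?, ?)")) {
                    ps.setInt(1, 1);
                    // SQL Server validates the value against yourXMLSchemaCollection;
                    // a document that fails validation makes the INSERT throw.
                    ps.setString(2, xml);
                    ps.executeUpdate();
                } catch (SQLException e) {
                    System.err.println("Insert rejected (likely schema validation): "
                            + e.getMessage());
                }
            }
        }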
    If you have any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • What is the best way to match back 3rd party vendor data to our SQL Server Database?

    So we have this 3rd party data that we need to match back to our database. We have determined that the "ID" column the 3rd party sends back to us is a concatenated key of our member's SSN, Gender, and CCYYMMDD Birthdate. In 90% of the
    cases, we can match back on this. However, for the other 10% we have to try a couple of different ways: using our Member #, or using what is called a HFCA #.
    We are talking about 10s and 20s of records here...NOT thousands.
    What is the best way to handle this via SSIS? A SQL Server stored procedure to cursor through the 3rd party data, or multiple INSERT-SELECT statements trying to marry back the data? My thought process was to cursor through each record, try to match on our
    90% criterion, determine whether we have a match, and if we do not, then try our other means. Should I SELECT 1 to see which matching criteria to go with? So, in other words, for the first match...
    IF EXISTS(SELECT 1 FROM TableName WHERE ColumnName1 = .....) BEGIN....ELSE...Blah Blah Blah
    or simply continue doing INSERT-SELECTs...
    I guess I am asking about the efficiency of using a cursor within a SQL Server stored procedure here.
    Thanks for your review and am hopeful for a reply.

    You are asking an SSIS question but posted in TSQL; which is it? Before you go further, which matching logic should have priority: Member #, or SSN/gender/birthdate? Note that the priority does not depend on the matching success percentage.
    In other words, you may prefer to match on member # first (even though it has a lower success ratio, it has a higher confidence ratio), followed by SSN, followed by whatever.
    In any case, this sounds much more like an SSIS logic issue. Your questions regarding cursors and stored procedures seem premature at this point. OTOH, it may depend on what you are actually trying to accomplish.

  • I would like to put the results of a proc on a local database into a table in my azure db. What is the best way to do this?

    I would like to put the results of a proc on a local database into a table in my Azure DB. What is the best way to do this? I don't see the ability to link. The local DB is on my desktop and is mssql12.
    McC

    I am not sure if I set up the linked server correctly. Here is the schema and an attempt to run a select, with the resulting error:
    SELECT  *
    FROM  [CN5E6E9LM2.DATABASE.WINDOWS.NET,1433].[Mkerr_db].dbo.addr
    OLE DB provider "SQLNCLI11" for linked server "CN5E6E9LM2.DATABASE.WINDOWS.NET,1433" returned message "Unspecified error".
    Msg 40515, Level 16, State 2, Line 1
    Reference to database and/or server name in 'Mkerr_db.sys.sp_tables_info_90_rowset_64' is not supported in this version of SQL Server.
    McC

  • What is the best way to import a full database?

    Hello,
    Can anyone tell me the best way to import a full database called TEST into an existing database called DEV1?
    When importing into an existing database, do you have to drop the existing users (say, the pinfo and tinfo schemas that are there)? Do I have to drop these and recreate them, or how will it work when you import a full database?
    Could you please give step-by-step instructions?
    Thanks a lot...

    Nayab,
    http://youngcow.net/doc/oracle10g/backup.102/b14191/rcmdupdb005.htm
    A suggestion: please don't use external sites that host the Oracle docs, since there is no assurance that they update their content with the latest corrections. You can see the updated part number on the actual doc site from Oracle:
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#i1009381
    Aman....

  • What is the best way to drop and recreate a Primary Key in the Replication Table?

    I have a requirement to drop and recreate a primary key in a table which is part of transactional replication. What is the best way to do it, other than removing the table from replication and adding it again?
    Thanks
    Swapna

    Hi Swapna,
    Unfortunately you cannot drop columns used in a primary key from articles in transactional replication.  This is covered in
    Make Schema Changes on Publication Databases:
    You cannot drop columns used in a primary key from articles in transactional publications, because they are used by replication.
    You will need to drop the article from the publication, drop and recreate the primary key, and add the article back into the publication.
    To avoid having to send a snapshot down to the subscriber(s), you could specify the option 'replication support only' for the subscription. This would require the primary key to be modified at the subscriber as well, prior to adding the article back in,
    and it should be done during a maintenance window when no activity is occurring on the published tables.
    I suggest testing this out in your test environment first, prior to deploying to production.
    Brandon Williams

  • What is the best way to submit a Concurrent Request over a DB Link?

    Hi,
    We have a requirement to submit a Concurrent Request over a DB Link. What is the best way to do this?
    What I've done so far is create a function in the EBS instance that executes FND_GLOBAL.APPS_INITIALIZE and submits the Concurrent Request. I then call this function remotely from our non-EBS database. It seems to work fine, but I found out from MetaLink article ID 466800.1 that this is not recommended.
    Why are Concurrent Programs Calling FND_GLOBAL.APPS_INITIALIZE Using DBLinks Failing? [ID 466800.1]
    https://support.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?_afrLoop=11129815723825&type=DOCUMENT&id=466800.1&displayIndex=1&_afrWindowMode=0&_adf.ctrl-state=17dodl8lyp_108
    Can anyone suggest a better approach?
    Thanks,
    Allen

    "Can anyone suggest a better approach?"
    Please log an SR and ask Oracle Support for any better (alternative) approach. You can mention in the SR that your approach works properly, and ask what the implications of using it would be (even though it is not recommended).
    Thanks,
    Hussein

  • What is the best way to set up Facetime if using multiple computers with one apple ID?

    I currently have FaceTime set up on my iPad 2 using my normal Apple ID, but I have just recently upgraded our iMac from Leopard to Snow Leopard and have added FaceTime to that computer as well. So my question is this: if I want to avoid confusion about which device is called when someone calls us using FaceTime, what is the best way to distinguish the devices? Should I try to use a different email address to reach the iMac? Is there a best-known method for this?

    That's a nice system Kevin, and it will work very nicely with Photoshop. I do take it that you have 16 GB of RAM in total?
    A 250 GB SSD is a good size, but you can still run short, and that will affect Windows performance. When you get your system, install WinDirStat, which gives you a graphic display of everything on your drive. Clicking on any of the large areas will tell you what and where they are, so you can think about moving cache folders etc. to one of the HDDs.
    Leave the Pagefile.sys on the boot drive. Think about disabling Hibernate, as it takes a ton of space and too often crashes on wake-up.
    My Documents
    Desktop
    Downloads
    Look at Bridge cache
    iTunes backup
    Other stuff like that.
    Think about another 500 GB drive just for Photoshop scratch. Drives are cheap as chips nowadays.
    Do yourself a favour and invest $100 in ShadowProtect (or similar, if there is such a thing). SP saves incremental backups every 15 minutes (you can set the interval, but it has no impact on performance with a system like yours). If you have a problem, you can mount the backup at any of those 15-minute points and open files from it. You can also make a bootable DVD image of your C drive, and be back up and running five minutes after disaster strikes.
    Optimize Performance in Photoshop
    Photoshop CC and CC 2014 GPU FAQ
    For more ideas, swing by the Premiere Pro Hardware forum. Those guys are seriously good at this stuff, and you'll find links, tips, and ideas.
    Happy computing, and have fun with your Creative Cloud® apps.

  • What's the best way to handle this?

    I'm not sure what APIs/setup to use for this situation:
    A company wants to store data projects they do for clients. Each year, the data fields are set (as a result of gov't requirements) and they won't change for any client project for that year. However, the required fields can (and usually do) change every year: things they require this year might not be needed the next year, and new fields might be introduced.
    While there are likely to be many common fields from year to year, there's no way to guarantee which ones will remain consistent. They also want to be able to do searches on the data and fields, for projects within a year and across years.
    What's the best framework/API/configuration to handle this? EJB? Simple JDBC? If so, how should the database be handled? Won't it have to constantly create new fields in a table? Or is there another way to handle this?
    What's the best way from a "clean architecture" standpoint?
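
    One common way to avoid altering the table every year is an attribute-value layout, where each year's required fields live in rows rather than columns. A rough JDBC sketch of the idea; the table and column names are invented, and this is only one option, not the thread's verdict:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class ProjectFieldStore {
            // Schema sketch (run once):
            //   CREATE TABLE project_field (
            //       project_id  INT NOT NULL,
            //       field_year  INT NOT NULL,
            //       field_name  VARCHAR(64) NOT NULL,
            //       field_value VARCHAR(255),
            //       PRIMARY KEY (project_id, field_year, field_name)
            //   )
            // A new required field in a later year is just a new field_name
            // value; no ALTER TABLE is needed, and cross-year searches become
            // WHERE field_name = ? (AND field_year = ?) filters.

            public void saveField(Connection con, int projectId, int year,
                                  String name, String value) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO project_field "
                        + "(project_id, field_year, field_name, field_value) "
                        + "VALUES (?, ?, ?, ?)")) {
                    ps.setInt(1, projectId);
                    ps.setInt(2, year);
                    ps.setString(3, name);
                    ps.setString(4, value);
                    ps.executeUpdate();
                }
            }
        }

    The trade-off is weaker typing and clumsier queries than a column-per-field design, so the choice hinges on how much ad-hoc searching the client actually needs.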

    Dang, I really have to start over? I finally got all this stuff working again. Well, hopefully it won't be as big a pain this time, since the data won't be coming from a different machine. After completing the Migration Assistant process, I had to re-input a bunch of serial numbers for apps, reinstall print and mouse drivers, etc. I've finally got the new machine up and running smoothly, and now I gotta start over? Sigh.
    I was hoping that either I could rename the current account after deleting the other one, or just move everything from one account to the other and then delete the 'RJM' account.
    ok, so it sounds like here are the steps I need to take:
    - make another full cloned backup of this current machine in SuperDuper
    - reboot this machine using the advice in the first post, wipe everything clean, and reinstall the OS
    - create a new account like 'user1' and re-do Software Update (which is like 2.5 GB worth of stuff and takes like an hour even on a high-speed connection)
    - then re-do the Migration Assistant process to the properly named account
    - then delete the 'user1' account
    does that sound right?

  • What is the best way of returning group-by sql results in Toplink?

    I have a many-to-many relationship between Employee and Project; so,
    an Employee can have many Projects, and a Project can be owned by many Employees.
    I have three tables in the database:
    Employee(id int, name varchar(32)),
    Project(id int, name varchar(32)), and
    Employee_Project(employee_id int, project_id int), which is the join-table between Employee and Project.
    Now, I want to find out, for each employee, how many projects the employee has.
    The sql query that achieves what I want would look like this:
    select e.id, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id
    Just for information, currently I am using a named ReadAllQuery and I write my own sql in
    the Workbench rather than using the ExpressionBuilder.
    Now, my two questions are :
    1. Since there is a "group by e.id" in the query, only e.id can appear in the select clause.
    This prevents me from returning the full Employee pojo using ReadAllQuery.
    I can change the query to a nested query like this
    select e.id, e.name, emp.cnt as numProjects
    from employee e,
    (select e_inner.id, count(*) as cnt
    from employee e_inner, employee_project ep_inner
    where e_inner.id = ep_inner.employee_id
    group by e_inner.id) emp
    where e.id = emp.id
    but I don't like the complication of having an extra join because of the nested query. Is there a
    better way of doing something like this?
    2. The second question is: what is the best way of returning the count(*), i.e. numProjects?
    What I do right now is have a ReadAllQuery that returns a List<Employee>; then for
    each returned Employee pojo, I call a method getNumProjects() to get the count(*) information.
    I added an extra column "numProjects" to the Employee table and the Employee descriptor, and
    I set this attribute to "ReadOnly" in the Workbench (the value of this dummy "numProjects"
    column in the database is always 0). So far this works OK. However, since numProjects is
    transient, I need to set the query to refreshIdentityMapResult(), or otherwise the Employee object
    in the cache could contain stale numProjects information. What I worry about is that refreshIdentityMapResult()
    will cause the query to always hit the database and defeat the purpose of having a cache. Also, if
    there are multiple concurrent queries to the database, I worry that there will be a race condition
    in updating this transient "numProjects" attribute. What is a better way of returning this kind
    of transient information, such as count(*)? Can I have the query return something like a tuple
    containing the Employee pojo and an int for the count(*), rather than just an Employee pojo with the
    transient int inside? Please advise.
    I greatly appreciate any help.
    Thanks,
    Frans

    No, I don't want to modify the set of attributes after TopLink returns it to me, but I don't
    quite understand why this matters.
    I understand that I can use a ReportQuery to return all the Employee's attributes plus the int count(*),
    and then iterate through the list of ReportQueryResults to construct the Employee pojos myself.
    I was hesitant to do this because I think there will be a performance cost in not being able to
    use lazy fetching. For example, with large result sets where the client only needs a few of them,
    with the above approach we need to iterate through all of them and wastefully create all the Employee
    pojos. On the other hand, if we let TopLink directly return a list of Employee pojos, then we can tell
    TopLink to use a ScrollableCursor and fetch only the first several rows. Please advise.
    Thanks.
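
    For reference, a rough sketch of the ReportQuery approach being discussed; the package names are EclipseLink's (older TopLink used oracle.toplink equivalents), and the session wiring and the "projects" mapping name are assumptions:

        import java.util.List;
        import org.eclipse.persistence.expressions.ExpressionBuilder;
        import org.eclipse.persistence.queries.ReportQuery;
        import org.eclipse.persistence.queries.ReportQueryResult;
        import org.eclipse.persistence.sessions.Session;

        public class EmployeeProjectCounts {
            @SuppressWarnings("unchecked")
            public static void printCounts(Session session) {
                ExpressionBuilder emp = new ExpressionBuilder();
                ReportQuery query = new ReportQuery(Employee.class, emp);
                query.addAttribute("id");                             // e.id
                query.addCount("numProjects", emp.anyOf("projects")); // count per employee
                query.addGrouping(emp.get("id"));                     // group by e.id

                List<ReportQueryResult> rows =
                        (List<ReportQueryResult>) session.executeQuery(query);
                for (ReportQueryResult row : rows) {
                    System.out.println(row.get("id") + " -> " + row.get("numProjects"));
                }
            }
        }

    Each ReportQueryResult is effectively the id/count tuple asked about above, so no dummy numProjects column, refreshIdentityMapResult(), or cache refresh is involved.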
