Synchronized blocks: the best approach?

I have a question concerning synchronization within a servlet. As I understand it, by default only one instance of a servlet will be instantiated by WebLogic. The impact of this is that if an instance variable is set in 'doGet', there is no guarantee that the value will be unchanged upon execution of 'doPost' (i.e. it is not "thread-safe").

There are a few approaches to this problem, none of which are very appealing to me. If I utilize the "SingleThreadModel" interface, then I am guaranteed that the value will not change. However, this will occur at the expense of performance (isn't there a maximum of 5 instances?), especially if hundreds of users are concurrently attempting to access this same servlet. Even if I store a variable in the session, if the user can open multiple browser windows, then I could still run into a synchronization problem (since each window will share the same session).

With this in mind (assuming that a user can open multiple browser windows and that I do not want to implement "SingleThreadModel"), is my only option to place synchronized blocks around the sections of code that I want protected?

I appreciate any input. Thanks.
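
To make the hazard concrete, here is a minimal hypothetical sketch (not from the original post; the servlet, field and parameter names are illustrative assumptions). A single servlet instance serves every request, so the instance field below is shared by all request threads:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class OrderServlet extends HttpServlet {

    // One servlet instance handles every request, so this field is shared by all threads.
    private String currentUser;   // NOT thread-safe

    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        currentUser = req.getParameter("user");
        // Another concurrent request may overwrite currentUser at any point after this line.
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // This may read a value written by a completely different user's doGet.
        res.getWriter().println("Submitting order for " + currentUser);
    }
}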
          

It is possible to make this really messy - avoid that.
Don't use instance variables; use only variables declared in the doGet method - each execution will have its own variables and you won't have these problems.
If you need data to persist from one call to the next, use the HttpSession.
If you need to manage separate browser windows, then put a hidden field or a parameter in the URL that identifies that window. Then store the data in the HttpSession along with the window id -
session.putValue("USERNAME_"+winId, username);
Putting in synchronized blocks doesn't really protect you from the multiple-window problem. Suppose a user does a search in one window and the search results are stored in the HttpSession. Then they do a second search in a second window - even if you protect the variables with a synchronized block, the second results will overwrite the first results. Then they start to page through what should be the first results in the first window - but they would see the second results.
          Mike Reiche
          Resume at http://home.earthlink.net/~mikereiche/resume.htm
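
Below is a minimal sketch of the pattern Mike describes, assuming a hypothetical search servlet: no instance variables, and per-window keys in the session. The winId parameter, the RESULTS_ key prefix and the runSearch() helper are illustrative assumptions; putValue() is the older HttpSession method used in the post (newer servlet versions use setAttribute()).

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class SearchServlet extends HttpServlet {

    // No instance variables: all per-request state lives in local variables,
    // so concurrent requests cannot interfere with each other.
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {

        String winId = req.getParameter("winId");   // hidden field / URL parameter identifying the window
        String query = req.getParameter("query");   // local variable, private to this request's thread

        // Key the stored data by window id so two windows sharing one session don't collide.
        HttpSession session = req.getSession(true);
        session.putValue("RESULTS_" + winId, runSearch(query));

        PrintWriter out = res.getWriter();
        out.println("Results stored for window " + winId);
    }

    // Placeholder for the real search, included only to keep the sketch self-contained.
    private java.util.Vector runSearch(String query) {
        return new java.util.Vector();
    }
}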
          "Edward Mench" <[email protected]> wrote:
          > I have a question concerning synchronization within a servlet. As
          >I
          >understand it, by default only one instance of a servlet will be
          >instantiated by WebLogic. The impact of this is that if an instance
          >variable is set in 'doGet', there is no guarantee that the value will
          >be
          >unchanged upon execution of 'doPost' (i.e. it is not "thread-safe").
          >
          > There are a few approaches to this problem, none of which are very
          >appealing to me. If I utilize the "SingleThreadModel" interface, then
          >I am
          >guaranteed that the value will not change. However, this will occur
          >at the
          >expense of performance (isn't there a maximum of 5 instances?), especially
          >if hundreds of users are concurrently attempting to access this same
          >servlet. Even if I store a variable in the session, if the user can
          >open
          >multiple browser windows, then I could still run into a synchronization
          >problem (since each window will share the same session).
          >
          > With this in mind (assuming that a user can open multiple browser windows
          >and that I do not want to implement "SingleThreadModel"), is my only
          >option
          >to place synchronized blocks around the sections of code that I want
          >protected?
          >
          > I appreciate any input. Thanks.
          >
          >
          >
          >
          

Similar Messages

  • What is the best approach to return Large data from Stored Procedure ?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller.
    It must be compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested".
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with callers, etc.? All references to CLOB I have seen use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the
    new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • I have a MacBook Pro 5,4 running OSX 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    I have a MacBook Pro 5,4 running OSX 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    Unfortunately, Safari cannot be updated past 5.1.10 on a Mac running v10.6.8.
    So, the options are to upgrade to a newer OS X or use Firefox or Chrome.
    Be aware, Apple no longer supports Snow Leopard v10.6 >  www.ibtimes.com/apple-kills-snow-leopard-os-x-106-no-longer-receives-security-updates-1558393
    See if your Mac can run v10.9 Mavericks >  OS X Mavericks: System Requirements
    If so, you can download and install Mavericks for free from the App Store.
    Read prior to upgrading >   Upgrading to 10.7 and above, don't forget Rosetta! | Apple Support Communities

  • What is the best approach to handle multiple FK with single table.

    If two tables are joined with each other with more than one ways, for example
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If above two objects are imported with FKs in a EUL and discoverer plus is used to create above report. On first inclusion of person name discoverer plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick 'person creator' join it will never allow you to pick person modifier name.
    One solution is to create a custom folder with a query like
    select col1, col2,...coln,
    pc.name, pc.address,.... pc.phone,
    pm.name, pm.address,.... pm.phone
    from main m,
    person pc,
    person pm
    where m.person_id_creator = pc.person_id
    and m.person_id_modifier = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per table, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with the PERSON_MODIFIER table using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too (for instance, if person address is stored in a LOCATION table joined on location_id and users want to see both creator address and modifier address....now you will have to create multiple LOCATION folders).
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if filters are needed on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes - then it's not a good idea to register 50 functions).
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach - see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (E.g PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIED
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and work well would be to use a database view or create a complex folder. Either will work; in both cases you would need to join on some column other than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael

  • What's the best approach for handling about 1300 connections in Oracle.

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users (we can store only the relevant data in a particular schema, and the number of records per table can be reduced by replicating tables, but we then have to maintain all data in another schema as well, so we would need to update two schemas for a given session - because we maintain a separate schema for one user and another schema for all data, there may be updating problems).
    OR
    2. Using a single schema for all users.
    Note: all users may access the same tables and there may be a lot more records than in the previous case.
    Which is the best case?
    Please give Your valuable ideas

    It is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I would like to know what are the best approaches to incorporate re-start options in our process, i.e. on a failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own tables (custom) to maintain such data?
    How did our forum members handled above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? Several hundreds (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the option Repeat.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2? You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in Set based mode, so in case of error all table updates will roll back; there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails? The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you reduce re-cycling of failed rows to zero (0). These settings guarantee only two possible return results of a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? In my opinion, for large volumes Set based mode is the preferred processing mode. With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
    Oleg

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of  about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this is to find all tag items and replace them with shared variable items?
    Thanks in advance with any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there was a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite. 
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • Newbie: What is the best approach to integrate BO Enterprise into web app

    Hi
    1. I am very new to Business Objects and .Net. I need to know what's the best approach
    when integrating BO into my web app, i.e. which SDK do I use?
    For now I want to provide very basic viewing functionality for the following reports:
    -> Crystal Reports
    -> Web Intelligence Reports
    -> PDF Reports
    2. Where do I find a standalone install for the Business Objects Enterprise XI .Net providers?
    I only managed to find the wssdk but I can't find the others. Business Objects Enterprise XI
    does not want to install on my machine (development) - it installed fine on the server, so I was hoping I could find a standalone install.

    To answer question one, you can use the Enterprise .NET SDK for each, though for viewing Webi documents it is much easier to use the opendocument method of URL reporting to view them.
    The Crystal Reports and PDF instances can be viewed easily using the SDK.
    Here is a link to the Developer Library:
    [http://devlibrary.businessobjects.com/]
    VB.NET XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_vbnet_samples.zip.asp]
    C# XI Samples:
    [http://support.businessobjects.com/communityCS/FilesAndUpdates/bexi_csharp_samples.zip.asp]
    Other samples:
    [https://boc.sdn.sap.com/codesamples]
    I answered the provider question on your other thread.
    Good luck!
    Jason

  • What's the best approach to work with Excel, csv files

    Hi gurus. I got a question for you. In your experience, what's the best approach to work with Excel or csv files that have to be uploaded through DataServices to your data warehouse?
    Let's say your end user, who is not a programmer, creates a group of 4 Excel files with different calculations on a monthly basis, so they can generate a set of reports from their data warehouse once the files have been uploaded to tables in your DWH. The calculations vary from month to month. The user doesn't have a front-end to upload the Excel files directly to Data Services. The end user needs to keep track of which person uploaded the files for a given month.
    1. The end user should place their 4 Excel files in a shared directory that will be seen by DataServices.
    2. DataServices will execute a certain scheduled job that will read the four files and upload them to the data warehouse at a determined time, let's say at 9:00 pm.
    It makes me wonder... what happens if the user needs to present their reports immediately and can't wait until 9:00 pm? Is it possible for the end user to execute some kind of action (outside of the DataServices environment) so DataServices "could know" that it has to process those files right now, instead of waiting for the night schedule?
    Is there a way that DS will track who was the person who uploaded those files?
    Would it be better to build a front-end for the end user so they can upload their four files directly to the data warehouse?
    Waiting for your comments to resolve this dilemma
    Best Regards
    Erika

    Hi,
    There are functions in DS that capture the input files automatically. You could use the file_exists() or wait_for_file() function to do that. Schedule the job to run every few minutes and, if the file exists, run it. This could be done by using a file name with a date and timestamp etc., or, after running, by moving the old files to an archive so that DS waits for new files to show up.
    Check this - Selective Reading and Postprocessing - Enterprise Information Management - SCN Wiki
    Hope this helps.
    Arun

  • What is the best approach to process data on row by row basis ?

    Hi Gurus,
    I need to code a stored proc to process sales_orders into Invoices. I
    think that I must do row-by-row operations, but if possible I don't want
    to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check the credit limit
        if over the credit limit -> insert a row into log_table; process the next order
        check for overdue invoices
        if there is an overdue invoice -> insert a row into log_table; process the next order
        check all order_items for stock availability
        if there is an item without enough stock -> insert a row into log_table; process the next order
        if all checks above are passed:
            create the Invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the
    above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use SQL, and if that is not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.

  • What is the best approach to install BI statistics in SAP BI ?

    Hello All,
    what is the best approach to install BI statistics in SAP BI ?
    by collecting objects in standard BI content- 0TCT*objects or
    by executing some standard tcodes.
    Regards,
    Siva

    The best approach depends on the version of your BW system; follow the installation steps in these notes:
    BW 3.x:
    309955 - BW statistics - Questions, answers and errors
    BW 7.x
    934848 - Collective note: (FAQ) BI Administration Cockpit
    Cheers,
    m./

  • What's the best approach/program for finding and eliminating duplicate photos on my hard drive?

    What's the best approach/program for finding and eliminating duplicate photos on my hard drive? I have a "somewhat" older version of iPhoto (5.0.4), and it doesn't seem to offer anything like that except during the importing phase of syncing my phone...

    I wonder, is there room to transfer them to your phone, & then back to filter them?

  • What is the best approach to capture TBOM's for a SAP SRM system/functionality?

    Hello SCN Community,
    It would be much appreciated if somebody could share some information about the following....
    What is the best approach to create TBOMs for a SAP SRM system? The SRM functionality basically consists of multiple ABAP Web Dynpros that are connected as a process via a SAP Portal (as I understand it). The entry point to the SRM functionality is via the SAP Portal.
    Do I first have to create a link to the Portal via a SAP Web Application link in SOLAR01 and then start recording? Will it record only the portal objects or also the ABAP Web Dynpro objects?
    Do I have to list all the separate ABAP Web Dynpros in SOLAR01 and use those as a starting point?
    I am myself more familiar with the more classical SAP ABAP ECC systems and transactions. I could hardly find any information on the use of BPCA and the required TBOMs in the area of SRM.... Any help would be much appreciated!
    Kind Regards,
    Guido Jacobs

    Hi Guido,
    A new blog was released today; maybe it helps:
    BPCA - Powerful Risk Eliminator
    Best Regards,
    Christoph

  • Definition of the best approach on how to do reporting between BPC and BW

    Hi,
    I need your opinion in the definition of the best approach on how to do reporting between BPC and BW.
    For example, if we want to do reporting using BW on Actuals vs. Budget, how should we manage this, since technically the BPC Model and the BW InfoCube are different?
    BPC Models have the Budget and BW has the actuals, but the InfoObject that is used for Account is different. What is the best approach to do the reporting?
    Thanks in advance,
    JA

    Hi Gersh
    I already thought of that option, but the problem is the Yellow requests in the InfoCube that are not used by the VP.
    In the past I used Report RSAPO_CLOSE_TRANS_REQUEST_ALL3 in the virtual function module to close the requests, but now I didn't want to use VP based on function module.
    Is there any option to use data in Yellow requests in VP based on DTP?
    Best regards,
    JA

  • Logging the Userid ... what is the best approach?

    Hi guys,
    I have a hard time deciding on the best approach to audit user information.
    Consider the usual information you use for auditing purposes. Normally you have the columns:
    created_on
    created_by
    updated_on
    updated_by
    In an Apex application I would record the value of :APP_USER in the columns created_by and updated_by.
    But what happens if the login name changes? Would you go on and update all relevant tables to update the now changed name?
    There could be auditing triggers involved which you would have to disable.
    On the other hand I have checked the tables of Oracle Applications. Auditing is important for SOX compliancy I believe. They reference the USER_ID (number), not the login name.
    But in many cases you also manipulate tables from within a sqlplus session and thus you don't have a value for APP_USER for which you could do a reverse lookup of the USER_ID. Thus I usually do my logging as nvl(v('APP_USER'), user).
    But then I will run into problems when the username changes.
    What is your take on this? Any suggestions?
    Regards,
    ~Dietmar.

    Hi Denes,
    technically you are right, they don't actually use referential integrity. But they store the value of FND_USER.USER_ID as created_by and last_updated_by.
    If I was to record the user_id then I would use a foreign key to an existing local user table.
    This way I could always reference the user since a delete would not be possible.
    And if the name changes, well, then create a new account and disable the old one. Well, good point. But you would lose all references you might have created for this user (if applicable): user preferences, privileges, etc.
    I had an actual use case in a German bank a few years ago. They changed the naming convention for all User accounts in all systems. Thus only the login name changed but the user identity stayed the same.
    ~Dietmar.
