Best performance? Having a single stateless Java POJO as delegate - give it application scope?

I'm curious about which approach is best from a performance standpoint.

All of our Flex calls access a single Java POJO. In my remoting-config.xml I'm currently declaring the destination with a property scope of 'application':

    <destination id="UIServicesDelegate">
        <channels>
            <channel ref="my-amf"/>
            <channel ref="my-local-amf"/>
        </channels>
        <properties>
            <factory>spring</factory>
            <source>uiServicesDelegateBean</source>
            <scope>application</scope>
        </properties>
    </destination>

All of the UIServicesDelegate methods are stateless, however, so I'm wondering if I would gain anything by giving it session scope. Since they're all stateless, I'm assuming application scope would be best from a performance standpoint, and that in this case only one object will ever be instantiated?

Assuming the choice were between session and request scope, is there a lot of overhead in instantiating the new server-side object on each request? I would assume performance would be better using session scope in this case, with the only drawback being some server-side RAM chewed up storing the object in the session.

If you want a "singleton"-type approach, I figure just using application scope is the preferred way?
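For the application-scope question, the key point is that a stateless object has no per-call instance state, so one shared instance can safely serve concurrent requests. A minimal plain-Java sketch (not Spring or BlazeDS specific; the class and method below are hypothetical stand-ins for the delegate):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical stand-in for the stateless UIServicesDelegate: no instance
// fields, so one shared ("application"-scoped) instance is thread-safe.
class UiServicesDelegate {
    int lookupPrice(String sku) {
        // Pure function of its inputs; no per-call state is stored.
        return sku.length() * 10;
    }
}

public class ScopeDemo {
    public static void main(String[] args) throws Exception {
        UiServicesDelegate shared = new UiServicesDelegate(); // one instance for the whole app
        ExecutorService pool = Executors.newFixedThreadPool(8);
        // Many concurrent "remote calls" hitting the same shared instance.
        List<Future<Integer>> results = IntStream.range(0, 100)
            .mapToObj(i -> pool.submit(() -> shared.lookupPrice("SKU-" + i)))
            .collect(Collectors.toList());
        for (Future<Integer> f : results) {
            if (f.get() < 50) throw new AssertionError("unexpected result");
        }
        pool.shutdown();
        System.out.println("100 concurrent calls served by one stateless instance");
    }
}
```

With application scope the container effectively does the same thing: one instance, and no per-session or per-request construction cost.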

You won't get any API for directly accessing servlet application-scope objects from an EJB, and I don't think an MDB solves your problem directly either. Indirectly, you can send a request to a servlet (there should be one per JVM and web application) and have it update your application-scope variable. I would suggest caching the data in the database if the cache is large; otherwise any open-source caching tool may help you.

Similar Messages

  • Should I do this with Java Code or Stored Procs ? (for best performance)

    Hi All,
    I need to decide where should I implement my business logic, in Java code or Stored procs.
    Here is the requirement :
    - One Order has 70 products (Order_Table )
    - Can be duplicate products, so I have to do summarize / grouping by product
    - For every product, I have to check, if it is entitled for a Bonus product, then I have to Insert one to Bonus_Table.
    - This is done when/after the transaction is SAVED (COMMIT)
    The question is, which one has better PERFORMANCE :
    (1) Create a rowsetIterator on the Order details (70 products) and call a stored procedure to do the logic for every single product (so that the Insert to Bonus_Table done in stored proc). means the stored proc will be called 70 times.
    OR
    (2) After the transaction is COMMITted, call the stored procs ONCE to do the logic for all the products at once.
    OR
    (3) I do all the logic with Java Code within ADF
    Given the requirement above, which approach is most efficient / best performance ?
    Thank you very much,
    xtanto

    Problem with this is that if you ask 100 people you will probably get 100 different answers. ;o)
    Many would say you should push as much business logic as possible into the database with your data; others might say you should only put data in your database and keep your business logic on the application server.
    In reality you would probably have a mix of both, and your decision would probably be influenced by your own background ...
    I can't be more precise than that.
    Grant
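    For option (3), the summarize-and-check step is plain grouping logic. A hypothetical Java sketch (the names and the bonus rule are assumptions, not the poster's actual schema):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BonusCalc {
    // Summarize duplicate product lines by summing quantities per product.
    static Map<String, Integer> summarize(List<Map.Entry<String, Integer>> lines) {
        return lines.stream().collect(Collectors.toMap(
                Map.Entry::getKey, Map.Entry::getValue, Integer::sum));
    }

    // Emit a bonus entry for any product whose total meets a (hypothetical) threshold.
    static List<String> bonusProducts(Map<String, Integer> totals, int threshold) {
        return totals.entrySet().stream()
                .filter(e -> e.getValue() >= threshold)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> order = new ArrayList<>(List.of(
                Map.entry("A", 3), Map.entry("B", 1), Map.entry("A", 2)));
        Map<String, Integer> totals = summarize(order);   // {A=5, B=1}
        System.out.println(bonusProducts(totals, 5));     // prints "[A]"
    }
}
```

    Whichever tier runs this logic, doing the work in one set-based pass, as in option (2), avoids the 70 round trips of option (1).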

  • DVD won't encode on anything but Best Performance

    I haven't been able to change the quality of the DVD from Best Performance because the box at the bottom just greys out and it stops encoding altogether. My movie is too big for the Best Performance setting and I would like to be able to change it so I can burn my movie! I have the most recent version of iDVD, and I have tried other movies and am having the same problem with all of them. Pleaaase help!!

    Did you go to iDVD's Preferences from within your iDVD project?
    It is under the top menu bar, iDVD. Click on Preferences, then select the 'Projects' tab, and you should see an 'Encoding' dropdown bar that has the quality options. For iDVD 8/9, these should be: 'Best Performance' 'High Quality' and 'Professional Quality'
    'Best Performance' is for projects that are under 60 minutes (for single-layer disks). 'High Quality' and 'Professional Quality' are for projects that are 60 to 120 minutes (SL). Professional quality takes a bit longer than the high quality setting, but the quality differences are probably not noticeable.
    You need to change the settings in the Preference file before trying to burn/encode your project.

  • Which XML Parser gives best performance? Please respond!!

    Hi,
    I am trying to figure out which is the best-performing XML parser. I know that a SAX implementation is good for reading XML and DOM is good when building XML documents.
    Now, I want to know which parser (JAXP? JDOM? Piccolo?) performs best. I understand that JAXP uses Xerces and SAX2 underneath. Is that right?
    Is it good practice to have a single application use a SAX parser for reading XML docs and a DOM parser to build XML docs?
    We are also planning to migrate from Apache SOAP to Apache Axis. Do you have any recommendations?

    I think JAXP is an API, not a parser. It uses an underlying parser called Crimson by default; if you want it to use another parser you can configure it to do so. I can't tell you which parser is fastest.
    The easiest way of reading and writing XML documents is to use an XML data-binding library such as JAXB or Castor. It's much nicer than implementing the SAX callback methods or building document trees. The steps involved are:
    1. Write an XML Schema
    2. Tell the XML data-binding tool to generate the source code to marshal/unmarshal XML documents to and from Java objects
    3. Compile the source code
    4. Package the classes into a library
    5. Use the library in your application
    Steps 2-4 can be added to your build script.
    It may take you a couple of days to become familiar with the tools, but it will save you weeks of maintenance and debugging.
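    To make the SAX side concrete, here is a small self-contained example using the JDK's built-in JAXP SAX parser to stream through a document without building a tree (the XML snippet is made up):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxCount {
    // Counts occurrences of one element name; only the counter stays in memory,
    // never the whole document tree.
    static int countElements(String xml, String elementName) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        final int[] count = {0};
        parser.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local, String qName,
                                             Attributes attrs) {
                        if (elementName.equals(qName)) count[0]++;
                    }
                });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        String xml = "<order><item sku=\"A\"/><item sku=\"B\"/><item sku=\"A\"/></order>";
        System.out.println("items = " + countElements(xml, "item")); // prints "items = 3"
    }
}
```

    The same streaming approach is what makes SAX suitable for large documents where a DOM tree would not fit comfortably in memory.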

  • Performance: SELECT SINGLE & WRITE vs. SELECT * (FOR ALL ENTRIES) & READ & WRITE

    Hi Experts,
    I've got a code-review problem and we are in an argument.
    I need to know which of these two pieces of code performs best. I have tested both on 5, 1,000, 3,000, 100,000 and 180,000 records.
    But still, I'd just like a second opinion from the experts.
    TYPES : BEGIN OF ty_account,
            saknr   TYPE   skat-saknr,
            END OF ty_account.
    DATA : g_txt50      TYPE skat-txt50.
    DATA : g_it_skat    TYPE TABLE OF skat,       g_wa_skat    LIKE LINE OF g_it_skat.
    DATA : g_it_account TYPE TABLE OF ty_account, g_wa_account LIKE LINE OF g_it_account.
    Code 1.
    SELECT saknr INTO TABLE g_it_account FROM skat.
    LOOP AT g_it_account INTO g_wa_account.
      SELECT SINGLE txt50 INTO g_txt50 FROM skat
        WHERE spras = 'E'
          AND ktopl = 'XXXX'
          AND saknr = g_wa_account-saknr.
      WRITE :/ g_wa_account-saknr, g_txt50.
      CLEAR : g_wa_account, g_txt50.
    ENDLOOP.
    Code 2.
    SELECT saknr INTO TABLE g_it_account FROM skat.
    SELECT * INTO TABLE g_it_skat FROM skat
      FOR ALL ENTRIES IN g_it_account
          WHERE spras = 'E'
            AND ktopl = 'XXXX'
            AND saknr = g_it_account-saknr.
    LOOP AT g_it_account INTO g_wa_account.
      READ TABLE g_it_skat INTO g_wa_skat WITH KEY saknr = g_wa_account-saknr.
      WRITE :/ g_wa_account-saknr, g_wa_skat-txt50.
      CLEAR : g_wa_account, g_wa_skat.
    ENDLOOP.
    Thanks & Regards,
    Dileep .C

    Hi Dileep,
    From both of your code samples I see that you are selecting two different fields.
    In Code 1 you are selecting SAKNR, and then for each SAKNR you are selecting TXT50 from the same table.
    In Code 2 you are selecting all the fields from SKAT for all the values of SAKNR.
    I don't know your exact requirement, but it would be better to declare a select-option on the screen and then fetch only the required fields from SKAT for the SAKNR values entered there.
    You only need the TXT50 and SAKNR fields, so declare a type containing just those two.
    Points to remember:
    1. When using FOR ALL ENTRIES, always check that the FOR ALL ENTRIES table is not empty.
    2. Fetch all the key fields of the table when using FOR ALL ENTRIES;
        you can compare key fields against a constant that is greater than the initial value.
    3. Before reading the table, sort it by the field on which you are going to read it.
    Try this:
    TYPES : BEGIN OF ty_account,
              saknr TYPE skat-saknr,
            END OF ty_account.
    TYPES : BEGIN OF ty_txt50,
              saknr TYPE skat-saknr,
              txt50 TYPE skat-txt50,
            END OF ty_txt50.
    DATA : i_account TYPE TABLE OF ty_account,
           w_account TYPE ty_account,
           i_txt50   TYPE TABLE OF ty_txt50,
           w_txt50   TYPE ty_txt50.
    SELECT saknr FROM skat INTO TABLE i_account.
    IF sy-subrc = 0.
      SORT i_account BY saknr.
      SELECT saknr txt50 FROM skat INTO TABLE i_txt50
        FOR ALL ENTRIES IN i_account
        WHERE saknr = i_account-saknr.
      " Mention all the primary-key fields here and compare them
      " with constants greater than their initial values.
    ENDIF.
    Note: you need to fetch all the key fields into table i_txt50 and compare those fields with constants greater than their initial values; they should be in the proper key sequence.
    Now for the output (sort once, outside the loop, so the READ can use a binary search):
    SORT i_txt50 BY saknr.
    LOOP AT i_account INTO w_account.
      CLEAR w_txt50.
      READ TABLE i_txt50 INTO w_txt50
        WITH KEY saknr = w_account-saknr BINARY SEARCH.
      IF sy-subrc = 0.
        WRITE : / w_txt50-saknr, w_txt50-txt50.
      ENDIF.
      CLEAR w_account.
    ENDLOOP.
    Hope this clears your doubts.
    Thanks
    Lalit

  • How to get Test Utility for PI 7.3 Single Stack JAVA only.

    Hi All,
    We have PI 7.31, single-stack (Java only).
    We want to test some scenarios. Can anybody tell me how I can test a scenario in PI 7.3?
    Is there a test utility with which I can send test messages to PI 7.3?
    Where can I download this utility?
    Regards,
    Umesh

    Hi,
    The test-message functionality is currently available in PI dual stack only.
    Let's hope that in future releases this functionality will also be available in the single stack.
    Regards,
    Mastan vali

  • How to connect multiple Xserve Raid for Best Performance

    I like to get an idea how to connect multiple Xserve Raid to get the best performance for FCP to do multiple stream HD.

    Again, for storage (and retrieval), FireWire 400 should be fast enough. If you are encoding video directly to the external drive, then FireWire 800 would probably be beneficial. But as long as the processing of the video is taking place on the fast internal SATA drive, and then you are storing files on the external drive, FireWire 400 should be fine.
    Instead of speculating about whether it will work well or not, you need to set it up and try your typical work flow. That is the only way you will know for sure if performance is acceptable or not.
    For Time Machine, you should use a single 1.5TB drive. It is likely that by the time your backup needs comes close to exceeding that space, you will be able to buy a 3TB (or larger) single drive for the same cost. Also, I would not trust a RAID where the interaction between the two drives is through two USB cables and a hub. If your primary storage drive fails, you need your backup to be something that is simple and reliable.
    Oh, and there should be no problem with the adapter, if you already have it and it works.
    Edit: If those two external drives came formatted for Windows, make sure you use Disk Utility's Partition tab to repartition and reformat the drive. When you select the drive in the Disk Utility sidebar, at the bottom of the screen +Partition Map Scheme+ should say *GUID Partition Table*. When you select the volume under the drive in the sidebar, Format should say *Mac OS Extended (Journaled)*.

  • Which iOS gives the best performance for the iPhone 4 & 3GS?

    Hi all guys here.
    How are you all guys?
    I'm new here.
    I'm currently running iOS 4.1 on both of my iPhones (3GS and 4), but I want the new features.
    Can anyone here recommend the best iOS for the iPhone 3GS and iPhone 4 (the best performance), without draining the battery or making my iPhones slower?
    I'd greatly appreciate it, guys.
    Trillion thanks!!!
    Trillion Thanks!!!

    Doesn't really matter, since the only iOS available for either of your phones is iOS 4.3.5, if you chose to update. Users have complained about every single iOS update since the first update for the original iPhone was released. Fact is, the vast majority of users have zero issues & happily go about their ways.

  • What is best performing approach to report building in my case?

    Hi all,
    I want to know which approach performs best when the system is under heavy load, understood as a large number of concurrent operations.
    Each operation is a query that, in most cases, returns a large amount of data.
    I am interested in the approach that does not create bottlenecks or slow down and block the system for a long time.
    The alternatives that I would like more information about are:
    1) reports built with JDBC (JNDI), specifying "java:jdbc/xxxxdatasource"
    (taken from the oracle-ds.xml jndi-name tag) as the "Connection name (optional)",
    with the query written into the rpt file and run by Crystal Reports, which I think makes a direct connection to the DB,
    integrated into Java with the Java Reporting Component.
    This approach also has thread limits, depending on the version of the report engine.
    2) reports built with "Field definition only", with the query written and run inside my application, which calls the report passing only the resultSet to be displayed
    (reportClientDoc.getDatabaseController().setDataSource(resultSet, tableName, tableName);)
    My concern with this approach is that it seems to require loading all results into memory
    and generating the report in one big step.
    Is there a way to avoid this? Some-how to page through report data?
    I've also read that Crystal Reports can work with any data provider that implements ResultSet.
    Is this true? If so, could I create my own custom ResultSet implementation that would let me
    page through my results without loading everything into memory at-once?
    If possible, please point me to the documentation for this approach.
    I haven't been able to find any examples.
    If there is a better approach that I haven't mentioned, please let me know.
    Thanks in advance

    The first option is the best one for performance. The only time you should use result sets is when you need to do runtime manipulation of the data in your application and it is not achievable in a stored procedure.
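    On the paging question from the post above: Crystal Reports aside, the "page through results without loading everything" idea can be sketched as a lazy iterator over a paged data source. The fetchPage function below is a stand-in for a real JDBC query using LIMIT/OFFSET; this is an illustration, not the Crystal Reports API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

// Lazily walks a large result set one page at a time, so only the current
// page is ever held in memory. fetchPage stands in for a real JDBC query
// using LIMIT/OFFSET (or a scrollable cursor).
class PagedIterator<T> implements Iterator<T> {
    private final BiFunction<Integer, Integer, List<T>> fetchPage; // (offset, size) -> rows
    private final int pageSize;
    private int offset = 0;
    private List<T> page = Collections.emptyList();
    private int pos = 0;

    PagedIterator(BiFunction<Integer, Integer, List<T>> fetchPage, int pageSize) {
        this.fetchPage = fetchPage;
        this.pageSize = pageSize;
    }

    @Override
    public boolean hasNext() {
        if (pos < page.size()) return true;
        page = fetchPage.apply(offset, pageSize); // fetch the next page on demand
        offset += page.size();
        pos = 0;
        return !page.isEmpty();
    }

    @Override
    public T next() {
        if (!hasNext()) throw new NoSuchElementException();
        return page.get(pos++);
    }
}

public class PagingDemo {
    public static void main(String[] args) {
        List<Integer> backing = new ArrayList<>();
        for (int i = 0; i < 10; i++) backing.add(i);
        // Simulated "query": slice [offset, offset + size) of the backing data.
        PagedIterator<Integer> it = new PagedIterator<>(
                (off, size) -> backing.subList(Math.min(off, backing.size()),
                                               Math.min(off + size, backing.size())),
                3);
        int sum = 0;
        while (it.hasNext()) sum += it.next();
        System.out.println("sum = " + sum); // prints "sum = 45"
    }
}
```

    A custom ResultSet implementation wrapping this kind of paged fetch is plausible in principle, but whether the report engine consumes it incrementally or materializes it all at once is engine-specific and worth verifying.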

  • DVD Disk Space Requirements - Best Performance vs. Quality

    I put together an iDVD 8 project that has about 135 minutes of content. When I go to burn the DVD using the Best Performance setting, it says it takes about 7 GB (and I need to use a DL disc). When I switch to the Professional Quality setting, the disk-space requirements are about half (almost to the point where I can use a single-layer DVD). It seems it should be the exact opposite: more detailed, less compressed images should take more disk space.
    Can someone explain this apparent contradiction? Thanks.

    Google and learn about MPEG2 compression.... this is the backbone to iDVD. In general you have three encoding options:
    CBR: Constant Bit-Rate
    Single Pass VBR (variable bit-rate)
    Two-pass VBR
    CBR keeps the bit rate constant and is therefore the fastest way of encoding.
    VBR will vary the bit rate. Scenes with a lot of motion get a higher bit rate, static scenes a lower bit rate.
    Two pass VBR will analyze the entire sequence on the first pass to determine where and how high or low to go with the bit rate and then on the second pass, do the actual encoding. The two pass method may determine that you can go with a much lower average bit rate thus saving precious disc space. This method also takes twice as long!
    But, that is just the math. Some sequences look better via CBR while others look better with two-pass VBR. Unfortunately, there isn't really a way to determine this prior to the lengthy encoding process. The best thing one can do is keep content as short as possible. If I had 135 minutes of material and I was using iDVD, I would want to create 2 DVDs instead of one. But that's me.
    Mike
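    The disc-capacity arithmetic behind these encoding choices is easy to check. Assuming roughly 4.7 GB usable on a single-layer disc and ignoring audio and filesystem overhead (both assumptions), the average bit rate that fits works out to:

```java
public class DvdBitrate {
    // Average total bitrate (Mbit/s) that fits `minutes` of content on a disc.
    static double avgMbps(double discBytes, int minutes) {
        return discBytes * 8 / (minutes * 60) / 1_000_000;
    }

    public static void main(String[] args) {
        double singleLayer = 4.7e9; // ~4.7 GB usable: a rough assumption
        System.out.printf("60 min : %.1f Mbps%n", avgMbps(singleLayer, 60));  // ~10.4
        System.out.printf("120 min: %.1f Mbps%n", avgMbps(singleLayer, 120)); // ~5.2
    }
}
```

    Set-top players top out around 10 Mbps total, which is why roughly 60 minutes is the practical limit for the fixed high bit rate, and longer projects must drop to lower or variable rates.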

  • Encoding: Best Performance vs. Best Quality

    So I have a project that if I select Best Performance it shows that the DVD will be 3.4GB, and if I select Best Quality it shows the DVD will be 2.0GB.
    So my question is: isn't that contrary to logic? Why wouldn't Best Quality be the largest size? Is Best Quality going to give me the best possible video quality?

    Since you have posted to the iDVD 6 forum, I assume that is what you have.
    iDVD 6 has two encoding modes: 'Best Performance' and 'Best Quality'. iDVD '08 adds: 'Professional Quality'.
    People misunderstand the names and I wish Apple had used different names for all three modes.
    The below applies to a single layer disc (double the times for a double layer disc):
    'Best Performance' uses a fixed video encoding bit-rate that produces a DVD with a data playback bit-rate just about as high as a set-top DVD player can handle. This limits content to 60 minutes or less.
    'Best Quality' uses a fixed video encoding bit-rate that is BASED ON THE TOTAL AMOUNT OF CONTENT BEING COMPRESSED and is best suited for content between 60 and 120 minutes. Note that all the content is encoded at the same bit-rate so that it can fit on a single layer disc. (Apple calls this single-pass variable bit-rate encoding because 120 minutes of content gets compressed more than 60 minutes of content.)
    The new 'Professional Quality' uses a variable video encoding bit-rate that is BASED ON THE INFORMATION IN THE CONTENT BEING COMPRESSED. It uses a two-pass process to first evaluate the content and then encode it based on the motion/detail of individual clips. It is best suited for content between 60 and 120 minutes. Note that not all the content is encoded at the same bit-rate BUT the maximum data bit-rate on playback can not exceed the playback capability of a set-top DVD player. (This is two-pass, variable bit-rate encoding.) This means the BEST encoded quality should be about what is obtained with 'Best Performance' for content under 60 minutes.
    If your content is under 60 minutes, use 'Best Performance'. If your content is between 60 minutes and 120 minutes, use 'Professional Quality' if your processor is fast enough and you don't mind waiting about twice the time required for 'Best Quality'.
    About the only thing Apple can do to further improve the quality of DVD encoded video is to offer compressed audio instead of just the present uncompressed PCM audio because the audio 'eats up' part of the playback bit-rate a set-top DVD player can handle. Compressed audio would make more of the maximum playback bit-rate available for video.
    In your case, with iDVD 6, use 'Best Performance' for content under 60 minutes and 'Best Quality' for content over 60 minutes. Remember that your menu content counts against the total time limit.
    F Shippey

  • OC4J: Pool of stateless Java WS instances ?

    There is the following paragraph in OAS WebServices documentation:
    "For a stateless Java implementation, Oracle Application Server Web Services
    creates multiple instances of the Java class in a pool, any one of which may be
    used to service a request. After servicing the request, the object is returned to
    the pool for use by a subsequent request."
    Writing Stateless and Stateful Java Web Services
    Oracle® Application Server Web Services
    Developer’s Guide
    10g (9.0.4)
    Part No. B10447-01
    Could anybody guide me to how this pool of instances can be managed?
    For now, it seems like only a single instance of the stateless servlet is created,
    and it's not possible to serve concurrent requests in parallel.
    Is that really so?
    Thanks.

    Is the SAME instance from the pool used to process the HeaderCallback AND service the request?
    I want to set a private variable in the processHeaders callback method and then use it in the service method.
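    The pool behavior the quoted documentation describes can be illustrated with a plain-Java borrow/return pool. This is a simplification of what the container actually does, and it says nothing about whether the same instance handles the header callback and the service call:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Function;

// A minimal borrow/return pool: any idle instance may service a request,
// and it is returned to the pool afterwards for reuse by later requests.
class InstancePool<T> {
    private final BlockingQueue<T> idle;

    InstancePool(List<T> instances) {
        idle = new ArrayBlockingQueue<>(instances.size(), false, instances);
    }

    <R> R withInstance(Function<T, R> request) throws InterruptedException {
        T instance = idle.take();        // blocks if every instance is busy
        try {
            return request.apply(instance);
        } finally {
            idle.put(instance);          // return to the pool for the next request
        }
    }
}

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Three interchangeable stateless service instances.
        InstancePool<StringBuilder> pool = new InstancePool<>(List.of(
                new StringBuilder("a"), new StringBuilder("b"), new StringBuilder("c")));
        String served = pool.withInstance(s -> "served by instance " + s);
        System.out.println(served);
    }
}
```

    Note that because any pooled instance may serve any request, stashing data in a private field between the header callback and the service method is unsafe unless the container guarantees instance affinity for the whole request.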

  • VMware Fusion Ideal Settings for Best performance

    Can someone please help with the ideal settings for VMware Fusion on mac for best performance.
    My Machine Details
    MacBook Pro (13-inch Late 2011), Mac OS X (10.7.3)
    Processor : 2.8 GHz Intel Core
    Memory : 16 GB 1333 MHz DDR3
    Storage : 750 GB SATA Disk
    Graphics : Intel HD Graphics 3000 512 MB
    Am using VMware Fusion Version 4.1.2
    At Present the resources I have allocated for VMFusion are
    Processor      : 1 Core
    Memory         : 4 GB
    Thanks
    Brij

    VMware Fusion doesn't slow down the Mac just by being installed. You have 16 GB of memory in the computer and you assigned 4 GB to your virtual machine, so you shouldn't have any memory problem. Also, your Mac has a great processor that runs virtual machines without any problem.
    If your Mac feels slow while running the virtual machine, just reduce the amount of memory assigned to it, depending on the operating system inside the virtual machine. For example, I recommend 2 GB or more for Windows 7, while you can run Linux virtualized with only 512 MB of memory.

  • Best Quality vs Best Performance - I don't understand why

    This is a serious question - why does iDVD bother to offer you a choice between "Best Quality" (BQ) and "Best Performance" (BP)?
    First, these names are confusing (really, you could switch them and you'd be none the wiser as to what they are telling you).
    Second, if your project is longer than 60 mins, then iDVD will tell you to switch the preference to "Best Quality" if you've got it set to the other. Hence, there is no choice, and the preference serves no purpose. iDVD should simply change the preference for you as it is annoying to have to open the prefs pane to change it.
    Third, if your project is shorter than 60 mins, the consensus of other threads on this topic is that no-one can tell the difference between BQ and BP, since the VBR of BQ is never higher than the CBR of BP. So again, the choice seems pointless, apart from enabling background encoding.
    So my proposal is: iDVD should bin this essentially meaningless and confusingly worded preference, and instead have a single check box "Enable background encoding, if possible" (ie, your project is shorter than 60 mins).
    You know it makes sense.

    Ah, well yes I know how to feedback suggestions to Apple. My question here was - am I missing anything in suggesting they get rid of it? No point making a suggestion if I am inadequately informed, or no-one agrees it is a good idea.
    Seems like the answer is no-one disagrees, and so I will make the suggestion.

  • Exchange 2010 CAS proxy to Exchange 2013 CAS: Use the following link to open this mailbox with the best performance:

    Hello,
    I've installed Exchange 2013 into Exchange 2010 infrastructure
    [ single Exchange 2010 server; single AD site; AD = 2003 ],
    and moved one mailbox [ Test user ] to Exchange 2013.
    When I login internally through 2013 OWA to access mailboxes on 2010, then proxy works fine.
    When I login internally through 2010 OWA to access mailboxes on 2013, then a message appears:
        Use the following link to open this mailbox with the best performance: with link to 2013 OWA...
    What is wrong ?
    I've checked and changed settings by:
    Get-OwaVirtualDirectory, Set-OwaVirtualDirectory
    [PS] C:\work>Get-OwaVirtualDirectory -Identity 'ex10\owa (Default Web Site)' | fl server,name, *auth*,*redir*,*url*
    Server                        : EX10
    Name                          : owa (Default Web Site)
    ClientAuthCleanupLevel        : High
    InternalAuthenticationMethods : {Basic, Fba, Ntlm, WindowsIntegrated}
    BasicAuthentication           : True
    WindowsAuthentication         : True
    DigestAuthentication          : False
    FormsAuthentication           : True
    LiveIdAuthentication          : False
    AdfsAuthentication            : False
    OAuthAuthentication           : False
    ExternalAuthenticationMethods : {Fba}
    RedirectToOptimalOWAServer    : True
    LegacyRedirectType            : Silent
    Url                           : {}
    SetPhotoURL                   :
    Exchange2003Url               :
    FailbackUrl                   :
    InternalUrl                   : https://ex10.contoso.com/owa
    ExternalUrl                   : https://ex10.contoso.com/owa
    [PS] C:\work>Get-OwaVirtualDirectory -Identity 'ex13\owa (Default Web Site)' | fl server,name, *auth*,*redir*,*url*
    Server                        : EX13
    Name                          : owa (Default Web Site)
    ClientAuthCleanupLevel        : High
    InternalAuthenticationMethods : {Basic, Ntlm, WindowsIntegrated}
    BasicAuthentication           : True
    WindowsAuthentication         : True
    DigestAuthentication          : False
    FormsAuthentication           : False
    LiveIdAuthentication          : False
    AdfsAuthentication            : False
    OAuthAuthentication           : False
    ExternalAuthenticationMethods : {Fba}
    RedirectToOptimalOWAServer    : True
    LegacyRedirectType            : Silent
    Url                           : {}
    SetPhotoURL                   :
    Exchange2003Url               :
    FailbackUrl                   :
    InternalUrl                   : https://ex13.contoso.com/owa
    ExternalUrl                   :
    best regards Janusz Such

    Hi Janusz Such,
    Based on my knowledge, a CAS can only proxy from a later version to a previous one,
    e.g. CAS 2013 to CAS 2010/2007, or CAS 2013 to CAS 2013.
    Thanks
    If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Mavis Huang
    TechNet Community Support
