Propose supplementary documentation on client_result_caching optimization

I want to research client-side result caching so that I can optimize in that area, since, as I have been informed in another thread, it is not just a matter of activating it and setting a big enough cache size.
Why doesn't client caching lower network traffic, yet still improve response time?
I've found the following pages through SE; are they relevant and sufficient?
Do you have more to propose?
Introducing Oracle Database Cache
http://docs.oracle.com/cd/A97335_02/caching.102/a88706/ic_intro.htm
Configuring Your Application Environment
http://docs.oracle.com/cd/A97335_02/caching.102/a88706/ic_setup.htm#1008039
Caching and Pooling
http://www.oracle.com/technetwork/articles/sql/11g-caching-pooling-088320.html
query result cache in oracle 11g
http://www.oracle-developer.net/display.php?id=503

>
I want to research client-side result caching so that I can optimize in that area, since, as I have been informed in another thread, it is not just a matter of activating it and setting a big enough cache size.
I've found the following pages through SE; are they relevant and sufficient?
>
Well, now you are making a better start at it.
There are at least three aspects to caching: 1) understanding what caching functionality is available, 2) understanding how to implement that functionality and 3) understanding how to apply that functionality to your particular use cases.
Then there are the three major 'cache' locations: 1) caching in the DB, 2) caching in the middle tier (e.g. app server) and 3) caching in the client tier.
That Oracle Database Cache doc you link to is NOT the client cache your thread has been talking about; it is a middle-tier cache, intended for very specific uses and subject to very specific restrictions, in particular this one:
>
Note:
For this release of Oracle Database Cache, you must cache entire tables; you cannot cache just the parts of tables that satisfy the query. As a result, a data set must be an entire database table.
>
The client cache discussed in your thread is a cache that is maintained on the client tier for OCI clients. That cache is NOT strictly a 'one table and the entire table' type of cache like the database cache is. That cache can be maintained by Oracle behind the scenes as DML is performed on the tables that determine the result set.
There is also a RESULT SET CACHE available on the DB, where query results are cached. For other sessions (ANY other sessions, even ones strictly on the server) that execute the SAME query that produced those results, Oracle can provide the results from the cache.
So the first priority is to understand your architecture and identify which caching opportunities are available for it.
1. RESULT SET CACHE - a server side cache of results that can be reused by other sessions 'that execute the same or a compatible query'
2. CLIENT cache - similar to the result set cache except the cache is maintained on the OCI client. The cache results can be reused by other threads of the client; that reuse is strictly client-to-client.
3. DATABASE CACHE - a middle-tier cache that, as the above note shows, 'must cache entire tables'
If you aren't using a 'middle-tier' there is no place in your architecture for a DATABASE CACHE.
If you aren't using OCI there is no place in your architecture for a CLIENT cache.
If none of your sessions ever repeats the same/compatible query within the cache retention time, there is no place in your architecture for a RESULT SET CACHE.
You are the only one that knows your current and/or planned architecture and if/how one or more of those 'caches' might be beneficial.
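To make the round-trip saving concrete, here is a minimal plain-Java sketch of what a client-tier result cache does: repeated executions of the same query text within a retention window are answered locally, without a trip to the server. All names here are invented for illustration; this is not the OCI implementation.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a client-side result cache: results are keyed by the
// exact query text and reused by any thread of the same client process until
// the retention time expires. Illustration only, not OCI code.
public class ClientResultCache {
    private static final class Entry {
        final List<String> rows;
        final long loadedAt;
        Entry(List<String> rows, long loadedAt) { this.rows = rows; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long retentionMillis;
    private long serverRoundTrips = 0;   // counts simulated trips to the server

    public ClientResultCache(long retentionMillis) { this.retentionMillis = retentionMillis; }

    // Returns cached rows when the same query text was seen recently;
    // otherwise "executes" the query on the server and caches the result.
    public List<String> execute(String sqlText, long now,
                                java.util.function.Supplier<List<String>> server) {
        Entry e = cache.get(sqlText);
        if (e != null && now - e.loadedAt <= retentionMillis) {
            return e.rows;                       // cache hit: no network round trip
        }
        serverRoundTrips++;                      // cache miss: go to the server
        List<String> rows = server.get();
        cache.put(sqlText, new Entry(rows, now));
        return rows;
    }

    public long roundTrips() { return serverRoundTrips; }
}
```

The point of the sketch is the counter: the second execution of the same text costs no round trip, while the server still controls correctness by invalidating entries (in the real feature, automatically, as DML commits).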
Additionally, Oracle supports Database Change Notification, whereby your code can be notified when certain changes are made to tables/columns of interest.
See the chapter on Database Change Notification in the Oracle® Database Application Developer's Guide - Fundamentals:
http://docs.oracle.com/cd/B19306_01/appdev.102/b14251/adfns_dcn.htm
>
What Is Database Change Notification?
Database Change Notification is a feature that enables client applications to register queries with the database and receive notifications in response to DML or DDL changes on the objects associated with the queries. The notifications are published by the database when the DML or DDL transaction commits.
During registration, the application specifies a notification handler and associates a set of interesting queries with the notification handler. A notification handler can be either a server side PL/SQL procedure or a client side C callback. Registrations are created on all objects referenced during the execution of the queries. The notification handler is invoked when a transaction subsequently changes any of the registered objects and commits.
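The registration-and-notify flow quoted above can be sketched as a plain-Java simulation; every class and method name here is invented for illustration (the real API is exposed through OCI and PL/SQL):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Simulation of the Database Change Notification flow: an application registers
// a handler for the objects referenced by its queries; when a transaction that
// touches a registered object commits, the handler is invoked.
public class ChangeNotificationDemo {
    private final Map<String, List<Consumer<String>>> registrations = new HashMap<>();

    // "During registration, the application specifies a notification handler
    // and associates a set of interesting queries with it." Here we register
    // directly on the objects referenced during query execution.
    public void register(Collection<String> referencedObjects, Consumer<String> handler) {
        for (String obj : referencedObjects) {
            registrations.computeIfAbsent(obj, k -> new ArrayList<>()).add(handler);
        }
    }

    // "The notifications are published by the database when the DML or DDL
    // transaction commits."
    public void commit(Collection<String> changedObjects) {
        for (String obj : changedObjects) {
            for (Consumer<String> h : registrations.getOrDefault(obj, List.of())) {
                h.accept(obj);
            }
        }
    }
}
```

Used together with a client cache, such a handler is what lets the client evict stale entries instead of polling.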

Similar Messages

  • Where can one find documentation like that of Flex 3.0, but for Flash Builder 4.7?

    I am looking for a site that offers documentation similar to what used to be available at
    http://livedocs.adobe.com/flex/3/html/help.html?content=layouts_06.html
    but for Flash Builder 4.7.
    I developed an application with Flex 3.0 and I would like to migrate it to FB 4.7, but it is really painful without documentation, without a design view, etc...
    Thanks in advance for your answers! ;-)
    Eric

    The Status Bar is no longer present in Firefox 4.0 b7 and replaced by the Add-ons Bar that extensions can use.
    You can use the Status-4-Evar extension to replace some functionality.
    * Status-4-Evar: https://addons.mozilla.org/firefox/addon/235283/
    Open the "View > Toolbars > Customize" window after you have installed the Status-4-Evar extension and drag the items (Status Text, Progress Meter, Download Status) onto the Add-ons Bar (View > Toolbars > [X] Add-ons Bar)
    See also http://forums.mozillazine.org/viewtopic.php?f=23&t=2001965 [Ext] Status-4-Evar 2010.12.19.22
    http://translate.google.com/translate_t?hl=fr&langpair=auto|fr#auto|fr|

  • When migrating from 10g to 11g with tables that have LOB columns, should the columns be converted to Securefile LOB formats?

    We have an ongoing debate over the merits of switching to SecureFile LOBs. This is the Oracle-recommended approach, but what are the main benefits?
    It would allow parallel impdp to take place and also has benefits for compression, but are there any other driving factors to consider?

    The fact that a hacking solution works well with a particular binary file on a particular database does not guarantee that it will not destroy another file and another database beyond repair. In case this happens, people will call Oracle Support for help. And Oracle Support may refuse (though it tries not to do this without reason) to help with such a corrupted database. This is a consequence of "unsupported" modifications to binary files. Therefore, publishing this type of advice with little or no word of warning nor disclaimer is simply irresponsible (especially, if simpler and supported solutions exist).
    I did not suggest that you should waste time checking if your particular solution had already been documented or not but I did suggest that you see if the methods you proposed were documented. And, frankly speaking, this is not what an Oracle DBA should really have to check. I also cannot believe that you are not aware that modification of files in any undocumented binary format (not only Oracle's) is generally unsupported. And, while common and somehow acceptable in low-risk home or research applications, such modifications are a bad idea in high-risk production business use.
    -- Sergiusz

  • Performance problem in 7.6.6.10

    We have a performance problem after updating from MaxDB 7.6.6.3 to 7.6.6.10.
    The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/SmallInt column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    On large tables the query is very, very slow.
    The dbanalyser shows "DIFFERENT STRATEGIES FOR OR-TERMS".
    A way to reproduce the prob:
    Create a table with 2 columns
    CREATE TABLE "ADMIN"."TEST" (
         "INTID"  Integer  NOT NULL,
         "FLAG"  Smallint,
         PRIMARY KEY("INTID")
    )
    Index on Column FLAG
    CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    Insert about 1000 rows into TEST
    INSERT INTO TEST (SELECT ROWNO, 1 FROM LARGETABLE WHERE ROWNO <= 1000)
    (The easiest way for me to fill the table.)
    Call the dbanalyser
    EXPLAIN SELECT * FROM TEST WHERE FLAG <> 1
    OWNER  TABLENAME  COLUMN_OR_INDEX  STRATEGY                                PAGECOUNT
    ADMIN  TEST                        DIFFERENT STRATEGIES FOR OR-TERMS                8
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                      IDX_TEST         RANGE CONDITION FOR INDEX              
                                       ONLY INDEX ACCESSED                    
                      FLAG                  (USED INDEX COLUMN)               
                                            RESULT IS COPIED   , COSTVALUE IS           6
                                       QUERYREWRITE - APPLIED RULES:          
                                          DistinctPullUp                                1
    The statement is fast because of the small table, but I think the strategy is wrong.

    > We have a performance problem after updating from MaxDB 7.6.6.3 to 7.6.6.10.
    > The symptom is that queries with the "<>" operator in the WHERE clause on an indexed Integer/SmallInt column slow down extremely, e.g. "WHERE FIELDNAME <> 1".
    > On large tables the query is very, very slow.
    > Index on Column FLAG
    > -
    > CREATE INDEX "IDX_TEST" ON "ADMIN"."TEST"("FLAG" ASC)
    > The statement is fast because of the small table, but I think the strategy is wrong.
    Hmm.. what other strategy would you propose?
    The single table optimizer tries to estimate how many pages would need to be read to find the data required.
    It figures that for your statement there won't be many pages required, so an index access might be beneficial.
    And to use the index efficiently, it transforms your inequality into a "greater than" OR "less than" condition.
    So you get "DIFFERENT STRATEGIES FOR OR-TERMS".
    If you look closely you'll find that both strategies actually are "RANGE CONDITION FOR INDEX" on the IDX_TEST index.
    The difference between them both is the range (the start/stop-key combination) used for the index reading.
    Anyhow - inequality conditions are always problematic for a DBMS.
    DBMSs are designed to quickly find data that is equal to or like some condition.
    regards,
    Lars
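    Lars's explanation - the `<>` predicate is rewritten into a "less than" OR "greater than" pair of index range reads - can be mimicked with a sorted map standing in for the index. The names below are illustrative only, not MaxDB internals:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Mimics how "FLAG <> 1" becomes two range conditions on a sorted index:
// all keys below the excluded value plus all keys above it.
// A TreeMap (value -> row ids) stands in for the index IDX_TEST.
public class NotEqualsRewrite {
    public static List<Integer> selectNotEquals(TreeMap<Integer, List<Integer>> index,
                                                int excluded) {
        List<Integer> rows = new ArrayList<>();
        // RANGE CONDITION FOR INDEX, strategy 1: FLAG < excluded
        index.headMap(excluded, false).values().forEach(rows::addAll);
        // RANGE CONDITION FOR INDEX, strategy 2: FLAG > excluded
        index.tailMap(excluded, false).values().forEach(rows::addAll);
        return rows;
    }
}
```

    This is why EXPLAIN shows two "RANGE CONDITION FOR INDEX" lines that differ only in their start/stop keys, combined under "DIFFERENT STRATEGIES FOR OR-TERMS".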

  • Strange read-through operation after entry processor work

    Hi.
    We use the combination cache listener - entry processor to do some actions when the data comes to coherence. We use Oracle Coherence Version 3.5.3/465.
    Just after the entry processor has set the new value for the entry, the new "get" operation is called for the cache and jdbc hit is done for this key.
    Here is the entry processor:
    public Object process(Entry entry) {
        if (!entry.isPresent()) {
            // No entities exist for this CoreMatchingString - creating new Matching unit
            MatchingUnit newUnit = new MatchingUnit(newTrade);
            entry.setValue(newUnit, true);
            return null;
        }
        ((MatchingUnit) entry.getValue()).processMatching(newTrade);
        return null;
    }
    Very interesting: if I use entry.setValue(value) without the second parameter, I receive the db hit right on the setValue method. According to the docs, setValue() with one parameter returns the previous value, so it is logical that the cache hit (and therefore the db hit) happens on set. But I use the overloaded version void setValue(java.lang.Object oValue, boolean fSynthetic), which is said to be lightweight and should not fetch the previous version of the object. But this is done anyway! Not on setValue itself, but just after the process() method is called.
    Actually it's strange that Coherence tries to fetch the previous value in a case where it didn't exist! The cache.invoke(matchingStr, new CCPEntryProcessor(ccp)) is invoked on a non-existing record, which is created just on invocation. Maybe it's a bug, or a place for optimization.
    Thanks

    bitec wrote:
    > Thanks, Robert, for such a detailed answer.
    > Still not clear to me why synthetic inserts are debatable. There are lots of cases where the client simply updates/inserts the record (using setValue()) and does not need to receive the previous value. If he needs it, he will call the method:
    > java.lang.Object setValue(java.lang.Object oValue)
    > and receive the previous value. If he doesn't, he calls:
    > void setValue(java.lang.Object oValue, boolean fSynthetic)
    > and DOESN'T receive the previous value, as the method is declared void. Thus he cannot get the previous value at all through this API, except with a direct manual db call.
    Hi Anton,
    it is debatable because the purpose of the fSynthetic flag is NOT so that you can optimize a cache store operation away. A synthetic event means that this is not a real change in the data set triggered by the user; it is something Coherence has done on the backing map / on the cache for its own reasons and decisions, to be able to provide high availability to the data, and it only changes that particular Coherence node's subset of data but has no meaning related to the actually existing full data set. Such reasons are partition movement and cache eviction (or possibly any other reason why Coherence would want to change the content of the backing map without telling the user that anything has changed).
    If you set the synthetic flag, you are trying to masquerade a real data change as an event which Coherence decided to trigger. This is why it is debatable. Also, synthetic backing map events may not always lead to dispatching cache events (for partition rebalance they definitely do not). This optimization may also be extended to synthetic cache events.
    Coherence is simply not interested in the old value in case of a synthetic event. The synthetic methods exist so that an entry can be changed in Coherence (usually by Coherence itself) in a way that indicates a synthetic event, so that listeners are not notified.
    Some valid uses of such functionality for setValue invoked by user code could be compacting some cached value and replacing the value the backing map stores with the compacted representation, which does not mean a change in the meaning of the actual cached value; only the representation changes. Of course, if the setValue(2) method does not actually honor the synthetic flag, then such functionality will still incur all the costs of a normal setValue(1) call.
    > But the previous value is fetched by Coherence itself anyway just after process(), and the client doesn't get it!
    But any listeners on the cache may need to get it due to cache-semantics reasons.
    > In this case I regard this as a bug, because a client which uses this API doesn't expect the cache hit to take place (there is no return value for this overloaded setValue() method), but it happens and leads to some extra problems resulting from the read-through mechanism.
    I would not regard it as a bug; it is probably a case of documenting a possible optimization too early, when it ultimately did not get implemented. I definitely would not try to abuse it to set a value without triggering a db fetch, as, again, the intention of the synthetic flag is related not only to the cache loader functionality but also to events, marking whether a change indicates a real data change or a Coherence data-management action.
    > Now I understand why Coherence does not know whether this is an inserted or an updated value, thanks for the details.
    > Anton.
    > *Edited: I thought about this problem from the point of view of the Oracle user, but maybe this additional hit is necessary for event producers, which need to produce events containing the old/new values. In this case this seems to be the correct behaviour... It seems I need some other workaround to avoid the db hit. The best workaround is an empty load() method for the cachestore...
    You can try to find a workaround, but it is an ugly can of worms, because of the scenario when more than one thread tries to load the same entry and some of them try to load it with a reason.
    You may try to put some data into a thread-local to indicate that you don't really want to load that particular key. The problem is that, depending on configuration and race conditions, your cache loader may not be called to clean up that thread-local, as some other thread may be invoked right now, in which case its return value is going to be returned to all the other threads too, so you may end up with a polluted thread.
    Best regards,
    Robert
    Edited by: robvarga on Oct 15, 2010 4:25 PM
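    Robert's point - the old value is fetched because cache events carry both old and new values - can be illustrated with a toy read-through map. All names here are invented; this is not the Coherence backing-map API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Toy read-through cache illustrating why a store may load the previous value
// even when the caller never asked for it: update events are dispatched with
// BOTH the old and the new value, so the old value must be materialized first.
public class ReadThroughDemo {
    public interface Listener { void entryUpdated(String key, String oldValue, String newValue); }

    private final Map<String, String> backing = new HashMap<>();
    private final Function<String, String> loader;   // stands in for a JDBC load()
    private final List<Listener> listeners = new ArrayList<>();
    private int dbHits = 0;

    public ReadThroughDemo(Function<String, String> loader) { this.loader = loader; }
    public void addListener(Listener l) { listeners.add(l); }
    public int dbHits() { return dbHits; }

    public void put(String key, String newValue) {
        String old = backing.get(key);
        if (old == null && !backing.containsKey(key)) {
            dbHits++;                 // read-through: fetch the old value for the event
            old = loader.apply(key);
        }
        backing.put(key, newValue);
        for (Listener l : listeners) l.entryUpdated(key, old, newValue);
    }
}
```

    The "empty load()" workaround mentioned above amounts to making the loader return nothing, which is exactly why it becomes dangerous for threads that load with a real reason.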

  • Photoshop elements 6 freezes

    Hi friends, I need some help...
    Photoshop Elements 6 always freezes when trying to edit/fix any picture (guided edit, full edit, etc.)
    I've uninstalled and reinstalled it again and again, but always the same.
    Please give me some advice here!
    Thanks in advance.
    Regards and happy new year,
    Carlos

    ...it was the catalog that was somehow corrupted
    Thanks everybody for your help !!!!!!!!!!!!!
    And ok, I'm going to think about upgrading the HD...
    Thanks again,
    Carlos
    _cut & paste of the solution provided by Johnrellis_
    Repair or create a catalog if the Organizer won't start
    Start PSE to get to the Welcome screen. Hold down the Ctrl, Alt, and Shift keys and click Organize. Keep holding down the keys until you see the Catalog window. Select your catalog and click Repair. Select "repair anyway" even if PSE doesn't find problems. Then for good measure, click Optimize. (The documentation says that Optimize merely optimizes the performance of your catalog, but some people have reported that it also enabled catalog conversion to proceed after they initially encountered errors.)
    If you've changed PSE 6 to start in the Organizer or Editor and thus can't get to the Welcome screen, start the Editor, do Window > Welcome, and in the lower-left corner change it back to Start Up With Welcome Screen.
    If repairing the catalog doesn't help, repeat the steps above, but instead of clicking Repair, click New to create a new test catalog. If you're able to import photos into the new catalog and the Organizer works correctly, then there's likely something about the old catalog that is corrupted in some other way. In that case, try recreating the Organizer's thumbnails.

  • Photoshop Elements 4.0 and Tablet PC

    I use Photoshop Elements on a Toshiba M200 Tablet PC. Unfortunately, the brush function does not accept my Tablet PC as a pressure-sensitive tablet, even though it is pressure-sensitive. Does anyone have an idea how to configure this?

    This says PSE 6, but I have PSE 5; the Editor didn't open, I tried the catalog-repair steps above, and now everything works fine. Hope this might help you.

  • Photoshop Elements 9 won't start

    Hi
    I've successfully been using Photoshop Elements 7 for some time now, but have recently downloaded a trial version of Photoshop Elements 9 (PSE9). The installation was successful - no errors. When I started PSE9, the welcome screen started fine, and I selected the organiser. The familiar organiser started - then, before it completed its load, it crashed and returned me to the Windows desktop.
    Looking through the forums, the advice was to ensure the underlying software (Apple, Microsoft etc.) is OK....done that, it still would not start. I have also de-installed PSE7 and Premiere Elements 7, cleaned the registry, and renamed the PSA file......still nothing - the organiser won't start.
    Finally, I've started the editor - that works fine.....however, the organiser still won't start from the editor.
    Help appreciated
    Graham

    Start PSE to get to the Welcome screen. Hold down the Ctrl, Alt, and Shift keys and click Organize. Keep holding down the keys until you see the Catalog window.  Select your catalog and click Repair.  Select “repair anyway” even if PSE doesn’t find problems.
    Then for good measure, click Optimize.  (The documentation says that Optimize merely optimizes the performance of your catalog, but some people have reported that it also enabled catalog conversion to proceed after they initially encountered errors.)
    If repairing the catalog doesn’t help, repeat the steps above, but instead of clicking Repair, click New to create a new test catalog.
      If you’re able to import photos into the new catalog and the Organizer works correctly, then there’s likely something about the old catalog that is corrupted in some other way.
    Try this and let me know your comments.
    Thanks,
    Garry

  • Nesting of two tables

    Hey,
    Is it possible to nest two tables?

    Suhas Saha wrote:
    >
    > What Yuri is suggesting is the old "Parallel Cursor" technique
    >
    > Imho nested LOOPs can be handled easily (& elegantly) using SORTED / HASHED tables. Read the SAP documentation on the optimization of the WHERE condition in LOOPs for further details.
    >
    > BR,
    > Suhas
    I explicitly did not advise sorting the table containing the report source code, to avoid possibly destroying the order of the code lines - not because I forgot about loops on sorted tables.

  • Reducing the memory footprint of Sybase ASE

    Hello All,
    We are doing a test install of SAP Solution Manager 7.01 on Sybase ASE 15.7.
    Since this is just a test setup, we started off with a lower-than-recommended hardware configuration (4 GB RAM only) due to time constraints.
    While post-install performance of SolMan was decent, performance during solman_setup (setting up technical monitoring) has become appalling. We are not able to complete the configuration process at all, as the SolMan configuration web application has become very unpredictable and extremely slow.
    The SolMan install is centralized on a Windows 2008 box. Windows Task Manager shows consistent memory usage of up to 90 - 95%. We also tried reducing the total number of work processes to just 8, but that did not help much. We see in 'Task Manager > Resource Monitor' that the sqlserver.exe process is committing close to 2 GB of RAM, when in fact it seems to be using much less when viewed directly in the 'Task Manager > Processes' tab. Please tell us about any memory optimization we can perform for Sybase ASE in order to complete our solman_setup. We were hoping to change the 'max memory'/'total logical memory' setting for the DB directly using the DBACOCKPIT tcode, but could not do so, as the parameters seem to be read-only. We could not find much documentation regarding memory optimization for the DB. Please help out. Thanks!
    -Regards,
    Arvind

    FWIW ... ASE's 'max memory' setting can be changed on the fly, while 'total logical memory' is a calculated value that you cannot change (ie, it's 'read only'; changing 'max memory' will cause 'total logical memory' to change automatically). [NOTE: DBACOCKPIT is a SAP-provided application that sits on top of ASE; while I know what's doable when connected directly to ASE I do not know if DBACOCKPIT has disabled the ability to change some configuration settings like 'max memory'.]
    As for the SolMan performance issues ... I'd recommend reposting your issue in the SAP Applications on ASE discussion group where you're likely to get the attention of more folks with SAP application (on ASE) experience.  (While someone may jump in here with SolMan suggestions, SolMan is a SAP application and this group isn't really geared towards SAP applications.)

  • DOCUMENTATION - Cisco Unified Workforce Optimization - Call Recording

    Hello,
    I'm trying to sell the Cisco Unified Workforce Optimization - Call Recording option to a customer, but he is asking me to send him some documentation about capacities and functionalities. I'm not able to find this kind of information on the website. Can anybody help me? Thanks!

    Hi,
    Please check the following link. hope this helps!!!
    http://www.cisco.com/en/US/products/ps8293/products_data_sheets_list.html
    Thanks,
    Dass
    Please rate useful posts

  • Error in Maintenance Optimizer while selecting files for download for EHP4

    HI,
    We are trying to download files for upgrade of ECC (HCM) and Portal systems from EHP3 to EHP4.
    We are doing this download through SolMan 7.0. SolMan is currently at 7.0 but is going to be upgraded to EHP1 soon.
    The problem is that after we select the files for download of ECC EHP4, in step 2.1, i.e. 'choose stack', the next screen shows the error "Only one EhP release is allowed additionally".
    Is this because SolMan is not yet upgraded to EHP1, or is there some other reason?
    Please let us know.
    Thanks for your time.
    Thanks & Regards,
    Kunal.

    ..have a look at the How-To Install EHP4 Guide (http://service.sap.com/~sapidb/011000358700000293582009E.PDF)
    3.4.2 Only one EHP release is allowed
    Issue: You get the error message: "Only one EHP release is allowed"
    Reason: This problem is documented in SAP Note 1139602.
    If you get this error message, then you already have installed another SAP enhancement package on this system before (for example SAP enhancement package 3 for SAP ERP 6.0).
    The error tells you that you have chosen too few technical usages. If you continued with this selection, you would not update all of the software components to SAP enhancement package 4 that are currently on SAP enhancement package 3. So in the end, some software components would be on SAP enhancement package 3 while others would be on SAP enhancement package 4. That way you would have several SAP enhancement package versions in one system, which is a state that is not supported.
    Solution: SAP Solution Manager Maintenance Optimizer requests you to select more technical usages and proposes a list for you to choose from.
    If you implement SAP Note 1287216, then Maintenance Optimizer automatically pre-selects the technical usages that are already installed on the system. In the example here, the technical usage Central Applications has been installed on a previous version of SAP enhancement packages.
    In rare cases you still get the error after accepting what SAP Solution Manager proposes. The reason for this is that you updated software components without properly covering technical usages.
    Hope that helps - best regards,
    Christian.

  • Rule based & Cost based optimizer

    Hi,
    What is the difference Rule based & Cost based optimizer ?
    Thanks

    Without an optimizer, all SQL statements would simply do block-by-block, row-by-row table scans and table updates.
    The optimizer attempts to find a faster way of accessing rows by looking at alternatives, such as indexes.
    Joins add a level of complexity - the simplest join is "take an appropriate row in the first table, scan the second table for a match". However, deciding which is the first (or driving) table is also an optimization decision.
    As technology improves, a lot of different techniques for accessing the rows or joining the tables have been devised, each with its own optimum data-size:performance:cost curve.
    Rule-Based Optimizer:
    The optimization process follows specific defined rules, and will always follow those rules. The rules are easily documented and cover things like 'when are indexes used', 'which table is the first to be used in a join' and so on. A number of the rules are based on the form of the SQL statement, such as order of table names in the FROM clause.
    In the hands of an expert Oracle SQL tuner, the RBO is a wonderful tool - except that it does not support such advanced features as query rewrite and bitmap indexes. In the hands of the typical developer, the RBO is a surefire recipe for slow SQL.
    Cost-Based Optimizer:
    The optimization process internally sets up multiple execution proposals and extrapolates the cost of each proposal using statistics and knowledge of the disk, CPU and memory usage of each of the proposals. It is not unusual for the optimizer to analyze hundreds, or even thousands, of proposals - remember, something as simple as a different order of table names is a proposal. The proposal with the least cost is generally selected to be executed.
    The CBO requires accurate statistics to make reasonable decisions.
    Even with good statistics, the complexity of the SQL statement may cause the CBO to make a wrong decision, or ignore a specific proposal. To compensate for this, the developer may provide 'hints' or recommendations to the optimizer. (See the 10g SQL Reference manual for a list of hints.)
    The CBO has been constantly improving with every release since its inception in Oracle 7.0.12, but early missteps have given it a bad reputation. Even in Oracle8i and 9i Release 1, there were countless 'opportunities for improvement' <tm>. As of Oracle 10g, the CBO is quite decent - sufficiently so that the RBO has been officially deprecated.
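    The "extrapolate a cost for each proposal and pick the cheapest" process can be reduced to a toy sketch. The cost formulas below are made up purely for illustration and are nothing like Oracle's real model:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Toy cost-based optimizer: each access-path proposal gets an estimated cost
// from table statistics, and the cheapest proposal wins.
public class ToyCbo {
    record Proposal(String plan, double cost) {}

    public static Proposal choose(long tableBlocks, long tableRows,
                                  long matchingRows, boolean hasIndex) {
        List<Proposal> proposals = new ArrayList<>();
        // Full scan: read every block of the table.
        proposals.add(new Proposal("FULL TABLE SCAN", tableBlocks));
        if (hasIndex) {
            // Invented cost: ~3 index levels plus one block visit per matching row.
            proposals.add(new Proposal("INDEX RANGE SCAN", 3 + matchingRows));
        }
        return Collections.min(proposals, Comparator.comparingDouble(Proposal::cost));
    }
}
```

    The sketch also shows why accurate statistics matter: with a wildly wrong `matchingRows` estimate, the cheapest-looking proposal is no longer the genuinely cheapest one.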

  • Optimization of remote calls of EJB

    Hello,
    does SAP Web AS support automatic optimization of remote calls of EJBs, such that the overhead associated with remote calls is avoided iff the target EJB of the call actually runs in the same JVM (and would therefore allow a local call)? This would imply that in this case objects are passed by reference instead of by value.
    To clarify: I am aware of the fact that EJBs can be called through Local and LocalHome interfaces. But that prevents the distribution of the EJBs. What I am looking for is to always use the Remote and Home interfaces (remote call) and let the AppServer optimize the call to be effectively local where possible.
    From what I know, JBoss and WebLogic support this feature. Is there anything like that for the Web AS. What do I need to configure?
    Any hint is greatly appreciated. Please let me know if you need additional clarification on my question. Thanks!
    With kindest regards,
    Nick.

    Hi Nick,
    The optimizations I was talking about are a proprietary internal functionality and not something application developers can rely on. That's why they are not documented in any external documentation. According to your problem, my proposal is to declare both the remote and local interfaces of the beans and use the proper one depending on whether the bean client wants to pass parameters by value or by reference.
    SAP does not have plans to dynamically (or automatically, as you call it) switch from calling by value to calling by reference, as this is not just a performance optimization - it breaks functionality. If we decided to do it, we would have at least two problems:
    1. Incompatibility with the RMI specification
    2. Incompatibility with previous versions
    As I already mentioned, there are EJB components that rely on being called by value, no matter whether the client resides in the same JVM or is a remote one.
    I still cannot see your goal - you are both insisting on remote interfaces and expecting objects to be passed by reference.
    Best regards,
    Viliana
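    Viliana's warning - silently replacing call-by-value with call-by-reference is not a pure optimization, it changes semantics - can be demonstrated in plain Java; the copy step below stands in for RMI marshalling:

```java
import java.util.ArrayList;
import java.util.List;

// Shows why components that rely on call-by-value break under transparent
// call-by-reference: a callee that mutates its parameter is harmless when the
// parameter was copied (remote-call semantics) but corrupts the caller's
// object when it is passed by reference (local-call semantics).
public class CallSemanticsDemo {
    // An EJB-style method that (carelessly) mutates its argument.
    static void process(List<String> items) { items.add("processed-marker"); }

    static List<String> copyOf(List<String> original) {
        return new ArrayList<>(original);   // stands in for RMI serialization
    }

    public static boolean callerCorrupted(boolean byReference) {
        List<String> callerState = new ArrayList<>(List.of("order-1"));
        process(byReference ? callerState : copyOf(callerState));
        return callerState.size() != 1;     // did the call leak into the caller?
    }
}
```

    Whether the mutation leaks back to the caller depends entirely on the call semantics, which is why declaring both Remote and Local interfaces and choosing explicitly is the safe option.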

  • Column optimization in GUI_DOWNLOAD - Excel

    Hi Experts,
    I am writing an Excel file using the GUI_DOWNLOAD function module. Is there any way to do column optimization in the Excel file while downloading?
    Thanks and regards,
    Venkat

    Hi,
    There is complete and very good documentation by SAP available at this URL. Please read it.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/204d1bb8-489d-2910-d0b5-cdddb3227820
    Hope your query gets solved.
    Thanks and regards,
    Ramani N
