Performance impact related to workbooks

Hi All,
Please let me know the performance impact on a workbook if I insert 13 worksheets.
How much of a performance impact will there be?
Alternatively, if I split all the reports across just 2 worksheets,
how much of a performance impact will there be then?
Please let me know which is the better approach.
Urgent!
Thanks,
Sarau

Hello Sarau,
The performance depends purely on the query selection, since that decides the number of records to be fetched.
The more you filter the query, the better the performance will be.
Since you have many queries in one workbook, make sure that they all share a common variable screen.
Thanks
Chandran

Similar Messages

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size which is also highly fragmented because of constant DML activities.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database if I create an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: does index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in database performance impact between creating the index online and not online?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested that the applications be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which holds an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column with NULLs there, it took only 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented copy in Prod, compared to only 15 minutes on the highly defragmented copy in Pre-Prod.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis
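    A minimal sketch of the two operations discussed above, in SQL*Plus (the index and column names are made up; the workarea statistics are standard cumulative counters in v$sysstat):
    -- Build the index online so DML against TBL can continue during the build.
    create index tbl_col1_idx on tbl(col1) online parallel 4;
    -- Check whether sort workareas completed in memory (optimal), in one pass,
    -- or in multiple passes.
    select name, value
    from v$sysstat
    where name like 'workarea executions%';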

  • Performance impact on the size of the CHM file

    Is there any impact on performance depending on the size of a
    CHM file?

    The main issues people have with help file performance
    (regardless of whether it is a CHM file) are related to the number
    of images, DHTML hotspots, bookmarks and links they have in a
    topic. The number of topics in a CHM should not be an issue. What
    exactly are you trying to assess the performance impact of?

  • Performance impact in Oracle 8i - BLOB vs BFILE

    Hi Guys,
    We are evaluating interMedia to store multimedia objects.
    Does anyone know if storing and retrieving documents in an Oracle database has an impact on standard data stored in the database?
    Is it worth having a separate database instance for storing tables with interMedia objects?
    Pal

    Part 2:
    Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips comprising a total size of 250MB (average size 512K bytes). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes or 250 k bytes for the index and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
    create table video_items (
    video_id number,
    video_clip ordsys.ordvideo
    )
    -- storage parameters for table in general
    tablespace video1 storage (initial 1M next 10M)
    -- special storage parameters for the video content
    lob (video_clip.source.localdata) store as
    (tablespace video2 storage (initial 260k next 270M)
    disable storage in row nocache nologging chunk 32768);
    Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56K bytes. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space efficient to choose a smaller chunk size, say 8K, to store the data in the lob. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
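    A hypothetical DDL sketch for Example 2, mirroring the Example 1 table but with the smaller 8K chunk size (table, column, and tablespace names are made up for illustration):
    create table image_items (
    image_id number,
    image ordsys.ordimage
    )
    tablespace img1 storage (initial 1M next 10M)
    lob (image.source.localdata) store as
    (tablespace img2 storage (initial 1M next 300M)
    disable storage in row nocache nologging chunk 8192);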
    Estimating retrieval costs
    Performance testing has shown that Oracle can achieve equivalent and even higher throughput performance for media content retrieval than a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. For the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQLNet performance. These tests were performed on Windows NT 4 SP5.
    Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million to 7 million bytes/sec as the number of clients ranged from 1 to 5.) One reason for the very high relative CPU cost at the higher end of performance is that as the 100 Mb/s network approaches saturation, the system used more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
    NOTE WELL: The extra CPU cost factors pertain only to media retrieval aspect of the workload. They do not apply to the entire system workload. See example.
    Example: A file based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.

  • Performance Impact When Using SNC Communication

    Hello,
    Does anybody know if and how much performance impact there is if we use SNC for communication between the SAP Server and SAPGUI?
    I think there are two areas that may be impacted; Network and server CPU.
    For network load, I did find a part in "Front-End Network Requirements for SAP Business Solutions" document saying "overhead of roughly 350 bytes per user interaction step" but it does not specify the type of encryption.  I wonder if there is any other info on this?
    For CPU impact, how much overhead should I consider for sapgui access?
    I see no field for this in the quicksizer and I can't seem to find any white papers on this subject.
    Thank you in advance.

    >
    Peter Adams wrote:
    > Ken,
    >
    > if you plan to use SAPcryptlib for SNC between SAP servers, then you should use a SAPcryptolib-compatible solution for the SNC communication between SAPGUI and SAP server, and there is only one vendor who can provide this. Let me know, if you need help finding it. My contact information is in my SDN business card.
    Just so Kan is clear - It is not legal to use the SAP cryptolib provided by SAP for SNC between SAP GUI and SAP servers, so if x.509 is the desired mechanism you need to purchase additional software from the company which Peter works for to provide SAP GUI SNC-based SSO. I think instead, Kan might be using the free SAP supplied SNC Kerberos library, which is why I asked him to confirm this in my last post. I doubt he is interested to buy any third party software.
    > As to the performance discussion: first of all, yes, there will be a small performance impact if SNC is used (no matter which type or implementation), but from our experience with many actual SNC implementations, I can state that this is practically not relevant. It is not noticeable by users. There were never any performance discussions with customers. See also SAP Note 1043694.
    I agree with this - the performance impact is not noticed by users, but the system managers who look after the servers where SAP is installed, and the team responsible for the network need to be aware of any differences (if any) when SNC is turned on and when SNC is turned off. I think this is why Kan is asking these questions, not because he is concerned about users noticing any difference when they logon to SAP.
    > Just a first quick comment on certain statements above: Tim's arguments for proving his overall statement are not conclusive from my perspective. Nor do I think his overall statement itself is correct.
    The facts I mentioned are well-known facts, e.g. symmetric crypto is far better from a performance point of view than asymmetric. I know the examples I showed, which I found when doing a quick Google search, were not conclusive, but they were offered as initial examples, not necessarily the best examples. This is why I specifically mentioned that if you search Google yourself you will see many more references where comparisons are done between Kerberos (i.e. symmetric) and PKI (i.e. asymmetric).
    > First of all, he only selects one aspect of performance - CPU impact of encryption algorithms.
    No, I didn't. Some of the examples I referred to also discuss other differences. I also mentioned other differences, such as memory and what protection level is used when configuring SNC.
    > But for a true comparison, you'd have to look at all relevant aspects (latency, network overhead, ...).
    Yes, I agree. No doubts here.
    >Network performance overhead is usually worse with Kerberos than with PKI.
    This is not true. When SAP is using SNC, the GSS-API standard is used and so the only network communication involves SAP software sending a standard GSS token from the workstation to the SAP server, and this GSS token is often about the same size, regardless of which mechanism is used, so any network performance differences are not related to the mechanism, but more related to the complexity of the cryptography used on each end (mostly on the server side).
    >Second, you need to look at the specific usage scenario. For example, the first report referenced by Tim is an analysis of different Token Profile mechanisms for WS-Security, for one specific implementation. This does not allow one to draw any conclusions for the SNC use case in general, and certainly not for a specific implementation. It does not take the overhead of encrypting the message content into account. Third, Tim associates PKI exclusively with asymmetric encryption. Yes, it is well known that asymmetric algorithms are slower than symmetric ones, but it is also well known that the encryption of the message content (by far the majority of the data) happens with symmetric encryption algorithms in the PKI scenario. With PKI-based SNC, you can even select a symmetric algorithm and use a more performant one than the ones that Kerberos prescribes.
    Kerberos works with many different symmetric algorithms as well, so mentioning that the algorithm is selectable is not relevant to any comparison.
    > To summarize, I will try and collect facts that will support the opposite point of view. From our practical experience, the performance overhead is not relevant, and criteria like consistency with SAPcryptolib, strength of security, ease of administration, choice of authentication and encryption mechanism, etc. are much more important.
    >
    > Peter
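    For reference, a minimal sketch of the server-side instance profile parameters behind the "protection level" discussed above (assumed standard SNC parameters; 1 = authentication only, 2 = integrity, 3 = privacy, with higher levels costing more CPU):
    snc/enable = 1
    snc/data_protection/min = 2
    snc/data_protection/max = 3
    snc/data_protection/use = 3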

  • Active Table Logging T000 performance impact

    Hi fellow SAP experts,
    I need some advice on the system performance impact of switching on table logging for T000 (configuration) in production, please.
    We have decided to turn on table logging for auditing purposes, only allowing developer configuration in production after a volume of evidence has been supplied.
    I need to know how much this activation is going to impact the performance of the company's production environments: how much storage, memory, performance, etc. this function is going to consume, and how much of the above consumables I need to cater for now and in the future.
    We have a dual-track environment: BAU want to switch on table logging for fix-on-fail, and I want to switch it on for project deliveries.
    Please advise, with references if possible.
    Thank you kindly
    Paul

    Hi Paul,
    There has been a constant debate about whether table logging affects system performance, especially in a production environment. Please see my comments below:
    1) To turn on logging for table T000, you will have to activate the parameter rec/client, with values for one client or all the clients in the system, depending on your requirement (see the example below).
    2) This parameter setting will not only log changes to T000, but also to over 28,000 other tables.
    3) But these are customizing tables, which usually contain a relatively small amount of data that is changed occasionally.
    4) After activating it, if you suddenly find performance issues, you can check which tables are causing them via transaction SCU3.
    5) You can go to transaction SE13 and deactivate logging for a table if you find too many entries for any particular table in SCU3.
    So, logging tables doesn't necessarily impact performance. Hope this helps. Please refer to SAP Note 608835 related to this.
    Best Regards,
    Savitha
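    A minimal sketch of the profile parameter from point 1 above (instance profile; the client numbers are made up):
    # log changes in all clients
    rec/client = ALL
    # or log changes in specific clients only
    # rec/client = 100,200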

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support to their enterprise software, what is your plan for quantifying the performance impact and the functional correctness of using web services before going live with the final application?
    Empirix is hosting a free one-hour web event discussion on web services testing and automated web services testing solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web events being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promise of integrating multiple software applications from heterogeneous networks and of exchanging information from vendor-to-vendor or vendor-to-consumer in a standardized way.
    As web service technologies are deployed within and across organizations over the next several years, it will be critical that web services undergo performance testing. As with any enterprise software project, the adoption of proper test methodologies and use of testing tools will play a key part in the overall success or failure of projects utilizing web services. In a compressed software project schedule, an organization must quickly determine if its web services will operate successfully under a variety of load conditions. Like other web-based technologies, successful web services will need to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges created by this emerging technology, along with the variety of testing solutions available. Automated web service testing will be discussed and demonstrated using FirstACT, the first web services performance testing solution available on the market. Using a sample web service, automatic test case creation, scalability testing, and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT

  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application.
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promises of integrating
    multiple software applications from heterogeneous networks and for
    exchanging information from vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • Performance issues -- related to printing

    Hi All,
    I am having production-system performance issues related to printing. End users are telling me that printing is slow for almost all printers. We have more than 40 to 50 printers in the landscape.
    As per my primary investigation, I didn't find any issues in the TSP01 and TSP02 tables. But I can see that the TST01 and TST03 tables have a very large number of entries (more than 100,000). I don't know much about these tables. Is there anything related to them that could cause the printing slowness, or are there other factors behind this printing issue? Please advise.
    Thanks in advance

    Hi,
    Check the below link...
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
    It is enough if you perform the regular background jobs and Temse consistency checks for the tables.
    This will help in controlling the capacity problems.
    If you change the profile parameter rspo/store_location value to 'G', performance will improve. The disadvantages are that TemSe data must then be backed up and restored separately from the database using operating-system tools, and that in the event of problems it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. You also have to take care of the hard-disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the 'G' option, you must ensure that enough free disk space is available for spool data.
    Regards,
    Yoganand.V
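    A minimal sketch of the profile change described above ('G' stores spool data in the global file system; the default 'db' keeps it in the TemSe database tables):
    rspo/store_location = G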

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables created based on objects, which will be passed between multiple functions in the package.
    Will it have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User

    user9080289 wrote:
    > While creating a package, iam using lot of nested tables created based on objects which will be passed between multiple functions in the package..
    Not the best of ideas in general, in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
    > Will it have any performance impact since all the data is stored in the memory.
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA. Private process memory. This means each process copy running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as the data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on having it in physical memory, that it trashes memory management, as the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory. Most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    > How can i measure the performance impact when the data grows ?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is a factor - just how much private process memory your code needs in order to execute.
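    A minimal sketch of both points above, for a SQL*Plus session (the statistic names are standard; the package and type names are hypothetical):
    -- How much PGA is this session using now, and at its peak?
    select sn.name, st.value
    from v$statname sn, v$sesstat st
    where st.statistic# = sn.statistic#
    and sn.name in ('session pga memory', 'session pga memory max')
    and st.sid = sys_context('userenv', 'sid');
    -- Pass large collections by reference (NOCOPY) so they are not duplicated
    -- on the call stack for every call.
    create or replace package demo_pkg as
      type t_num_tab is table of number;
      procedure process(p_data in out nocopy t_num_tab);
    end demo_pkg;
    /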

  • Regarding performance impact if I do DB accessing coding in comp Controller

    Hi ,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. The tool generates Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these class files use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server
    The issue:
    Performance: the first time it is OK, but on consecutive calls the application degrades very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue because I have written the data-access code in the component controller rather than in some beans?
    My questions regarding
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally DB access code should be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled at the COM+ components in VB.
    Since I am using a COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects which retrieve the data are closed and nullified.
    My question is:
    if I write the DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran

  • Before you can perform print-related tasks such as page setup - Internet Explorer 8

    I understand there have been similar threads on this but they all related to printing directly out of Acrobat reader rather than a browser.
    On some machines, when printing a PDF from within Internet Explorer, the error "before you can perform print-related tasks such as page setup" is displayed and users are unable to print. I have removed printers and changed the default printer, none of which has worked. If you change Acrobat Reader to not display PDFs in Internet Explorer, printing works fine.
    System Information
    Windows 7 Enterprise
    Acrobat Reader 9.3.0
    Internet Explorer 8
    Any information regarding this would be appreciated.
    Cheers
    Jason

    This works in Photoshop CS4: SOLUTION!!! I get the same message ("before you can perform...", etc.) on some images that I've printed before, but not on others. I haven't found the cause, but here's an easy WORKAROUND: open your file, Select All, Copy, File > New, Paste... and print away!

  • EBS performance impact using it as a Data Source

    I have a quick question on EBS performance. If I set up the EBS database as a data source for SSRS (SQL Server Reporting Services), would there be a performance impact on EBS due to SSRS accessing EBS data for report generation? Now, I know there'll always be a hit depending on the volume of data being accessed. But my question is: will it be significantly higher with an external reporting tool over an ODBC connection rather than native XML Publisher?

    Hi,
    Tough to answer without looking at data; my suggestion would be to set up a test EBS environment, get permission from the vendors to run performance tests without buying a license, compare AWRs from both scenarios, and then decide.
    Generally speaking, native XML Publisher (BI Publisher) has less of a database performance hit than external reporting tools using ODBC.
    Hope this helps.
    Regards,
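    A minimal sketch of the AWR comparison suggested above (AWR requires the Diagnostics Pack; feed the resulting snapshot IDs to $ORACLE_HOME/rdbms/admin/awrrpt.sql to generate the report):
    -- Take a snapshot immediately before the SSRS test window...
    exec dbms_workload_repository.create_snapshot
    -- ...run the SSRS reporting workload, then take a closing snapshot.
    exec dbms_workload_repository.create_snapshot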

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact from using OR concatenations or IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# IN (10,20,30);
    - Iteration over an enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 "OR-ed" values
    So I want to know if there is any performance impact from using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose ?).
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000
    ;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100)
    ;
    set autotrace off

    The execution plan I got from 8.1.7.4 was as follows, showing the transformation to a UNION ALL - this is concatenation, and it required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)

    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Performance Impact for the Application when using ADFLogger

    Hi All,
    I am very new to ADFLogger and I am going to implement it in my application. I went through Duncan Mills' articles and got the basic understanding.
    I have some questions to clarify.
    Is there any performance impact from using ADFLogger that slows the application down?
    Are there any best practices to follow with ADFLogger to minimize the negative impact (if it exists)?
    Thanks
    Dk

    Well, a call to a logger is a method call. So if you add a log message for every line of code, you'll see an impact.
    You can implement it in a way that only writes the log messages if the log level is set to a level which your logger writes (or lower). In that case the impact is like having an if statement, plus a method call if the if statement returns true.
    After this theory, here is my personal finding, as I use ADFLogger quite a lot. In production systems you turn the log level to WARNING or higher, so you will not see many log messages in the log. Only when a problem is reported do you set the log level to a lower value to get more output.
    I normally use the 'check log level before logging a message' and the 'just print the message' approaches combined. When I know that a message is printed very often, I first check the level. If I assume or know that a message is seldom logged, I just log it.
    I personally have not seen a negative impact this way.
    Timo
