Performance Impact with InfoCube Compression

Hi,
Is there any delivered content which gives a comparative analysis of performance before and after InfoCube compression? If not, what is the best way to gather such stats?
Thank you,
sam

The BW Technical Content cubes/queries can tell you if a query is performing better at different points in time.  I always like to compare a volume of queries before and after, rather than look at a single execution.  As mentioned, ST03 can provide info, as can RSRT.
Three major components of compression that can aid performance:
Compression
The compression itself - how many rows you end up with in the E fact table compared to what you had in the F fact table.  This all depends on the data - some cubes compress quite a bit, others not at all, e.g.
Let's say you have a cube with a time grain of Calendar Month, and you load transactions to it daily.  A particular combination of characteristic values occurs on a transaction every day, so after a month you have 30 transactions spread across 30 requests in the F fact table.  Now you run compression - those 30 rows compress to just 1 row.  You have now reduced the volume of data in your cube to about 3% of what it used to be, and queries should run much faster.  In real life I doubt you would see a 30-to-1 reduction, but perhaps 2-to-1 or 3-to-1 is reasonable.  It all depends on your data and your model.
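Conceptually, the compression step is just an aggregation that drops the request ID. A hedged SQL sketch of the idea (table and column names are illustrative - BW generates its own /BIC/F and /BIC/E table names):

  -- the 30 daily rows for one characteristic combination collapse to 1 row,
  -- because the request ID is no longer part of the grouping
  insert into e_fact_table (time_dim_id, char_dim_id, amount)
  select time_dim_id, char_dim_id, sum(amount)
  from   f_fact_table
  group by time_dim_id, char_dim_id;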
Zero Elimination
Some R/3 applications generate transactions where all the key figures are 0, or generate transactions that offset each other, netting to 0.  Specifying Zero Elimination during compression gets rid of those records.
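In SQL terms, zero elimination amounts to filtering out fully-zero rows while compressing - a hedged sketch with illustrative names:

  -- drop records where every key figure nets to zero
  delete from e_fact_table
  where amount = 0
    and quantity = 0;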
Partitioning
The E fact table can be partitioned on 0FISCPER or 0CALMONTH.  If you have queries that restrict on those characteristics, the DB can narrow in on just the partitions that hold the relevant data (usually referred to as partition pruning).  If a query only goes after 1 month of data from a cube that holds 5 years of data, this can be a big benefit.
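A hedged sketch of what this looks like at the database level (Oracle syntax, made-up names; BW creates the real partitioned E fact table for you when you set up partitioning on 0CALMONTH):

  create table e_fact_table (
    calmonth    number(6),   -- e.g. 201101 for 0CALMONTH
    char_dim_id number,
    amount      number
  )
  partition by range (calmonth) (
    partition p201101 values less than (201102),
    partition p201102 values less than (201103),
    partition pmax    values less than (maxvalue)
  );

  -- restricting on the partitioning column lets the optimizer prune
  -- to one partition instead of scanning five years of data
  select sum(amount) from e_fact_table where calmonth = 201101;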

Similar Messages

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact when using OR concatenations or IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# in (10,20,30);
    - Iteration over the enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 OR-ed values
    So I want to know if there is any performance impact when using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete, but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"); my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose?)
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100);
    set autotrace off
    The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation, and it required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Performance tuning with HTTP compression

    We are currently using Oracle 11g and IE8. The 11g UI has been pretty slow, and when I looked up performance tuning, one of the methods was HTTP compression, as described in the link below -
    http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui
    The information above was actually excerpted from Oracle Support note 1312299.1.
    Now I have made the changes as suggested, but am wondering how I can test whether the changes have actually improved performance. The note doesn't mention any testing or verification methods.

    Just curious, what is the compression ratio?
    I mean, by how much does the volume come down when the records are moved from the F to the E table?
    Also, is the time spent on compression of the cube, on the aggregates, or on the deletion of records from the F table?
    What is the ratio of aggregate volume to cube volume?
    You might want to set a trace on the compression session to get answers on where it is spending most of its time.
    Compression is equivalent to executing a query on the F table, summarising the results with the request ID dropped (set to '0'), appending the results to the E table, and deleting the compressed request from the F table.
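    If you go the trace route, a hedged Oracle sketch (the SID and serial# are hypothetical - look up the compression session in V$SESSION first):

      -- enable extended SQL trace, including wait events, for the compression session
      exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => false)
      -- ... let the compression run, then:
      exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456)
      -- format the resulting trace file with tkprof to see where the time went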
    Thanks.

  • Performance impact with Parent Hierachy 2 and a new dimension

    Hello all
    I have a Consolidation requirement for which I have two solutions. These are:
    Sol A>> Create a new dimension for reporting on accounts.
    Sol B>> Create a new Parent Hierarchy in Account dimension.
    Please help me on two open questions:
    1. If comparing the performance, which solution is better?
    2. Which solution has lesser impact on Business rules?
    Regards
    Abhishek

    When you add a member into one dimension you have to add it to the second dimension as well if you are using two dimensions instead of one. So that means double maintenance.
    Business rules will not be impacted if you use a second hierarchy, but they can be impacted if you create a new dimension instead of using a second hierarchy.
    I hope this makes sense.
    Regards
    Sorin Radulescu

  • Usage Tracking performance impact

    Hi all,
    I just started to implement the OBIEE usage tracking functionality.
    Has anyone implemented it who can share some experience on the performance impact with/without usage tracking enabled?
    Thanks
    BCHK

    If you use the direct_insert option, the impact will be minimized. In general, usage tracking captures report run activity and logs it into a table, so there is some overhead on the system to insert into the table or write to a file. The trade-off is the information you get with usage tracking.
    hope this helps..
    Edited by: Kasyap on Mar 23, 2013 10:32 PM

  • DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?

    Hi All,
    To free up space, I have been advised to use the command below on the primary database in log shipping. I just want to clarify whether it has any performance impact on the primary database in log shipping, and also whether it is a recommended practice to run this command at regular intervals when the log is using much of the drive's space. Please advise. Thank you.
    "DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)"
    Regards,
    Kalyan
    ----Learners Curiosity Never Ends----

    Hi Kalyan \ Shanky
    I was not clear in the linked conversation, so I am adding something:
    As per http://msdn.microsoft.com/en-us//library/ms189493.aspx
    -----> TRUNCATEONLY is applicable only to data files.
    BUT
    As per http://technet.microsoft.com/en-us/library/ms190488.aspx
    TRUNCATEONLY affects the log file.
    And I also tried it; it does work.
    Now TRUNCATEONLY: releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified with TRUNCATEONLY.
    So:
    1. If I remove unused space, it will not affect log shipping and no log chain will be broken.
    2. If you clear unused space, it will not touch existing data, so there is no performance issue (see the sketch below).
    3. If you clear space and then, due to other operations, the log file auto-grows, it will put unnecessary pressure on the database to allocate disk every time. So once you find the max growth of the log file, leave it alone, as it will only grow to the same size again anyhow.
    4. Shrinking the log file is not recommended if it keeps reaching the same size again and again, unless you have a space crunch.
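    For reference, a minimal T-SQL sketch of the TRUNCATEONLY variant discussed above (file name taken from the question):

      -- release only the free space at the end of the log file to the OS;
      -- no page movement, and any target size is ignored
      DBCC SHRINKFILE ('CommonDB_LoadTest_log', TRUNCATEONLY);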
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • Drawbacks of Infocube compression

    Hi Experts,
    is there any drawbacks of infocube compression??
    Thanks
    DV

    Hi DV
    During the upload of data, a full request is always inserted into the F fact table. Each request gets its own request ID and partition (DB dependent), which is contained in the 'package' dimension. This feature enables you, for example, to delete a request from the F fact table after the upload. However, it may also result in several entries in the fact table with the same values for all characteristics except the request ID. That increases the size of the fact table and the number of partitions (DB dependent) unnecessarily, and consequently decreases the performance of your queries. During compression, these records are summarized into one entry with the request ID '0'.
    Once the data has been compressed, some functions are no longer available for it (for example, it is not possible to delete the data for a specific request ID).
    You should compress your InfoCubes regularly, especially transactional InfoCubes in a BPS environment.
    Compression also has an impact on queries that hit an aggregate, since the aggregates are rebuilt every time compression finishes.
    With non-cumulative InfoCubes, compression has an additional effect on query performance: the marker for non-cumulatives is updated, which means that, on the whole, less data is read for a non-cumulative query, and the response time is therefore reduced.
    "If you are using an Oracle database as your BW database, you can also carry out a report using the relevant InfoCube in reporting while the compression is running. With other manufacturers’ databases, you will see a warning if you try to execute a query on an InfoCube while the compression is running. In this case you can execute the query once the compression has finished executing."
    Hope this may help you
    GTR

  • Performance impact of using Web Services?

    As BEA and other vendors continue to add Web Services support
    to their enterprise software, what is your plan for
    quantifying the performance impact and the functional
    correctness of using web services before going live with the
    final application?
    Empirix is hosting a free one hour web event discussion on
    web services testing and automated web services testing
    solutions on Thursday, January 17, 2-3pm Eastern time.
    To sign up for this web event or learn about other web events
    being offered by Empirix this month, go to:
    http://webevents.empirix.com
    For your convenience, here is the complete abstract:
    The advent of web services has brought the promises of
    integrating multiple software applications from
    heterogeneous networks and for exchanging information
    from vendor-to-vendor or vendor-to-consumer in a
    standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be
    critical that web services undergo performance testing.
    As with any enterprise software project, the adoption of
    proper test methodologies and use of testing tools will
    play a key part in the overall success or failure of
    projects utilizing web services. In a compressed
    software project schedule, an organization must
    quickly determine if its web services will operate
    successfully under a variety of load conditions. Like other
    web-based technologies, successful web services will need
    to respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing
    challenges created by this emerging technology, along with
    the variety of testing solutions available. Automated
    web service testing will be discussed and demonstrated
    using FirstACT, the first web services performance testing solution available
    on the market. Using a sample web
    service, automatic test case creation, scalability testing,
    and results analysis will be explored.
    If you wish to download FirstACT prior to the web event, you can do so at:
    http://www.empirix.com/downloads/FirstACT

  • Performance impact of Web Services

    As WebLogic adds support for Web Services to its platform, what is
    your plan for quantifying the performance impact and the functional
    correctness of using web services before going live with the final
    application.
    Empirix is hosting a free one hour web event discussion on web
    services testing and automated web services testing solutions on
    Thursday, January 17, 2-3pm Eastern time.
    To register for this web event or learn about other web events being
    offered by Empirix this month, go to:
    http://webevents.empirix.com
    The complete abstract is below:
    The advent of web services has brought the promises of integrating
    multiple software applications from heterogeneous networks and for
    exchanging information from vendor-to-vendor or vendor-to-consumer in
    a standardized way.
    As web service technologies are deployed within and across
    organizations over the next several years, it will be critical that
    web services undergo performance testing. As with any enterprise
    software project, the adoption of proper test methodologies and use of
    testing tools will play a key part in the overall success or failure
    of projects utilizing web services. In a compressed software project
    schedule, an organization must quickly determine if its web services
    will operate successfully under a variety of load conditions. Like
    other web-based technologies, successful web services will need to
    respond quickly and correctly when implemented.
    During our presentation, we will discuss the testing challenges
    created by this emerging technology, along with the variety of testing
    solutions available. Automated web service testing will be discussed
    and demonstrated using FirstACT, the first web services performance
    testing solution available on the market. Using a sample web service,
    automatic test case creation, scalability testing, and results
    analysis will be explored.

    Hi,
    We tested several frameworks and found that JAXB 2.0 usually performs better than XMLBeans, but that is not a strict rule.
    Regards,
    LG

  • Queries Performance impact

    Hi Team,
    We have a few queries which were running fine until last week, but for the past 3 days these queries have been facing some severe performance issues and timeout dumps in the back-end.
    For some selections they run long, for some they execute quickly, and for some they time out.
    Three days ago we did a complete data rebuild (from the source) for the data targets the queries are connected to; the query performance issue started after that.
    No changes were made to the queries or objects in the last 2 months.
    Data Flow - Query -> MultiProvider -> InfoSet -> InfoCube -> DSO -> DataSource (DB Connect).
    Note:
    In the query we have nested aggregation to handle the result rows, but again, no changes to it in the past 2 months.
    We have loaded the data into the InfoCube in one single request.
    Does a single request with some 2 million records across different plants have a performance impact when reading the data?
    Can anyone please throw light on the possible cause for the performance issue?
    Thanks
    Regards
    San

    Hi San,
    As you said that your performance issues started only after the complete reload - can you please tell us whether you are using a BIA (BW Accelerator) for reporting?
    If not, can you please delete the DB statistics for those InfoCubes and then recreate them (see the sketch below)?
    Also, since a complete rebuild means drop and reload, your PSA/temporary table space might have filled up completely. Ask your Basis team to check the space in those tables.
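    A hedged Oracle-side sketch of the statistics refresh suggested above (owner and table names are hypothetical; BW fact tables look like /BIC/F<cube>):

      exec dbms_stats.delete_table_stats(ownname => 'SAPR3', tabname => '/BIC/FMYCUBE')
      exec dbms_stats.gather_table_stats(ownname => 'SAPR3', tabname => '/BIC/FMYCUBE', cascade => true)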
    Regards,
    Rajesh

  • Performance problem with Mavericks.

    Performance problem with Mavericks. My Mac is extremely slow after upgrading to Mavericks. What can I do to solve that?

    If you are still experiencing slowdown issues, it may be because of a few other reasons.
    Our experience with OS X upgrades, and Mavericks is no exception, is that users have installed a combination of third party software and/or hardware that is incompatible and/or is outdated that causes many negative performance issues when upgrading to a new OS X version.
    Your Mac's hard drive may be getting full.
    Do you run any antivirus software on your Mac? Commercial Antivirus software can slow down and negatively impact the normal operation of OS X.
    Do you have apps like MacKeeper or other maintenance apps like CleanMyMac 1 or 2 or TuneUpMyMac installed on your Mac? While these types of apps appear to be helpful, they can do too good a job of data "cleanup", with the potential to cause serious data corruption or deletion and render a perfectly running OS completely dead and useless, leaving you with a frozen, non-functional Mac.
    Your Mac may have way too many applications launching at startup/login.
    Your Mac may have old, non-updated or incompatible software installed.
    Your Mac could have incompatible or outdated web browser extensions, plugins or add-ons.
    Your Mac could have connected third party hardware that needs updated device drivers.
    It would help us to help you if we could have some more technical info about your iMac.
    If you so choose, please download, install and run Etrecheck.
    Etrecheck was developed as a simple Mac diagnostic report tool by a regular Apple Support forum user and technical support contributor named Etresoft. Etrecheck is a small, unobtrusive app that compiles a static snapshot of your entire Mac hardware system and installed software.
    This is a free app that was honestly created to provide help in diagnosing issues with Macs running the new OS X 10.9 Mavericks.
    It is not malware and can be safely downloaded and installed onto your Mac.
    http://www.etresoft.com/etrecheck
    Copy/paste and post its report here in another reply so that we have a complete profile of your Mac's hardware and installed software and can continue to help with your Mac's performance issues.
    Thank you.

  • Index creation online - performance impact on database

    hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table with more than 255 columns, about 400GB in size, which is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of a table 'TBL' while applications are active against the same table? Basically: does index creation on an object during DML operations on the same object have a performance impact, and is there a major difference between creating the index online and offline?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400GB in size, is highly fragmented, and has about 140 million rows.
    I requested the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which holds an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column with NULLs there, it only took 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented Prod copy, compared to only 15 minutes on the highly defragmented Pre-Prod copy.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments ?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
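    For reference, the kind of statement under discussion would look like this (names hypothetical):

      -- ONLINE permits concurrent DML during the build, at the cost of the
      -- brief locks and journal apply described above; PARALLEL speeds the scan
      create index tbl_i1 on tbl(col1) online parallel 4;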
    Regards
    Jonathan Lewis

  • Performance impact using nested tables and object

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables created based on objects, which will be passed between multiple functions in the package.
    Will it have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User
    Edited by: user9080289 on Jun 30, 2011 6:07 AM
    Edited by: user9080289 on Jun 30, 2011 6:42 AM

    user9080289 wrote:
    "While creating a package, I am using a lot of nested tables created based on objects which will be passed between multiple functions in the package."
    Not the best of ideas in general, in PL/SQL. This is not client code that can lay sole claim to most of the memory; it is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
    "Will it have any performance impact since all the data is stored in memory?"
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use - the PGA, i.e. private process memory. This means each process running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as the data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on having it in physical memory, that it thrashes memory management: the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory, and most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process cause it.
    "How can I measure the performance impact when the data grows?"
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is the factor - just how much private process memory your code needs in order to execute.
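    A minimal PL/SQL sketch of both points - passing the collection by reference with NOCOPY, and watching your session's PGA as the data grows (all names illustrative):

      create or replace package demo_pkg as
        type t_num_tab is table of number;          -- nested table type
        procedure process(p_rows in out nocopy t_num_tab);
      end demo_pkg;
      /
      create or replace package body demo_pkg as
        procedure process(p_rows in out nocopy t_num_tab) is
        begin
          -- NOCOPY asks PL/SQL to pass the collection by reference,
          -- avoiding a duplicate copy of the data on the call stack
          for i in 1 .. p_rows.count loop
            p_rows(i) := p_rows(i) * 2;
          end loop;
        end process;
      end demo_pkg;
      /
      -- watch the session's private memory while the data volume grows
      select n.name, s.value
      from   v$mystat s join v$statname n on n.statistic# = s.statistic#
      where  n.name in ('session pga memory', 'session pga memory max');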

  • Performance impact on the size of the CHM file

    Is there any impact on performance depending on the size of a
    CHM file?

    The main issues people have with help file performance (regardless of whether it is a CHM file) are related to the number of images, DHTML hotspots, bookmarks and links they have in a topic. The number of topics in a CHM should not be an issue. What exactly are you trying to assess the performance impact of?

  • Regarding performance impact if I do DB accessing coding in comp Controller

    Hi,
    This is my project requirement: I have to use a COM component which in turn fetches data from the database. I am using a Java-COM bridge tool to do this. The tool generates Java proxy classes for the VB COM component.
    I am using the Java proxy classes (these classes use JNI to connect to the VB COM component and fetch the data from the DB) in my Web Dynpro component controller.
    The architecture is as below:
    WEBDYNPRO >> Java proxy class objects (generated by the Java-COM bridge tool) >> Java-COM bridge tool >> VB COM+ component >> SQL Server.
    The issue:
    Performance - the first time it is OK, but on consecutive calls the application degrades very visibly, and after 4 iterations it hangs. When I look at the log I am getting this:
    Message : Exception occured during processing of Web Dynpro application com/oreqsrch/com.oreqsrchapp.OReqSrchApp.
    The causing exception is nested.
    [EXCEPTION]
    com.sap.tc.webdynpro.services.session.LockException: Thread SAPEngine_Application_Thread[impl:3]_36 failed to acquire exclusive lock on client session ClientSession(id=(J2EE9536400)ID1120562150DB11245826542790956137End_1159630423). Existing locks: LockingManager(ThreadName:SAPEngine_Application_Thread[impl:3]_36, exclusive client session lock:
    ClientSessionLock(SAPEngine_Application_Thread[impl:3]_9), shared client session locks: ClientSessionSharedLockManager([]), app session locks: ApplicationSessionLockManager([]), current request: com/oreqsrch/com.oreqsrchapp.OReqSrchApp).
    Hint: Take a thread dump of the server node to find the blocking thread that causes the problem.
    Is this issue because I have written the data access code in the component controller rather than in some beans?
    My question regarding this:
    What would the performance impact be if I write the DB access code in the Web Dynpro component controller rather than in a bean or an EJB? (I know that ideally the DB access code should be written in a bean or EJB.)
    Please address this from a performance point of view.
    thanks
    pkiran

    Hi Both,
    Thanks for the reply.
    Yes, they are closed and set to null.
    Connection max and min properties are controlled at the COM+ component in VB.
    Since I am using a COM-Java bridge, I am just invoking the methods defined in the VB code through the bridge tool. All the objects which retrieve the data are closed and nullified.
    My question is:
    If I write the DB access code in the component controller instead of in an EJB or Java bean, will there be any performance issue?
    regards
    pkiran
