Performance ISSUE related to AGGREGATE

Hi gems, can anybody give me a list of issues we can face related to aggregate maintenance in a support project?
It's very urgent, please respond. Any links or other material would help.
My mail id is
    [email protected]

Hi,
Try this.
"---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
Run your query in RSRT in debug mode: select "Display Aggregates Found" and "Do not use cache" in the debug options. This will tell you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in this table.
Generate Report in RSRT 
http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
/people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
/people/prakash.darji/blog/2006/01/26/query-optimization
Cube Performance
/thread/785462 [original link is broken]
Thanks,
JituK

Similar Messages

  • Performance issues -- related to printing

    Hi All,
    I am having production system performance issues related to printing. End users report that printing is slow on almost all printers. We have more than 40 to 50 printers in the landscape.
    In my initial investigation I didn't find any issues in the TSP01 & TSP02 tables. But I can see that the tables TST01 and TST03 have a large number of entries (more than a hundred thousand). I don't have any idea about these tables. Is there anything related to these tables that could cause the printing slowness, or are there other factors behind this printing issue? Please advise.
    thanks in advance

    Hi,
    Check the links below:
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
    It is enough if you perform the regular background jobs and Temse consistency checks for the tables.
    This will help in controlling the capacity problems.
    If you change the profile parameter rspo/store_location to the value 'G', performance will improve. The disadvantages: TemSe data must be backed up and restored separately from the database using operating system tools, and in the event of problems it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. You also have to take care of the hard disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the 'G' option, you must ensure that enough free disk space is available for spool data.
    Regards,
    Yoganand.V

  • Performance issues related to logging (ForceSingleTraceFile option)

    Dear SDN members,
    I have a question about logging.
    I would like to place my logs/traces for every application in different log files. To do this you have to set the ForceSingleTraceFile option to NO (in the Config Tool).
    But in a presentation of SAP, named SAP Web Application Server 6.40; SAP Logging and Tracing API, is stated:
    - All traces by default go to the default trace file.
         - Good for performance
              - On production systems, this is a must!!!
    - Hard to find your trace messages
    - Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
    But I also want the logs/traces at our customers' (production) systems in separate files. So my questions are:
    What performance issues do we face if we set the ForceSingleTraceFile option to NO at our customers?
    and
    If we set ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine that it will be difficult to find the logs of the different SAP applications.
    I hope that someone can clarify the working of the ForceSingleTraceFile setting.
    Kind regards,
    Marinus Geuze

    Dear Marinus,
    The performance issues with extensive logging are related to high memory usage (for concatenation/generation of the messages that are written to the log files) and, as a result, increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
    Writing to the same trace file can become a bottleneck if logging is extensive.
    Anyway, this is not related to whether you write the logs to the default trace or to a custom location. I believe the recommendation in the documentation is just about using the standard logging APIs of the SAP Java Server, because they are well optimized.
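    Much of the message-generation cost described above can be avoided regardless of which trace file is used. As a minimal sketch (using plain java.util.logging for illustration, not SAP's own logging API), guard expensive message construction behind a level check so the string is never built when the level is disabled:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger LOG = Logger.getLogger(GuardedLogging.class.getName());

    static int messagesBuilt = 0;

    // Stands in for an expensive concatenation/serialization of state.
    static String expensiveDump() {
        messagesBuilt++;
        return "state=" + System.nanoTime();
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // FINE messages are filtered out

        // Guarded: the expensive string is only built when FINE is enabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(expensiveDump());
        }

        // The guard prevented the concatenation and the GC pressure with it.
        System.out.println(messagesBuilt); // prints 0
    }
}
```

    An unguarded `LOG.fine(expensiveDump())` would build the string on every call even though nothing is written, which is exactly the memory and GC overhead mentioned above.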
    Best regards,
    Sylvia

  • Performance issue related to OWM? Oracle version is 10.2.0.4

    The optimizer picks a hash join instead of a nested loop for queries on OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If you have seen this and know how to solve it, it would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
    Thanks for any possible answers!

    Ha, sounded like you knew what I was talking about :)
    I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they use regular tables. There's a batch job which used to take an hour to run and now takes 4.5 hours; when I rewrote the job to move the queries from OWM to regular tables, it took 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with large amounts of data, I got the same full-table-scan problem with hash joins. So I'm convinced that it probably is not OWM. But the patch removing the bug fix didn't help with the situation either.
    I was hoping that other companies might have had this problem and found a workaround. If it's not OWM, I'm surprised that this only happens in our system.
    Thanks for the reply anyway!

  • Performance issue related to EP 6.0 SP2 Patch 5

    We have implemented SAP EP 6.0 SP2 Patch 5. We have also configured IIS 6.0 to access our portal from the internet.
    When we access the portal from the internet, it is very slow. Sometimes pages take 5-10 minutes to load.
    I am using the caching technique for the iViews. I wanted to know whether caching is a good idea, because the iViews are taking a lot of time to load.
    I would really appreciate any comments or suggestions.

    Paritosh,
    I think you need to analyze the issue step by step, as the response time seems to be very high. Here are a few suggestions. High response time could be due to many factors: server side, network, and client browser settings. Let us analyze the case step by step.
    1) Do a basic test accessing the EP within the same network (LAN), to eliminate the network and verify that everything works fine within the LAN.
    2) If performance is not acceptable within the LAN, then access over WAN or Internet will not be better anyway. If LAN performance is not acceptable (this requires knowing your acceptable response time, say 5 seconds or so), you need to find out whether you have large contents on the page you are accessing. You need to know how many iViews you have on the page. What kind of iViews are they; do they go to a backend system? If they go to the backend, how do they connect: via ITS or JCo/RFC? If it goes through ITS, how about accessing the same page directly via ITS; do you get the same problem? If you are going via JCo, have you monitored the RFC traffic (size of data and number of round trips, using ST05)?
    There could be many other potential issues. Have you done proper tuning of the EP JVM parameters, threads, etc.? Are you using keep-alive settings in the dispatcher, firewall, and load balancer (if any)? Is compression enabled in the J2EE server? Do you use content expiration on the J2EE server? How are your browser cache settings?
    In summary, we would like to start with the EP landscape with all components and make sure the response time is acceptable within the LAN. If we are happy there, we can look into the network part for WAN/Internet performance.
    Hope it will give you a few starting points. Once you provide more information, we can follow-up.
    Thanks,
    Swapan

  • Performance issue related to Wrapper and variable value retrievel

    If I have an array of int (a primitive array) and, on the other hand, an array of its corresponding wrapper class, is there any performance difference between these 2 cases? If in my code I am doing a conversion from primitive to wrapper object, does that affect my performance, given that auto-boxing already exists?
    Another issue: if I access the value of a variable name (defined in a superclass) in a subclass via 'this.getName()' rather than 'this.name', is there any performance difference between the 2 cases?

    If I have an array of int (primitive array) and on the other hand an array of its corresponding wrapper class, is there any performance difference between these 2 cases? If in my code I am doing a conversion from primitive to wrapper object, is that affecting my performance, as there is already a concept of auto-boxing?

    I'm sure there is. It's probably not worth worrying about until you profile your application and determine it's actually an issue.

    Another issue is that if I access the value of a variable name (defined in a superclass) in a subclass by 'this.getName()' rather than 'this.name', is there any performance difference between the 2 cases?

    Probably, but that also depends on what precisely getName() is doing, doesn't it? This is a rather silly thing to be worrying about.
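    For the first question, a small self-contained sketch of what the difference actually is: a boxed array holds object references that must be allocated when written (auto-boxing) and dereferenced when read (auto-unboxing), while a primitive array is plain memory. Whether this matters for you should still be settled by profiling, as the answer says.

```java
public class BoxingDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] primitives = new int[n];
        Integer[] boxed = new Integer[n];   // each element is a heap object reference

        for (int i = 0; i < n; i++) {
            primitives[i] = i;
            boxed[i] = i;                   // auto-boxing: allocates (or reuses a cached) Integer
        }

        long sum1 = 0;
        for (int v : primitives) sum1 += v; // plain arithmetic, no indirection

        long sum2 = 0;
        for (Integer v : boxed) sum2 += v;  // each access unboxes via Integer.intValue()

        System.out.println(sum1 == sum2);   // prints true: same result, different cost
    }
}
```

    For the second question, note that the JIT compiler routinely inlines a trivial getter like getName(), so in practice the two forms usually compile to the same machine code.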

  • Performance Issue related to SAP EP

    Hi All,
    Performance-wise, I would like to know which one is better: SAP EP or SAP GUI?
    Also, how good is SAP EP at handling large-scale data entry transactions and printing jobs?


  • VC - Compile and Deploy performance issues related to UserID

    Dear Guru's,
    I'm currently working at a customer where a small team of 4 is working with VC 7.0.
    One user has very long compile and deploy times. We first thought it was related to his workstation.
    Then one of the other guys logged in on his PC and ran the compile + deploy, and then it suddenly took seconds again.
    So we created a new user ID for this user, "<oldUI>+test", and suddenly all was back to normal for him.
    But, and here it comes, we then deleted his old user ID and created it again, and the issue is still there.
    So my assumption is that there is some kind of faulty record or index or something else strange linked to his user ID.
    What can this be and how can we solve it?
    Thanks in advance!
    Benjamin

    Hi Anja,
    We use VC on 7.0 and we do not have any integration with the DTR.
    So in other words we use the default way of working with VC.
    The user had his models in his Personal folder and then moved them to the Public folder so that other colleagues could see/try them as well. It doesn't matter where the model is stored (Public or Personal); as long as this specific user ID is used, compiling/deploying goes very slowly. The log files don't give much info on why this happens.
    Cheers,
    Benjamin

  • Performance issue related to BSIS table:pls help

    There's a SELECT statement which fetches data from the BSIS table.
    As the only key field used in the WHERE clause is BUKRS, it consumes a lot of time. Below is the code.
    Could you please tell me how to improve this piece of code?
    I tried fetching first from the BKPF table based on the selection screen parameter t001-bukrs, and then reading from BSIS FOR ALL ENTRIES in BKPF, but it didn't work.
    Your help would be much appreciated. Thanks in advance.
      SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
    shkzg mwskz dmbtr wrbtr wmwst prctr kostl
               FROM bsis
               INTO TABLE i_bsis
               FOR ALL ENTRIES IN i_ccode
               WHERE bukrs EQ i_ccode-bukrs
               AND   budat IN i_date.
    Regards
    Akmal
    Moved by moderator to the correct forum
    Edited by: Matt on Nov 6, 2008 4:10 PM

    Don't go for FOR ALL ENTRIES; it will not help in this case. Do it as below, and you should see a lot of performance improvement.
    SELECT bukrs waers ktopl periv
           FROM t001
           INTO TABLE i_ccode
           WHERE bukrs IN s_bukrs.
    SORT i_ccode BY bukrs.
    LOOP AT i_ccode.
      SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
             shkzg mwskz dmbtr wrbtr wmwst prctr kostl
             FROM bsis
             APPENDING TABLE i_bsis
             WHERE bukrs EQ i_ccode-bukrs
             AND   budat IN i_date.
    ENDLOOP.
    I don't know why this performs better than "bukrs IN s_bukrs", but this will help, I'm sure; this approach helped me.
    Edited by: Karthik Arunachalam on Nov 6, 2008 8:52 PM

  • Performance Issue related to RFC

    Hi All,
    I am moving attachments from CRM to R/3. For this I am using an RFC. If I am attaching multiple files at a time, do I need to call the RFC in a loop, or should I call it once for all attachments? Which gives better PERFORMANCE?
    One more thing: if I call the RFC in SYNCHRONOUS mode, what happens if the server on the other side is down for two to three days?
    If I call the RFC in ASYNCHRONOUS mode, I need to work with the return values of the RFC. How do I handle this situation?
    Please reply as early as possible.
    Thanks,
    Saritha

    Hi,
    If an RFC channel already exists between the client and the server, the same channel is reused between the systems. Hence, even calling the RFC in a loop should not be a problem, but the data then goes through the channel one call at a time. Try to send the attachments in a table instead, as this goes as one chunk of data.
    In the ASYNCHRONOUS case too, if you want to receive results, the called system must be up. The syntax is:
    CALL FUNCTION 'FM' STARTING NEW TASK task
      DESTINATION <dest>
      PERFORMING <form> ON END OF TASK
      EXPORTING ...
      EXCEPTIONS ...
    FORM <form> USING taskname.
      RECEIVE RESULTS FROM FUNCTION 'FM'
        IMPORTING ...
    ENDFORM.
    But in any case, the called system must be open for connections.
    If possible, try tRFC calls instead.
    Regards,
    Goutham

  • Query on Performance issues relating to a report

    Hi Group,
    I have an issue while running a report which creates business partners (both the company and the contact person, as well as the relationship between them).
    This report calls BAPIs (for creating the business partners and for creating the relationships), and it takes too much response time.
    I think the reason is the BAPI calls, but I want to know from you whether that is the real cause or whether it might be something else.
    Please share your inputs on this.
    thanks in advance.
    Regards,
    Vishnu.

    Hi
    I think it's always better to use the provided standard FMs and BAPIs to change data in the system instead of writing directly to the tables.
    One thing you can do is use parallel processing. E.g., if 10,000 BPs must be created, schedule 4 jobs to create the BPs instead of 1 job creating the whole lot.
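    The partitioning idea above, sketched in Java rather than as ABAP background jobs (createPartner is a hypothetical stand-in for the BP-creation BAPI call): split the total workload into equal slices and let one worker process each slice concurrently.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPartition {
    // Hypothetical stand-in: in the real system each job would call the
    // BP-creation BAPI for one business partner.
    static void createPartner(int id) { /* BAPI call would go here */ }

    public static void main(String[] args) throws Exception {
        int total = 10_000, jobs = 4;
        ExecutorService pool = Executors.newFixedThreadPool(jobs);
        List<Future<Integer>> slices = new ArrayList<>();

        // Split [0, total) into `jobs` contiguous slices, one worker each.
        for (int j = 0; j < jobs; j++) {
            int from = j * total / jobs, to = (j + 1) * total / jobs;
            slices.add(pool.submit(() -> {
                int done = 0;
                for (int id = from; id < to; id++) { createPartner(id); done++; }
                return done;
            }));
        }

        int done = 0;
        for (Future<Integer> f : slices) done += f.get(); // wait for all slices
        pool.shutdown();
        System.out.println(done); // prints 10000
    }
}
```

    The same slicing logic applies when scheduling 4 background jobs in SAP: give each job a disjoint range of the input so no two jobs touch the same records.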
    Kind regards, Rob Dielemans

  • CPU Performance Issue

    We are facing a performance issue related to cube refresh. Even a small cube consumes around 70% of the CPU while refreshing.
    Details of the cube: this cube has 10 dimensions and 46 straightforward measures (a mix of SUM and AVG as aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view which is partitioned in the same way as the cube.
    Data Volume: 1200 records in the source to be processed daily (almost evenly distributed across the partition)
    Cube is refreshed using: DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 (on AIX 6.1 64 bit), AWM - awm11.2.0.2.0A
    Hardware Configuration
    4.2 GHz, 3.5 CPU Core 32 GB RAM
    Has anyone faced similar kind of issue? Is there any server level / database level parameter that needs tweaking to control this behaviour?

    Here is another trick to combine sum and average measures in the same cube. It relies on the AGGCOUNT function in OLAP DML, which you can look up in the reference guide.
    Suppose that you have a cube with two measures, A and B, and that you want to aggregate A using SUM and aggregate B using AVG.
    Step 1: Make the cube be compressed and aggregate all measures (A and B) using SUM.
    If you do this using AWM, then the solve specification should include a MAINTAIN COUNT clause. To double check, look at the user_cubes view for your cube (TEST in my example).
    select consistent_solve_spec from user_cubes where cube_name = 'TEST';
    CONSISTENT_SOLVE_SPEC
    SOLVE
    (
      SUM
        MAINTAIN COUNT
        ALLOW OVERFLOW
        ALLOW DIVISION BY ZERO
        IGNORE NULLS OVER ALL
    )
    You can hand edit the XML for the cube if this hasn't happened. Here is what you want to see in the XML for the cube.
        <ConsistentSolve>
          <![CDATA[SOLVE
    (
      SUM
        MAINTAIN COUNT
        OVER ALL
    )]]>
        </ConsistentSolve>
    Don't worry about the slight difference in syntax -- this is due to different printing routines in the client Java and the server C code.
    Step 2: Verify that the cube's VARIABLE has the WITH AGGCOUNT option.
    My cube is named TEST, so the variable is named TEST_STORED, and my cube is dimensioned by TIME and PRODUCT. You can run this in the OLAP Worksheet.
    dsc test_stored
    DEFINE TEST_STORED VARIABLE LOCKDFN NUMBER WITH NULLTRACKING WITH AGGCOUNT CHANGETRACKING <TEST_PRT_TEMPLATE <TEST_MEASURE_DIM TIME PRODUCT>>
    Step 3: Define a new Calculated Measure, B_AVG, in the cube to get the average for measure B.
    Select "OLAP DML Expression" as the "Calculation Type" and enter the following expression. Obviously you need to adjust for the cube and measure names. I am putting new lines into this for readability.
    QUAL(
      NVL2(AGGCOUNT(TEST_STORED), TEST_STORED / AGGCOUNT(TEST_STORED), TEST_STORED),
    TEST_MEASURE_DIM 'B')
    Step 4: Specify the $LOOP_VAR on the new measure
    Execute the following in the OLAP Worksheet. (Again, correct for measure and cube names.) It instructs the server to loop the cube sparsely. If you don't do this, you will get dense looping and poor query performance. You only need to do this once (per calculated measure). If you save the AW to XML after this, then the LOOP_VAR value will be saved in the XML itself.
    call set_property('$LOOP_VAR' 'TEST' 'B_AVG' 'TEST_STORED')
    For reporting purposes you should look at measures A and B_AVG.

  • Cube Refresh Performance Issue

    We are facing a strange performance issue related to cube refresh. A cube which used to take 1 hour to refresh now takes around 3.5 to 4 hours, without any change in the environment. The data it processes is also almost the same as before. Only this cube, out of all the cubes in the workspace, has suffered this performance degradation over a period of time.
    Details of the cube:
    This cube has 7 dimensions and 11 measures (a mix of SUM and AVG as aggregation operators). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view which is partitioned in the same way as the cube.
    Data Volume: 2480261 records in the source to be processed daily (almost evenly distributed across the partition)
    Cube is refreshed with the below script
    DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Has anyone faced a similar issue? Please advise on what might be the cause of the performance degradation.
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
    AWM - awm11.2.0.2.0A

    Take a look at DBMS_CUBE.BUILD documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube.htm#ARPLS218 and DBMS_CUBE_LOG documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_cube_log.htm#ARPLS72789
    You can also search this forum for more questions/examples about DBMS_CUBE.BUILD
    David Greenfield has covered many Cube loading topics in the past on this forum.
    Mapping to Relational tables
    Re: Enabling materialized view for fast refresh method
    DBMS CUBE BUILD
    CUBE_DFLT_PARTITION_LEVEL in 11g?
    Reclaiming space in OLAP 11.1.0.7
    Re: During a cube build how do I use an IN list for dimension hierarchy?

  • Cache-flush VM-related performance issue

    Dear forum,
    I've got a peculiar performance issue going on with the BDB pagecache being flushed to disk. I've managed to reproduce the issue perfectly on three out of three quite different systems that I've tried on, so it is at least quite well-defined.
    My usage pattern for the database in question is such that I periodically (perhaps once every 10-60 seconds or so) need to read through an amount of values (around 500-2000 or so) from a database containing a rather large amount (in the millions, at least) of keys. There are a few writes for every such batch, but not very many (a couple of tens). The keys that are read each batch are quite random, and very likely to be completely different from batch to batch. The database is a DB_HASH.
    When I do that, BDB seems to dirty a lot of pages in the page cache (which I have currently sized at 512 MB so that pages don't have to be forced out from it), from what I can tell by manipulating refcounts and stuff, so all in all, a single batch seems to dirty some 10-40 MB or so of the mmapped cache region. (I check this using pmap -x on Linux.) Note that when I speak of pages and the dirtying of them here, I mean at the VM level, not the BDB level.
    A while after this has happened, the VM comes around and wants to flush the dirty pages to disk, so it batches writes of large portions (often the entire set of dirty pages, but sometimes it only does 10-20 MB or so at a time; this detail shouldn't matter) of the dirtied pages to the backing block device. Since the dirty pages are often rather interspersed in the region file, such a flush usually requires a couple of thousands of write ops, so it might sometimes take up to 10-20 seconds for the requests to complete.
    If the program, then, again tries to dirty any of the pages while they are waiting to be flushed, which is often the case, the VM will block it until the page in question is flushed. This means that the thread in question might very well be blocked for up to 20 seconds, causing quite annoying wait times.
    How to deal with this problem? I've considered trying to put the region files on tmpfs or so, but that seems like such an excessive measure for a problem which, from what I can tell, should be commonplace.
    On a very related note, I've noticed a large discrepancy in the I/O performance between the systems I've tried this on. Two of the systems in question manage to carry out some 200-500 write ops per second on my test load, while the third manages closer to 2000-3000 write ops per second, which makes quite a difference. What makes it very weird is that the faster system uses the exact same hard drive as one of the slower systems. I know this isn't exactly a BDB-specific question, but I thought someone around here might have experience in the matter. All three systems use Linux and S-ATA hard disks (not SSDs), but they use different S-ATA host adapters, different kernel versions and are configured in quite different ways.
    Thanks for reading my wall of text! I'm sorry for dragging on so long, but I didn't know how to describe the situation more briefly.
    Edited by: Dolda2000 on Mar 23, 2013 8:08 AM

    As a follow-up on this, it appears that the blocking behavior was introduced in Linux 3.0 to stabilize pages under writeback:
    http://lwn.net/Articles/486311/
    It seems that the commits that introduced the behavior can be safely patched away, and also that it is due to change in 3.9, but for now, this is not the route I took to solve it.
    Rather, I wrote a patch to Berkeley DB to allow me to store the region files in another directory than the environment root directory, and used it to store them in /dev/shm -- that is, on tmpfs, which avoids writeback of the region files altogether.
    If you want the patch, it is here for db4.8 (which what Debian Stable uses), and here for 5.1, which is what Debian Testing uses.
    (For some reason, the hyperlink format suggested by the forum doesn't seem to be working?)

  • Performance issues related to Advanced Table in Advanced Table

    Hi,
    Can anybody let me know what the performance issues are with an advanced table inside an advanced table? I am having a big performance issue while implementing an advanced table in an advanced table; my inner table renders very slowly.
    Thanks

    A table in a table is a performance-eating structure :), because your VL (view link) will cache both parent and child VO rows in the JVM. The only way to improve the performance is to tune your SQL queries.
    --Mukul
