Anyone else spending too much time DEBUGGING the tutorial?

Just a thought: I'm spending more time working through the fixes than learning J2EE.

I second that. What part of the Tutorial is holding you up?
-Ian Evans

Similar Messages

  • My high-school-aged child is spending too much time on Facebook and Tumblr, to the detriment of homework. Is there any way I can limit access to these sites to between 8pm and 10pm?

    My high-school-aged child is spending too much time on Facebook and Tumblr. Is there any way I can limit access to these sites to between 8pm and 10pm?

    System Preferences > Parental Controls has time limits - check out this intro to Parental Controls from Cult of Mac on YouTube.
    Clinton

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when several users execute the report at the same time, it takes too much time, and because of this the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' email IDs so that they do not need to log in to the portal, or can we create the same report on the cube?
    What would be the main difference between building the report on the cube and on the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try this to improve the performance of the query.
    First, find the query runtime. Where do you find it? See these SAP Notes:
    557870 - FAQ: BW Query Performance
    130696 - Performance trace in BW
    This info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions if possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs (restricted and calculated key figures).
    3. Avoid too many characteristics in the rows.
    You can find the query runtime using T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTATS to get the statistics.
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by email. This ensures the OLAP cache is used, so later report executions will retrieve the result faster from the OLAP cache.
    Also try
    1. Use the different views in ST03 to check two important figures: the aggregation ratio, and the ratio of records transferred to the front end to records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show the dimension table sizes as a percentage of the fact table size. If you mean the speed of queries on a cube as the performance metric of the cube, measure query runtime.
    3. To check the performance of the aggregates, open the aggregates and observe the VALUATION and USAGE columns.
    The signs in the VALUATION column rate the aggregate's design and usefulness: the more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation. "+++++" means the aggregate is potentially very useful, while "-----" means it is just overhead and can potentially be deleted.
    The USAGE column shows how often the aggregate has actually been used by queries.
    Thus you can check the performance of the aggregates.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Performance issue related to aggregates
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X, or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Administration Cockpit to work.
    Implement the BW Statistics Business Content: you need to install it, feed it data, and analyze it through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-code DB20, which gives you all the performance-related information, such as:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the program RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless by checking the RSDDSTATAGGRDEF* tables.
    Run the query in RSRT with statistics switched on and you will get a STATUID; copy this and look it up in the table.
    This shows exactly which InfoObjects the query hits; if any one of those objects is missing from the aggregate, the aggregate is useless for that query.
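    Not from the original reply, but as a hedged illustration: if you can query the database directly, the lookup amounts to something like the sketch below. The table name RSDDSTATAGGRDEF and the STATUID column are taken from this post; treat them as assumptions and check the actual table layout in SE11 first.
    -- Hedged sketch: list the statistics rows recorded for one query run,
    -- to see which aggregates/InfoObjects it touched.
    -- :statuid is the STATUID reported by RSRT after the statistics run.
    SELECT *
    FROM RSDDSTATAGGRDEF
    WHERE STATUID = :statuid;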
    6. Check table RSDDAGGRDIR in SE11; you can find the last call-up of each aggregate in the table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • Time Machine spends too much time backing up

    Even when I have very little changed information to back up - say 15MB - time machine takes 10-15 minutes.   Every hour.   It spends like 3-4 minutes "Setting up", another 5-8 minutes "Backing up XXX of 15.2MB", and then another 3-5 minutes "cleaning up".  
    Synchronize! X Plus takes barely over a minute to sync or back up the same changed data on the same drive: maybe a 1-minute compare and then 10 seconds of file copying, then done. Why is TM so much slower?
    Since I can't change how often TM runs (that's really annoying, BTW, Apple), this means that almost 25% of the time my Mac is disk-bound and slow. Extremely frustrating to work on. I often end up working on my 3-year-old MacBook Pro because it doesn't have to deal with TM (I just sync it to the desktop every couple of days with Synchronize! X Plus), so in practice it's a whole lot faster to work with than the mighty 2.66 GHz Quad-Core Xeon Mac Pro with 8GB RAM.
    I don't think it's anything wrong with the TM disk.   I've observed this kind of behavior on every mac I've used with TM in the last four years - at least three different macs and at least nine different external drives.
    Why on Earth does TM take geological time to back up my mac every hour?    I know it has some comparisons to do, but copying those files should only take a few seconds.     Does it behave this way for everyone, or am I just unlucky?

    this means that almost 25% of the time my mac is disk-bound and slow. 
    I run Time Machine on my server. It uses under 2 percent of CPU. I find that it works at low priority and does not interfere with other activities. I have not observed any substantial slowdown due to disk access or any other reason.
    I know it has some comparisons to do, but copying those files should only take a few seconds.
    If you want it locked up so hard that you cannot even type, it probably could copy those files in a few seconds. It is intended to do its work in the background, and allow you to continue to do other work.
    Since I can't change how often TM runs...
    Oh yes, you can. Apple did not give access to those parameters directly, but those settings and others are in several .plist files and can be changed with third-party utilities.
    What computer are you backing up, and how much memory is installed?
    Are you ever seeing any Pageouts on the Activity Monitor Memory display?
    Are you backing up directly to a local disk, over Wireless, or over Ethernet, and at what speed?
    Is the disk that contains the Time Machine backup files used for any other purpose?
    Have you ever made changes to the Time Machine disk using the Finder?

  • It is taking too much time to run the page from JDeveloper (R12)

    Hi Gurus
    When I run a page in R12 from JDeveloper, it takes about 15-20 minutes to render the page. This is the first time this has happened to me; I am working with JDeveloper 10g. Would someone please give me some hints? Am I missing some setting in JDeveloper?
    Thanx in advance
    Pratap

    Pratap
    The first thing that causes pages to load slowly is the location of your database server. If you are working locally and your DB server is located far away, your page loads slowly. You can also try connecting to a different server instance to rule out a problem with that one instance. Finally, if you run the page in debug mode, it will also load slowly.
    Please update the thread if you find any other cause, so that it may help others too.
    Thanks
    AJ

  • MacBook spends too much time loading

    Whether it's the internet loading advertisements (videos or interactives) or opening larger applications, my MacBook always seems to freeze up, either for a few seconds or minutes, or the program quits unexpectedly. Does anyone know if this strictly has to do with RAM, or maybe cookies?

    Have you repaired permissions? See Repair Permissions:
    http://support.apple.com/kb/HT1452
    How much free space do you have on your hard drive?
    What OS are you running?
    How much RAM do you have?
    Have you tried running any maintenance programs, such as Onyx?

  • It is taking too much time to run the page from JDeveloper

    Hi Gurus
    When I run a page from JDeveloper, it takes about 20-25 minutes to render the page. This is the first time this has happened to me; I am working with the JDeveloper 10g version. Would someone please give me some hints?
    Am I missing some setting in JDeveloper?
    Thanx in advance
    Pratap

    Thank you for the reply, Shay.
    I'm sorry; I feel I have not given complete information.
    I am using JDeveloper for Oracle EBS R12, downloaded the patch p8431482_R12_GENERIC from Metalink, and am running the same exercises that ship with JDeveloper.
    Thanx
    Pratap

  • Cursor spending too much time cloning

    I am profiling the performance of scanning through a database using a cursor. The breakdown looks something like this:
    Total - 3800 ms
    IN.fetchTarget - 1250 ms
    IN.latch - 300 ms
    Cursor.beginMoveCursor - 1260 ms
    Cursor.endMoveCursor - 920 ms
    Most of the time in begin and end move cursor is dealing with cloning. So effectively it is taking more time to deal with the cursor than to retrieve the data from disk. That doesn't sound right. Am I doing something wrong? Is there a way to not clone the cursor?
    If I use get for each element instead of the cursor, the time is comparable. Here is the breakdown:
    Total - 4800 ms
    IN.fetchTarget - 1250 ms
    Tree.search - 940 ms
    Cursor.init - 1380 ms (almost all the time incrementing an atomic integer)
    Cursor.close - 830 ms
    Cursor.lockLN - 300 ms
    I was hoping that by using the cursor I will cut down most of the overhead (search, repeated cursor init and close) and the time will come down to close to 1250 ms.

    I strongly suggest that rather than doing profiling, you look at your app throughput (without -ea or a profiler running), and look at the EnvironmentStats. The stats tell you what is going on in JE, and tuning based on the stats is normally the most fruitful. Looking at things in a profiler may help to optimize JE code itself, but is less likely to be fruitful for tuning your app, and a profiler is usually more beneficial after you have tuned the app, based on looking at the stats.
    --mark

  • Updating the iPod with iTunes takes too much time

    I have about 3,000 songs in my iTunes library, but when I change some tracks' names or change anything else about my songs, such as putting their names in capital letters or changing their artwork, iTunes takes too much time updating the changes to my iPod. Why?
    iPod video, Windows XP

    Generally the iPod update is pretty quick unless you are making many hundreds of changes at a time. It could be that the USB port is slow; try another port, preferably on the back of the PC, since some computers have underpowered ports on the front panel. Also, the recommended system requirements specify USB 2; the iPod will work from a USB 1 port, but much more slowly if that's the type of port you have.

  • Taking too much time in rules (DTP schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the loaded data.
    When I run the DTP, it takes too much time in the "rules" step (in the DTP monitor I can see the status package by package and step by step: "Start Routine", "Rules", and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    Regards,
    Sree

    Hi,
    The time taken at "rules" depends on the complexity of your routine. A complex calculation will take time.
    Also check your DTP batch settings, i.e., how many background processes are used to perform the DTP, and the job class.
    You can find these as follows:
    Go to the DTP, open the Goto menu, and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9), and change the job class to 'A'.
    If your DTP is still running, cancel it (i.e., kill the DTP), delete the request from the cube, change these settings, and run your DTP one more time.
    You should observe the difference.
    Reddy

  • Auto Invoice program taking too much time: problem with update SQL

    Hi,
    Oracle DB version: 11.2.0.3
    Oracle EBS version: 12.1.3
    Though we have a SEV-1 SR open with Oracle, we have not had much success.
    We have an Auto Invoice program which runs many times a day, and it has been taking too much time since the beginning. On troubleshooting, we found one query taking most of the time, and we seek suggestions on how to tune it. I am attaching the explain plan for it. It is an update query. Please guide.
    Plan
    UPDATE STATEMENT  ALL_ROWS  Cost: 0  Bytes: 124  Cardinality: 1 
      50 UPDATE AR.RA_CUST_TRX_LINE_GL_DIST_ALL
      27 FILTER 
      26 HASH JOIN  Cost: 8,937,633  Bytes: 4,261,258,760  Cardinality: 34,364,990 
      24 VIEW VIEW SYS.VW_NSO_1 Cost: 8,618,413  Bytes: 446,744,870  Cardinality: 34,364,990 
      23 SORT UNIQUE  Cost: 8,618,413  Bytes: 4,042,339,978  Cardinality: 34,364,990 
      22 UNION-ALL 
      9 FILTER 
      8 SORT GROUP BY  Cost: 5,643,052  Bytes: 3,164,892,625  Cardinality: 25,319,141 
      7 HASH JOIN  Cost: 1,640,602  Bytes: 32,460,436,875  Cardinality: 259,683,495 
      1 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      6 HASH JOIN  Cost: 853,567  Bytes: 22,544,143,440  Cardinality: 214,706,128 
      4 HASH JOIN  Cost: 536,708  Bytes: 2,357,000,550  Cardinality: 29,835,450 
      2 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 153,008  Bytes: 1,163,582,550  Cardinality: 29,835,450 
      3 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 307,314  Bytes: 1,193,526,000  Cardinality: 29,838,150 
      5 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 132,951  Bytes: 3,123,197,116  Cardinality: 120,122,966 
      21 FILTER 
      20 SORT GROUP BY  Cost: 2,975,360  Bytes: 877,447,353  Cardinality: 9,045,849 
      19 HASH JOIN  Cost: 998,323  Bytes: 17,548,946,769  Cardinality: 180,916,977 
      13 VIEW VIEW AR.index$_join$_027 Cost: 108,438  Bytes: 867,771,256  Cardinality: 78,888,296 
      12 HASH JOIN 
      10 INDEX RANGE SCAN INDEX AR.RA_CUSTOMER_TRX_N15 Cost: 58,206  Bytes: 867,771,256  Cardinality: 78,888,296 
      11 INDEX FAST FULL SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 62,322  Bytes: 867,771,256  Cardinality: 78,888,296 
      18 HASH JOIN  Cost: 748,497  Bytes: 3,281,713,302  Cardinality: 38,159,457 
      14 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 154,993  Bytes: 402,499,500  Cardinality: 20,124,975 
      17 HASH JOIN  Cost: 519,713  Bytes: 1,969,317,900  Cardinality: 29,838,150 
      15 TABLE ACCESS FULL TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 302,822  Bytes: 716,115,600  Cardinality: 29,838,150 
      16 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 149,847  Bytes: 1,253,202,300  Cardinality: 29,838,150 
      25 TABLE ACCESS FULL TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 157,552  Bytes: 5,158,998,615  Cardinality: 46,477,465 
      41 SORT GROUP BY  Bytes: 75  Cardinality: 1 
      40 FILTER 
      39 MERGE JOIN CARTESIAN  Cost: 11  Bytes: 75  Cardinality: 1 
      35 NESTED LOOPS  Cost: 8  Bytes: 50  Cardinality: 1 
      32 NESTED LOOPS  Cost: 5  Bytes: 30  Cardinality: 1 
      29 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_LINES_ALL Cost: 3  Bytes: 22  Cardinality: 1 
      28 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_LINES_U1 Cost: 2  Cardinality: 1 
      31 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 2  Bytes: 133,114,520  Cardinality: 16,639,315 
      30 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1  Cardinality: 1 
      34 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 20  Cardinality: 1 
      33 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 2  Cardinality: 1 
      38 BUFFER SORT  Cost: 9  Bytes: 25  Cardinality: 1 
      37 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 25  Cardinality: 1 
      36 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      49 SORT GROUP BY  Bytes: 48  Cardinality: 1 
      48 FILTER 
      47 NESTED LOOPS 
      45 NESTED LOOPS  Cost: 7  Bytes: 48  Cardinality: 1 
      43 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 4  Bytes: 20  Cardinality: 1 
      42 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N6 Cost: 3  Cardinality: 1 
      44 INDEX RANGE SCAN INDEX AR.RA_CUST_TRX_LINE_GL_DIST_N1 Cost: 2  Cardinality: 1 
      46 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUST_TRX_LINE_GL_DIST_ALL Cost: 3  Bytes: 28  Cardinality: 1 
    Oracle has suggested multiple patches, but they have not been helpful. Please suggest how I can tune this query; I don't have much experience with query tuning.
    Regards

    Hi Paul, My bad. I am sorry I missed it.
    Query as below :
    UPDATE RA_CUST_TRX_LINE_GL_DIST LGD
    SET (AMOUNT, ACCTD_AMOUNT) =
      (SELECT /*+ index(rec1 RA_CUST_TRX_LINE_GL_DIST_N6) ordered */
              NVL(LGD.AMOUNT, 0) - ( SUM(LGD2.AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * CTL.EXTENDED_AMOUNT ) ),
              NVL(LGD.ACCTD_AMOUNT, 0) - ( SUM(LGD2.ACCTD_AMOUNT) - ( DECODE(LGD.GL_DATE, REC1.GL_DATE, 1, 0) * DECODE(:B2, NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(CT.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) ) )
       FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX CT, RA_CUST_TRX_LINE_GL_DIST LGD2, RA_CUST_TRX_LINE_GL_DIST REC1
       WHERE CTL.CUSTOMER_TRX_LINE_ID = LGD2.CUSTOMER_TRX_LINE_ID
         AND CTL.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
         AND LGD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
         AND LGD2.ACCOUNT_SET_FLAG = 'N'
         AND REC1.CUSTOMER_TRX_ID = CT.CUSTOMER_TRX_ID
         AND REC1.ACCOUNT_CLASS = 'REC'
         AND REC1.LATEST_REC_FLAG = 'Y'
         AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD2.GL_DATE, TO_DATE( 2415021, 'J') )
       GROUP BY CTL.CUSTOMER_TRX_LINE_ID, REC1.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, CT.EXCHANGE_RATE ),
     PERCENT =
      (SELECT /*+ index(rec2 RA_CUST_TRX_LINE_GL_DIST_N6) */
              DECODE(LGD.ACCOUNT_CLASS || LGD.ACCOUNT_SET_FLAG,
                     'SUSPENSEN', LGD.PERCENT,
                     'UNBILLN', LGD.PERCENT,
                     'UNEARNN', LGD.PERCENT,
                     NVL(LGD.PERCENT, 0) - ( SUM(NVL(LGD4.PERCENT, 0)) - DECODE(REC2.GL_DATE, NVL(LGD.GL_DATE, REC2.GL_DATE), 100, 0) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD4, RA_CUST_TRX_LINE_GL_DIST REC2
       WHERE LGD.CUSTOMER_TRX_LINE_ID = LGD4.CUSTOMER_TRX_LINE_ID
         AND REC2.CUSTOMER_TRX_ID = LGD.CUSTOMER_TRX_ID
         AND REC2.CUSTOMER_TRX_ID = LGD4.CUSTOMER_TRX_ID
         AND REC2.ACCOUNT_CLASS = 'REC'
         AND REC2.LATEST_REC_FLAG = 'Y'
         AND LGD4.ACCOUNT_SET_FLAG = LGD.ACCOUNT_SET_FLAG
         AND DECODE(LGD4.ACCOUNT_SET_FLAG, 'Y', LGD4.ACCOUNT_CLASS, LGD.ACCOUNT_CLASS) = LGD.ACCOUNT_CLASS
         AND NVL(LGD.GL_DATE, TO_DATE( 2415021, 'J') ) = NVL(LGD4.GL_DATE, TO_DATE( 2415021, 'J') )
       GROUP BY REC2.GL_DATE, LGD.GL_DATE ),
     LAST_UPDATED_BY = :B1,
     LAST_UPDATE_DATE = SYSDATE
    WHERE CUST_TRX_LINE_GL_DIST_ID IN
      (SELECT /*+ index(rec3 RA_CUST_TRX_LINE_GL_DIST_N6) */
              MIN(DECODE(LGD3.GL_POSTED_DATE, NULL, LGD3.CUST_TRX_LINE_GL_DIST_ID, NULL) )
       FROM RA_CUSTOMER_TRX_LINES CTL, RA_CUSTOMER_TRX T, RA_CUST_TRX_LINE_GL_DIST LGD3, RA_CUST_TRX_LINE_GL_DIST REC3
       WHERE T.REQUEST_ID = :B5
         AND T.CUSTOMER_TRX_ID = CTL.CUSTOMER_TRX_ID
         AND (CTL.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE' )
              OR (CTL.LINE_TYPE = 'LINE' AND CTL.ACCOUNTING_RULE_ID IS NULL ))
         AND LGD3.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
         AND LGD3.ACCOUNT_SET_FLAG = 'N'
         AND REC3.CUSTOMER_TRX_ID = T.CUSTOMER_TRX_ID
         AND REC3.ACCOUNT_CLASS = 'REC'
         AND REC3.LATEST_REC_FLAG = 'Y'
         AND NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) = DECODE(:B4, 'INV', -1, 'REGULAR_CM', T.PREVIOUS_CUSTOMER_TRX_ID, NVL(T.PREVIOUS_CUSTOMER_TRX_ID, -1) )
       GROUP BY CTL.CUSTOMER_TRX_LINE_ID, LGD3.GL_DATE, REC3.GL_DATE, CTL.EXTENDED_AMOUNT, CTL.REVENUE_AMOUNT, T.EXCHANGE_RATE
       HAVING ( SUM(NVL(LGD3.AMOUNT, 0)) <> CTL.EXTENDED_AMOUNT * DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0)
             OR SUM(NVL(LGD3.ACCTD_AMOUNT, 0)) <> DECODE(LGD3.GL_DATE, REC3.GL_DATE, 1, 0) * DECODE(:B2, NULL, ROUND( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1), :B3 ), ROUND( ( CTL.EXTENDED_AMOUNT * NVL(T.EXCHANGE_RATE,1) ) / :B2 ) * :B2 ) )
       UNION
       SELECT /*+ index(rec5 RA_CUST_TRX_LINE_GL_DIST_N6) INDEX (lgd5 ra_cust_trx_line_gl_dist_n6) index(ctl2 ra_customer_trx_lines_u1) */
              TO_NUMBER( MIN(DECODE(LGD5.GL_POSTED_DATE||LGD5.ACCOUNT_CLASS||LGD5.ACCOUNT_SET_FLAG,
                                    'REVN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'REVY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'TAXN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'TAXY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'FREIGHTN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'FREIGHTY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'CHARGESN', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'CHARGESY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'UNEARNY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    'UNBILLY', LGD5.CUST_TRX_LINE_GL_DIST_ID,
                                    NULL ) ) )
       FROM RA_CUST_TRX_LINE_GL_DIST LGD5, RA_CUST_TRX_LINE_GL_DIST REC5, RA_CUSTOMER_TRX_LINES CTL2, RA_CUSTOMER_TRX T
       WHERE T.REQUEST_ID = :B5
         AND T.CUSTOMER_TRX_ID = REC5.CUSTOMER_TRX_ID
         AND CTL2.CUSTOMER_TRX_LINE_ID = LGD5.CUSTOMER_TRX_LINE_ID
         AND REC5.CUSTOMER_TRX_ID = LGD5.CUSTOMER_TRX_ID
         AND REC5.ACCOUNT_CLASS = 'REC'
         AND REC5.LATEST_REC_FLAG = 'Y'
         AND (CTL2.LINE_TYPE IN ( 'TAX','FREIGHT','CHARGES','SUSPENSE')
              OR (CTL2.LINE_TYPE = 'LINE' AND (CTL2.ACCOUNTING_RULE_ID IS NULL OR LGD5.ACCOUNT_SET_FLAG = 'Y' )))
       GROUP BY LGD5.CUSTOMER_TRX_LINE_ID, LGD5.GL_DATE, REC5.GL_DATE, LGD5.ACCOUNT_SET_FLAG, DECODE(LGD5.ACCOUNT_SET_FLAG, 'N', NULL, LGD5.ACCOUNT_CLASS)
       HAVING SUM(NVL(LGD5.PERCENT, 0)) <> DECODE( NVL(LGD5.GL_DATE, REC5.GL_DATE), REC5.GL_DATE, 100, 0) )
    I understand that this could be a seeded query, but my attempt is to tune it; a starting-point sketch follows below.
    Regards
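    Not part of the thread, but as a hedged starting point: one standard way to go beyond a bare explain plan when tuning a statement like this is to compare the optimizer's estimated row counts with the actual ones from a real execution. A minimal sketch using the documented DBMS_XPLAN API follows; the LIKE pattern is only an assumption for locating the statement.
    -- Find the SQL_ID of the problem statement in the shared pool.
    SELECT sql_id, child_number
    FROM v$sql
    WHERE sql_text LIKE 'UPDATE RA_CUST_TRX_LINE_GL_DIST%';
    -- With STATISTICS_LEVEL = ALL (or a test run carrying the
    -- /*+ GATHER_PLAN_STATISTICS */ hint), the 'ALLSTATS LAST' format
    -- shows estimated (E-Rows) versus actual (A-Rows) row counts, which
    -- is where cardinality misestimates become visible.
    SELECT *
    FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(:sql_id, :child_number, 'ALLSTATS LAST'));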

  • Simple APD is taking too much time to run

    Hi All,
    We have an APD on our development system which is taking too much time to run.
    This APD fetches data from a query returning only 1,200 records and writes it directly into a master-data attribute.
    The query runs fine in transaction RSRT and gives output within 5 seconds, but in the APD even displaying data over the query takes too much time.
    The APD takes around 1.20 hours to run.
    Thanks in advance!!

    Hi,
    When a query runs in an APD, it normally takes much, much longer than it takes in RSRT. Run times such as what you are describing (5 secs in RSRT and >1.5 hrs in APD) are quite normal; I've seen some of my queries run for several hours in an APD as well.
    You just have to wait for it to complete.
    Regards,
    Suhas

  • PRS-600 e-reader taking too much time to load all the books

    Whenever I start my PRS-600 from a complete shutdown, it takes too much time to load all the files on the memory card. Is anyone else facing the same issue?

    Hello,
    Recently I have been having some serious issues with my Sony PRS-600. I am running Windows 7 64-bit and an updated Reader Library 3.3.
    The issue comes when transferring books from the library to the e-reader, and from the e-reader to a collection. The problem is that the software becomes intolerably slow while it's processing the command. The Reader Library window grays out, displays "(Not Responding)", and if clicked, shades to a white color and offers either "cancel operation" or "wait until program responds". If I do cancel the operation, the e-reader doesn't seem to follow it and still displays "Do not disconnect". Since I do not see any other way to disconnect (other than the eject option), I remove the USB plug, which causes a few more issues with the reader (such as removing all of my collections, for example!).
    But anyway, that's not the main issue here. The main issue is that book transferring is really slow. I need to wait a couple of minutes (or even more) just for the software to process. Moving just 1 MB of data takes as much time as if it were 1 GB. Sometimes, at random, it is fast, and sometimes the application is better not dealt with at all while it's processing a command. If I inspect My Computer, simply loading the e-reader storage icons and information makes Windows Explorer "crash" (e.g., close all windows and then reopen them). At random, even creating a collection makes the software slow.
    So to recap: the reader software is slow when adding and moving books.
    I hope someone will help me resolve this annoyance.
    Thank you,
    KQ

  • The queries take too much time

    Hi,
    I want to compare query times between Spatial and PostGIS, but Spatial takes too much time. I created spatial indexes in both databases (a typical setup is sketched just after this post), so I have no idea where the problem could be :-(. I'm actually a student and am doing this for my essay, but I started with Oracle by myself, so I'll appreciate some help.
    I have a table with ca. 7,000 polygons, and I want to create a new table that dissolves these polygons by the attribute katuze.
    The query in Spatial takes 8506 s:
    CREATE TABLE dp_diss_p AS
    SELECT KATUZE_KOD, SDO_AGGR_UNION(
    MDSYS.SDOAGGRTYPE(a.geom, 0.005)) GEOM
    FROM dp_plochy a GROUP BY a.KATUZE_KOD;
    The query in PostGIS takes 10.149 s:
    CREATE TABLE plochy_diss AS
    SELECT st_union(geom) AS geom
    FROM plochy
    GROUP BY katuze_kod;
    I really don't know what to do; could somebody please give me some advice? :-( Please excuse my English.
    Thx Eva
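    For reference, a hedged sketch of how a spatial index is typically set up in Oracle Spatial: the geometry metadata must be registered before the index can be created. The DIMINFO bounds below are placeholders that must match the real coordinate range of the data, and the index name is made up.
    -- Register dimension metadata for the geometry column
    -- (required before creating a spatial index). Bounds are placeholders.
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
    VALUES ('DP_PLOCHY', 'GEOM',
            SDO_DIM_ARRAY(
              SDO_DIM_ELEMENT('X', 0, 1000000, 0.005),
              SDO_DIM_ELEMENT('Y', 0, 1000000, 0.005)),
            NULL);
    -- Create the spatial index itself.
    CREATE INDEX dp_plochy_sidx ON dp_plochy (geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    Note that SDO_AGGR_UNION itself does not use the spatial index; the index mainly speeds up spatial predicates, so a slow aggregate union is not necessarily an indexing problem.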

    I tried using SDO_AGGR_SET_UNION like this:
    CREATE OR REPLACE FUNCTION get_geom_set (table_name  VARCHAR2,
                                             column_name VARCHAR2,
                                             predicate   VARCHAR2 := NULL)
      RETURN SDO_GEOMETRY_ARRAY DETERMINISTIC
    AS
      TYPE cursor_type IS REF CURSOR;
      query_crs    cursor_type;
      g            SDO_GEOMETRY;
      GeometryArr  SDO_GEOMETRY_ARRAY;
      where_clause VARCHAR2(2000);
    BEGIN
      -- Build an optional WHERE clause from the caller-supplied predicate.
      IF predicate IS NULL THEN
        where_clause := NULL;
      ELSE
        where_clause := ' WHERE ';
      END IF;
      GeometryArr := SDO_GEOMETRY_ARRAY();
      -- Fetch each geometry and append it to the collection.
      OPEN query_crs FOR ' SELECT ' || column_name ||
                         ' FROM ' || table_name ||
                         where_clause || predicate;
      LOOP
        FETCH query_crs INTO g;
        EXIT WHEN query_crs%NOTFOUND;
        GeometryArr.extend;
        GeometryArr(GeometryArr.count) := g;
      END LOOP;
      RETURN GeometryArr;
    END;
    CREATE TABLE dp_diss_p AS
    SELECT KATUZE_KOD, sdo_aggr_set_union (get_geom_set ('dp_plochy', 'geom','CTVUK_KOD <>''1'''), .0005 ) geom
    FROM dp_plochy a GROUP BY a.KATUZE_KOD;
    It takes 21.721 s, but it returns a collection type and I can't export it to *.shp because GeoRaptor exports it as 'unknown'. Does anybody have an idea what I can do with it to export it as a polygon?
    Eva

  • Taking too much time to render the OAF page from JDeveloper

    Hi Gurus:
    It takes 30 to 40 minutes to run an OAF page from JDeveloper. My JDeveloper version is 10.1.3.3.0, and I connect to a remote DB server; I don't use a VPN. One of my colleagues in the same office takes about 8 minutes to run an OAF page. It's painful for it to take this much time; can anyone shed some light on this?
    Regards,
    Jiang

    Thanks John.
    I will move my question to Technology - OA Framework
    Regards,
    Flywin Jiang
