Query reporting performance with exit variables

Hi friends,
In my reporting I am making use of around three customer exit variables.
Basically, I am finding out the role of the logged-in user and his org unit. Straightforward ABAP!
Would it help improve performance?
Thanks,

Shreya,
Customer exit variables will not improve performance. In fact, they can take a considerable amount of time,
depending on their complexity.
As long as the performance is not bad, the requirement
has to be met, and there is no alternative to customer exit variables, it should be OK.
-Doodle

Similar Messages

  • Report performance with Hierarchies

    Hi,
    How can we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the data volume is very big.
    Thanks,
    P G

    Hi,
    Check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than one that receives its result set from database access.
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data.
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° When you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate with the request ID every time you execute a query. By using compression, you can eliminate these disadvantages and bring the data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs; logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° By using partitioning you can split up the whole dataset for an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube. (A generic sketch follows below.)
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
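    As a rough generic illustration of that last point, here is a minimal Oracle sketch with hypothetical table and column names - BW generates the real E fact table DDL itself, typically keyed on 0CALMONTH or 0FISCPER:
    -- Range partitioning by month: queries restricted on the partitioning
    -- column only have to touch the matching partitions (partition pruning).
    CREATE TABLE sales_fact (
        calmonth NUMBER(6) NOT NULL,  -- e.g. 200601 for January 2006
        amount   NUMBER
    )
    PARTITION BY RANGE (calmonth) (
        PARTITION p_200601 VALUES LESS THAN (200602),
        PARTITION p_200602 VALUES LESS THAN (200603),
        PARTITION p_rest   VALUES LESS THAN (MAXVALUE)
    );
    -- Reads only partition p_200601 instead of the whole table:
    SELECT SUM(amount) FROM sales_fact WHERE calmonth = 200601;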
    Hope it helps!
    Thank you,
    dst

  • Report Performance with Bind Variable

    Getting some very odd behaviour with a report in APEX v 3.2.1.00.10
    I have a complex query that takes 5 seconds to return via TOAD, but takes from 5 to 10 minutes in an APEX report.
    I've narrowed it down to one particular bind. If I hard-code the date, it returns in 6 seconds, but if I let the date be passed in from a parameter it takes 5+ minutes again.
    Relevant part of the query (an inline view) is:
    ,(select rglr_lect lect
            ,sum(tpm) mtr_tpm
            ,sum(enrols) mtr_enrols
        from ops_dash_meetings_report
       where meet_ev_date between to_date(:P35_END_DATE,'DD/MM/YYYY') - 363
                              and to_date(:P35_END_DATE,'DD/MM/YYYY')
       group by rglr_lect) RPV
    I've tried replacing the "to_date(:P35_END_DATE,'DD/MM/YYYY') - 363" with another item which is populated with the date required (and verified by checking session state). If I replace the :P35_END_DATE with an actual date the performance is fine again.
    The weird thing is that a trace file shows me exactly the same Explain Plan as the TOAD Explain where it runs in 5 seconds.
    Another odd thing is that another page in my application has the same inline view and doesn't hit the performance problem.
    The trace file did show some control characters (^M, i.e. carriage returns) after each line of this report's query, where these weren't anywhere else in the trace queries. I wondered if there was some sort of corruption in the source?
    No problems due to pagination as the result set is only 31 records and all being displayed.
    Really stumped here. Any advice or pointers would be most welcome.
    Jon.

    Don't worry about the Time column; the cost and cardinality are more important for seeing whether the CBO is making different decisions for whatever reason.
    Remember that the explain plan shows the expected execution plan and a trace shows the actual execution plan. So what you want to do is compare the query with bind variables from an APEX page trace to a trace from TOAD (or sqlplus or whatever). You can do this outside APEX like this...
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 1';
    (now enter and run your SQL statement)
    ALTER SESSION SET sql_trace = FALSE;
    This will create a trace file in the directory returned by:
    SELECT value FROM v$parameter WHERE name = 'user_dump_dest';
    which you can then format with tkprof.
    I am assuming that you're not going over DB links or anything else slightly unusual?
    Cheers
    Ben

  • BEx Report Performance with selection-screen input

    Hello Gurus,
    My BEx report works fine when it is run without the PLANT filter in the selection screen, but when it is run with a plant in the selection screen, the report runs forever.
    Please let me know what I need to do to improve the performance.
    Saleem.

    Hi Saleem, just a few thoughts:
    1. Check the M-table in RSD1 for 0PLANT. In the table view, edit any blank or null values. Run the same restrictions you apply in the query at InfoProvider level > Display Data. If there's any lapse, you can judge where exactly the problem lies.
    2. If you are using an InfoCube and a dimension is more than 20% of the fact table, you can declare the InfoObject's dimension as a 'Line Item Dimension'.
    3. Create variants, especially if you are running the query for the same set of data. Try variable preselection: you can restrict both the values and the variables at the filter level. When you execute, the values will be visibly pre-selected in the selection screen.
    4. As discussed in previous messages, running a SQL trace using RSRT may prove useful.

  • SLOW report performance with bind variable

    Environment: 11.1.0.7.2, Apex 4.01.
    I've got a simplified report page where the report runs slowly compared to running the same query in sqldeveloper. The report region is based on a pl/sql function returning a query. If I use a bind variable in the query inside apex it takes 13 seconds to run, and if I hard code a string it takes only a few hundredths of a second. The query returns one row from a table which has 1.6 million rows. Statistics are up-to-date and the columns in the joins and where clause are indexed.
    I've run traces using p_trace=YES from Apex for both the bind variable and hard coded strings. They are below.
    The sqldeveloper explain plan is identical to the bind variable plan from the trace, yet the query runs in 0.0x seconds in sqldeveloper.
    What is it about bind variable syntax in Apex that is causing the bad execution plan? Apex Bug? 11g bug? Ideas?
    tkprof output from Apex trace with bind variable is below...
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM(:P71_SEARCH_SOURCE1)))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          1         27           0
    Fetch        2     13.15      13.22      67694      72865          0           1
    total        4     13.15      13.23      67694      72866         27           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=72869 pr=67694 pw=0 time=0 us cost=29615 size=14255040 card=178188)
          1   FILTER  (cr=72869 pr=67694 pw=0 time=0 us)
          1    HASH JOIN RIGHT SEMI (cr=72865 pr=67694 pw=0 time=0 us cost=26308 size=14255040 card=178188)
          1     INDEX FAST FULL SCAN IDX$$_0A300001 (cr=18545 pr=13379 pw=0 time=0 us cost=4993 size=2937776 card=183611)(object id 68485)
    1696485     TABLE ACCESS FULL PERSONS (cr=54320 pr=54315 pw=0 time=21965 us cost=14958 size=108575040 card=1696485)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     HASH JOIN (RIGHT SEMI)
          1      INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                     'IDX$$_0A300001' (INDEX)
    1696485      TABLE ACCESS   MODE: ANALYZED (FULL) OF 'PERSONS' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1276        0.00          0.16
      db file sequential read                       812        0.00          0.02
      direct path read                             1552        0.00          0.61
    ********************************************************************************
    Here's the tkprof output with a hard-coded string:
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM('0b')))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.04          0          0          0           0
    Execute      1      0.00       0.00          0          0         13           0
    Fetch        2      0.00       0.00          0          8          0           1
    total        4      0.02       0.04          0          8         13           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=10 pr=0 pw=0 time=0 us cost=9 size=80 card=1)
          1   FILTER  (cr=10 pr=0 pw=0 time=0 us)
          1    NESTED LOOPS  (cr=8 pr=0 pw=0 time=0 us)
          1     NESTED LOOPS  (cr=7 pr=0 pw=0 time=0 us cost=8 size=80 card=1)
          1      SORT UNIQUE (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1       TABLE ACCESS BY INDEX ROWID PERSON_SYSTEMS (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1        INDEX RANGE SCAN IDX_PERSON_SYSTEMS_SOURCE_KEY (cr=3 pr=0 pw=0 time=0 us cost=3 size=0 card=1)(object id 68561)
          1      INDEX UNIQUE SCAN PK_PERSONS (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 68506)
          1     TABLE ACCESS BY INDEX ROWID PERSONS (cr=1 pr=0 pw=0 time=0 us cost=2 size=64 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     NESTED LOOPS
          1      NESTED LOOPS
          1       SORT (UNIQUE)
          1        TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                       'PERSON_SYSTEMS' (TABLE)
          1         INDEX   MODE: ANALYZED (RANGE SCAN) OF
                        'IDX_PERSON_SYSTEMS_SOURCE_KEY' (INDEX)
          1       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_PERSONS'
                      (INDEX (UNIQUE))
          1      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                     'PERSONS' (TABLE)

    Patrick, interesting insight. Thank you.
    The optimizer must be peeking at my bind variables with its eyes closed. I'm the only one testing and I've never passed %anything as a bind value. :)
    Here's what I've learned since my last post:
    I don't think that sqldeveloper is actually using the explain plan it says it is. When I run explain plan in sqldeveloper (with a bind variable) it shows me the exact same plan as Apex with a bind variable. However, when I run autotrace in sqldeveloper, it takes a path that matches the hard coded values, and returns results in half a second. That autotrace run is consistent with actually running the query outside of autotrace. So, I think either sqldeveloper isn't really using bind variables, OR it is using them in some other way that Apex does not, or maybe optimizer peeking works in sqldeveloper?
    Using optimizer hints to tweak the plan helps. I've tried both /*+ FIRST_ROWS */ and /*+ index(ps pk_persons) */ and both drop the query to about a second. However, I'm loath to use hints because of the very dynamic nature of the query (and Tom Kyte doesn't like them either). The hints may end up hurting other variations on the query.
    I also tested the query by wrapping it in a select count(1) from ([long query]) and testing the performance in sqldeveloper and in Apex. The performance in that case is identical with both bind variables and hard coded variables for both Apex and SqlDeveloper. That to me was very interesting and I went so far as to set up two bind variable report regions on the same page. One region wrapped the long query with select count(1) from (...) and the other didn't. The wrapped query ran in 0.01 seconds, the unwrapped took 15ish seconds with no other optimizations. Very strange.
    To get performance up to acceptable levels I have changed my function returning the query to:
    1) Use "=" as the equality operator for values without wildcards and "like" for user input with wildcards. This makes a HUGE difference IF no wildcard is used.
    2) Insert a /*+ FIRST_ROWS */ hint when users choose the column that requires the sub-query. This obviously changes the optimizer's plan and improves query speed from 15 seconds to 1.5 seconds, even with wildcards.
    I will NOT be hard coding any user supplied values in the query string. As you can probably tell by the query, this is an application where sql injection would be very bad.
    Jeff, regarding your question about "like '%' || :P71_SEARCH_SOURCE1 || '%'": I've found that putting wildcards around values, particularly at the beginning, negates any indexing on the column in question and slows performance even more.
    I'm still left wondering if there isn't something in Apex that is breaking the optimizer "peeking" that Patrick describes. Perhaps something in the way it switches contexts from apex_public_user to the workspace schema?
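    For anyone wanting to apply points 1 and 2 above, here is a minimal hypothetical sketch of such a function returning a query (names borrowed from the query earlier in the thread; this is not the poster's actual code). Only the operator is derived from the user input - the value itself stays a bind variable, so there is still no injection risk:
    create or replace function person_search_query (p_search in varchar2)
        return varchar2
    is
        -- "=" when no wildcard was typed, "like" otherwise (point 1 above).
        -- In the real function the /*+ FIRST_ROWS */ hint (point 2) would be
        -- added only when the chosen search column requires the sub-query.
        l_op varchar2(4) := case when instr(p_search, '%') > 0 then 'like' else '=' end;
    begin
        return 'select /*+ FIRST_ROWS */ p.master_id, p.status '
            || 'from persons p '
            || 'where p.person_id in ('
            || '  select ps.person_id from person_systems ps '
            || '  where ps.source_key ' || l_op || ' ltrim(rtrim(:P71_SEARCH_SOURCE1))) '
            || 'order by 1';
    end person_search_query;
    /
    An APEX region of type "PL/SQL function body returning SQL query" could call this with the session state of P71_SEARCH_SOURCE1 to pick the operator, while :P71_SEARCH_SOURCE1 remains bound at execution time.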

  • Interactive Report Performance With Conditional Link

    Apex 3.2
    I have an interactive report.
    The underlying SQL would return 127,000 rows.
    The SQL is:
    select
      lde.ods_system,
      lde.ldekey,
      msg.sendersystem, 
      msg.messagetype,
      msg.messageversion,
      msg.msgseqnumber,
      msg.alternatekey,
      msg.crudmarker,
      msg.clrbookdate,
      msg.clrbookresult,
      lower('udf_'||msg.messagetype) button,
      lde.ldekey||'.'||msg.alternatekey||'.'||msg.msgseqnumber udm_key
    from
      clr_esbmessageheader msg,
      clr_adm_systemmessage adm,
      udm_lde lde
    where
      adm.ldeid = lde.ldeid and
      msg.sendersystem = adm.system and
      msg.messagetype = adm.messagetype and
      msg.messageversion = adm.messageversion and
      msg.receiversystem = 'SCIPS'
    order by msg.clrbookdate desc
    This report only takes 1 second to display.
    I need to add a conditional link to another page, so I used:
    case
      when lower('udf_'||msg.messagetype) = 'udf_distreceipt' then
        '<a class="type" href="'
        || apex_util.prepare_url('f?p='||:APP_ID||':52:'||:APP_SESSION||'::'||:DEBUG||':RIR'||':IR_MSG_KEY,P52_PG:'|| lde.ldekey||'.'|| msg.alternatekey ||'.'|| msg.msgseqnumber ||','|| 50, null, 'SESSION')
        || '" title="Go to udf_distreceipt Report">udf_distreceipt</a>'
      else 'no link'
    end table_link
    The SQL seems to be OK, because the report accepted it, but selecting the new column and saving the report takes forever (over 2 minutes).
    Now the report takes over 2 minutes to run, and I still need to add more conditions.
    Have I coded the link incorrectly?
    Gus

    Hi Gus,
    Are you wanting to put the link in the query for a specific reason?
    I had to do a similar thing in the past and just completed the column link section for the column.
    Why not just have the following in the query:
    case
      when lower('udf_'||msg.messagetype) = 'udf_distreceipt' then
        'udf_distreceipt'
      else null
    end table_link
    Then do the linking using the column link section:
    You would specify your link text as #TABLE_LINK#, which would then be conditionally displayed thanks to the case statement; then add all the page items and values to pass across using a normal column link.
    Thanks
    Paul

  • TS2570 After performing a "disk repair", a message in red indicates "Error: The underlying task reported failure on exit."

    Can someone help me with this issue? I can't install from the disc and restart my Mac; when it verifies permissions for Mac OS X Install Disc 1, the red error message states "The underlying task reported failure on exit." I cannot start up my Mac and I am concerned that I have lost my files.

    Just check that the steps you are taking match this help page:
    Resolve startup issues and perform disk maintenance with Disk Utility and fsck

  • Apex report performance is very poor with apex_item.checkbox row selector.

    Hi,
    I'm working on a report that includes some functionality for selecting multiple records for further processing.
    The report is based on a view that contains a couple of hundred thousand records.
    When I make a selection from this view in SQL*Plus, the performance is acceptable, but the APEX report based on the same view performs very poorly.
    I've noticed that when I omit the apex_item.checkbox from my report query, performance is on par with SQL*Plus (a factor of 10 or so quicker).
    The explain plan appears to be the same with or without the checkbox function in the select.
    My query is:
    select apex_item.checkbox(1, tan_id) "Select",
           brt_id,
           tan_id,
           message_id,
           conversation_id,
           action,
           to_acn_code,
           information,
           brt_created,
           tan_created
    from (select brt.id brt_id,   -- view query
                 max(tan.id) tan_id,
                 brt.message_id,
                 brt.conversation_id,
                 brt.action,
                 tan.to_acn_code,
                 tan.information,
                 brt.created brt_created,
                 tan.created tan_created
          from (select brt_id, id, to_acn_code, information, created
                from xxcjib_transactions
                where tan_type = 'DELIVER' and status = 'FINISHED') tan,
               xxcjib_berichten brt
          where brt.id = tan.brt_id
          group by brt.id,
                   brt.message_id,
                   brt.conversation_id,
                   brt.action,
                   tan.to_acn_code,
                   tan.information,
                   brt.created,
                   tan.created)
    What could be the reason for the poor performance of the apex report?
    And is there another way to select multiple report records without the apex_item.checkbox function?
    I'm using apex 3.2 on oracle 10g database.
    Thanks,
    Niels Ingen Housz
    Edited by: user11986529 on 19-mrt-2010 4:06

    Thanks for your reply.
    Unfortunately, changing the pagination doesn't make much of a difference in this case.
    Without the checkbox the query takes 2 seconds.
    With checkbox it takes well over 30 seconds.
    The second report region on this page based on another view seems to perform reasonably well with or without the checkbox.
    It has about the same number of records but with a different view query.
    There are also a couple of filter items in the where clause of the report queries (the same for both reports) based on date and acn_code, and both reports have a select-list item displayed in their regions based on a simple LOV. These filter items don't seem to influence the performance.
    I have also recreated the report on a separate page without any other page items or where clause, and the same thing occurs.
    With the checkbox it's very, very slow (more like 20 times slower).
    Without it, the report performs well.
    And another thing: when I run the page with debug on, I don't see the actual report query:
    0.08: show report
    0.08: determine column headings
    0.08: activate sort
    0.08: parse query as: APEX_CMA_ONT
    0.09: print column headings
    0.09: rows loop: 30 row(s)
    and then the region is displayed.
    I am using database links in the views, b.t.w.
    Edited by: user11986529 on 19-mrt-2010 7:11
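    As an aside on the question above about avoiding apex_item.checkbox: one commonly used alternative is to emit the checkbox as plain HTML, which avoids calling a PL/SQL function (and paying a SQL-to-PL/SQL context switch) once per row. This is only a sketch - v_transactions is a hypothetical stand-in for the report's actual view, and the column must be displayed unescaped (a standard report column) for the markup to render. Checked values still arrive server-side in the APEX_APPLICATION.G_F01 array, just as with apex_item.checkbox(1, ...):
    -- name="f01" maps to APEX_APPLICATION.G_F01 on submit
    select '<input type="checkbox" name="f01" value="' || tan_id || '" />' as sel,
           brt_id,
           tan_id,
           message_id
    from   v_transactions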

  • Performance with Crystal Reports (based on BW queries)

    Hi,
    I've created some Crystal reports based on BW queries, and I'm really interested in their performance. The BW queries I created are quick (with neither free characteristics nor hierarchies), but when I use them in a Crystal report I see time being lost in the data extraction (BW --> Crystal), in Crystal's handling of the layout, and then in the publication to BOE.
    Being able to use Crystal on BW queries is very useful, but if the response times are too long, that's not good...
    Is there any customizing that can be done to get better performance? Has any analysis already been done from a performance point of view?
    If you have any best practices or advice to shorten the response times of Crystal reports, I'm really interested.
    Thanks for your replies,
    Best regards
    Jonathan

    Hi,
    there are many SDN threads and forums dealing with the integration of BW and Crystal.
    You should have a look at the forums and blogs created by Ingo Hilgefort (SAP expert on the BW-BO integration).
    You can start with that one :
    /people/ingo.hilgefort/blog/2008/09/17/businessobjects-and-sap--installation-and-configuration-part-1-of-4
    Best regards,
    Jonathan

  • The underlying task reported failure on exit (-9972)

    My LaCie has worked in the past,
    but now it will not show up on my OS X desktop.
    I can get to it in OS 9,
    and it shows up in Disk Utility as grey.
    When I try to repair permissions it says
    "The underlying task reported failure on exit (-9972)".
    Does anyone know how I can get it back?

    Hello nooneever,
    You might find the following useful in dealing with your -9972 error (depending on the text which accompanies the error):
    The following was contributed by Fumiaki Kawashima. Edited by Kappy.
    The error message, "Error: The underlying task reported failure on exit (-9972)" is a serious filesystem error in the Mac OS X Core Foundation. The problem can also lead to other critical errors such as "Keys Out of Order," "Invalid node structure" and/or "Invalid sibling link." The causes and scenarios vary. Troubleshooting a solution may depend upon computer configuration and whether the -9972 error is accompanied by other critical errors. This issue can also lead up to a kernel panic. If the error occurs when an external FireWire device is connected, disconnect it until you verify the device's compatibility.
    Symptoms:
    In most cases, you are unable to restart from Mac OS X.
    * A volume is grayed out or not mounted, with or without a kernel panic.
    * A folder with a flashing question mark may appear.
    * A bad partition map may be reported.
    * A target disk mode solution may not work.
    * Most likely, Disk Utility, TechTool and DiskWarrior cannot fix the issues.
    * The high-level disk format (standard format) may be unable to complete.
    * You may be unable to re-initialize the hard drive.
    * A disk physically malfunctions in the worst case.
    Examples of accompanying error messages:
    DiskWarrior normally fixes errors 1 to 6, but cannot fix errors 7 to 10 if the symptoms are very bad. There is no definite case.
    01. Volume check failed
    02. Invalid B-tree Header
    03. Invalid map node
    04. Invalid extents entry
    05. Invalid clump size
    06. Incorrect block count file
    07. Invalid node structure
    08. Overlapped extent allocation
    09. Keys Out of Order
    10. Invalid sibling link
    Possible causes:
    * Third-party FireWire device or enclosure, or other peripheral devices.
    * Third-party mass storage drives or PCI card issues.
    * Incompatible third-party kernel extensions.
    * The Mac OS X installer disc has been improperly handled.
    HTH
    Jeff
    Mini 1.25, 512 AP/BT; 12 Al PB 1.5, 512 AP/BT   Mac OS X (10.3.8)   Wireless KB/Mouse, AEBS, 80 GB OWC FW HD

  • Error: The underlying task reported failure on exit (-9972)

    After the debacle I posted about a few days ago, I was finally able to run scans with Disk Utility. I got this:
    Verifying volume “ROVER”
    ** /dev/disk3s2
    ** Phase 1 - Read FAT
    Unable to read FAT (Input/output error)
    Error: The underlying task reported failure on exit (-9972)
    1 volume checked
         0 HFS volumes verified
         1 volume failed verification
    Repairing disk for “ROVER”
    ** /dev/disk3s2
    ** Phase 1 - Read FAT
    Unable to read FAT (Input/output error)
    Error: The underlying task reported failure on exit (-9972)
    Repair attempted on 1 volume
         0 HFS volumes repaired
         1 volume could not be repaired
    Of course I have no idea what this means, so I'm turning to the forum here.
    FYI: The disk is named ROVER and it was formatted for PC (FAT). I was unable to upload all my music onto the disk and eventually unable to mount it on the desktop or in iTunes.

    Hi Charles,
    This indicates that your disk volume has issues that Disk Utility cannot fix. You have two options:
    1. Back up as much of your important data and files as you can, then try using a third-party disk utility to repair the drive. Be sure that you use one that works with your version of Mac OS X.
    2. Back up as much of your important data and files as you can, then perform an Erase and Install installation of Mac OS X on the affected volume.
    Important: This option completely erases the destination volume. You should always back up important files on the target volume before performing an Erase and Install installation. You can then restore your backed up files afterwards.

  • "verify disk permissions" and "Underlying task reported failure on exit"

    This is more of a "How I fixed this problem, so others can benefit" posting than a current problem.
    I put my MBP to sleep yesterday, and then when I tried to wake it, it just sat there. After about 15 minutes, I figured I'd just force-quit and then restart.
    After hitting the power button, it came up with the gray screen and Apple logo with the spinning gear and basically sat there for 20 minutes. Several reboot attempts were failures; sometimes I got a blue screen with spinning gears, but basically not much else.
    I tried rebooting with the install disk, holding down the C key, but it gave me nothing; tried rebooting in verbose mode; tried doing Command-Option-P-R; nothing, etc. So, I tried rebooting while holding down the Option key, and I could see Disk Utility.
    I ran Disk Utility to verify the disk, and that all came back fine. So, I ran Verify Disk Permissions, and it spat out "Underlying Task Reported Failure on Exit". The phone support people suggested I try a reinstall with archive.
    So, I tried running the install with archiving, and that failed as soon as it verified the DVD was OK - basically it couldn't access the HD.
    So, I rooted around and did the following. From the Install disk, I selected "Terminal". Then, I did:
    cd /Volumes
    ls -lt
    There I noticed that, for some odd reason, Macintosh HD was listed with the following permissions:
    rwxrwx---
    So, admin and group had read-write-exec access to the Mac HD, but others had nothing, which I thought odd. It seems like you should at least be able to boot the computer to the login screen, and if a user is not admin or in the admin group they cannot log in, but I guess that's not the way it's set up.
    So, I tried:
    chmod o+r "Macintosh HD"
    Then, I reran Verify Disk Permissions. Again, I got the same "Underlying..." error. So I tried:
    chmod o+x "Macintosh HD"
    I then reran Verify Disk Permissions, and it ran fine and suggested that the Macintosh HD volume permissions should be set to:
    rwxrwxr-t
    So, I had the Disk Utility repair it to the way it should be, and now it all seems fine.
    This suggestion basically did not come from any Apple Support people or any forums, and it seemed like such an easy fix that I'm surprised nobody (at least nobody I've found) has had the same problem or needed the same fix.
    Further, I think I figured out what happened. I took my computer to my local tech support guys because I wanted to turn off sharing for this MBP. So, under Macintosh HD, somebody changed the permissions for "Everyone" from "Read Only" to "No Access".
    It turns out that doing this basically creates the whole problem above by making the Macintosh HD unmountable or inaccessible.
    I find it truly odd that it would be so easy to remove access to the computer for mounting purposes, but this is the only thing I can think of. The tech guy realized his error and apologized and was also surprised it was so easy to hose the computer.
    Anyway, this is partially a warning to everyone as well as a possible fix for the blue screen problem, especially since it was a solution I could not find anywhere.
    Maybe Apple Support will see this and for its next update at least make a warning box when people try to remove "Read Only" access for "Everyone".
    Good luck.

    If your Mac still boots and runs most applications correctly, but applying Software Update packages fails, and when you run Disk Utility to Verify Permissions it fails almost immediately with the "underlying task reported failure on exit" error, explore this posting. If you see other errors, have slow or varied disk performance, or hear nasty grinding/clicking sounds, you have a much more serious problem and this solution isn't for you.
    Use Applications > Utilities > Console and check the messages appearing there each time you try Disk Utility. If you see:
    Failed to open database on '/'. Error 14, 13, Permission denied.
    You may have had your permissions reset on the receipts db. Which is found in
    /Library/Receipts/db/a.receiptdb
    At this point you have to enter the dark world of the Terminal and use a Unix shell (command line).
    If you are not comfortable doing this, find a Unix/Linux power user.
    The simple fix for me was to open Applications > Utilities > Terminal and run:
    sudo chmod 755 /Library/Receipts/db/a.receiptdb
    This allowed Disk Utility to read the file and move on to repairing permissions.
    When things generate error messages in the GUI part of Mac OS X, always go and check the messages in the logs via the Console utility. These messages are 'down a level' and get you much closer to the real problem.
    Note that I did follow the instructions in Apple kb TS1901
    http://support.apple.com/kb/TS1901
    which did not help at all. Same problem each time:
    xxxx-imac:Volumes user$ diskutil verifyPermissions /
    Started verify/repair permissions on disk disk0s2 Macintosh HD
    $<3>Error -9972: The underlying task reported failure on exit
    $<3>[ + 0%..10%..20%..30%..40%..50%..60%..70%..80%..90%..100% ]
    Finished verify/repair permissions on disk disk0s2 Macintosh HD
    Error detected while verifying/repairing permissions on disk0s2 Macintosh HD: The underlying task reported failure on exit (-9972)
    Now I wonder what changed the permissions on the receipts database file? I am off to use find and date fields to see what changed the permissions file last (mine is dated sometime in November!).
    flatiswhereitsat

  • Report Performance degradation

    Hi,
    We are using around 16 entities in CRM On Demand R16, which includes both default as well as custom entities.
    Since custom entities are not visible in the historical subject area, we decided to stick to real-time reporting.
    Now the issue is, we have a total of 4.5 million (45 lakh) records in these entities as a whole. We have reports where we need to retrieve data across all the entities in one report. Initially we tested the reports with a smaller number of records, and the performance was not that bad, but it has gradually degraded as we loaded more and more data over a period of time. The reports now take approx. 5-10 minutes and then finally display an error message. In fact, after creating the report structure in Step 1 - Define Criteria and moving to Step 2 - Create Layout, it takes an abnormal amount of time to display. As far as the reports are concerned, we have built them using best practices, except for the "Historical Subject Area" issue.
    Ideally, for best performance, how many records should there be in one entity?
    What could be the other reasons for such performance?
    We are working in a multi-tenant environment.
    Edited by: Rita Negi on Dec 13, 2009 5:50 AM

    Rita,
    Any report built over the real-time subject areas will timeout after a period of 10 minutes. Real-time subject areas are really not suited for large reports and you'll find running them also degrades the application performance.
    Things that will degrade performance are:
    * Joins to other dimensions
    * Custom calculations
    * Number of records
    * Number of fields returned
    There are some things that just can't be done in real time. I would look to remove joins to other dimensions, e.g. Accounts/Contacts/Opportunities all in the same report. Apply more restrictive filters, e.g. current week/month, to reduce the number of records required. Alternatively, have a very simple report, extract to Excel and modify from there. Hopefully in R17 this will be added as a feature, but it seems like you're stuck till then.
    Thanks
    Oli @ Innoveer

  • Report performance while creating a report on BEx

    Hi all!
    I am creating a report in BOE 4.0 on top of a BEx connection as the source. I have developed reports on top of universes in the past, and I know that if we keep calculations on the reporting end it hampers report performance. Is this the same case with BEx? If we are following best practices, is it OK to say that we should keep all heavy calculations/aggregation in BEx or the backend for better report performance?
    Can you guys please provide your opinions based on your experience and knowledge. Any feedback will help! Thanks.

    Hi,
    It's definitely best practice to delegate as many calculated key figures (CKFs) as possible to the cube, to put restricted key figures (RKFs) in the BEx query, and filters too.
    Also, add default values to your variables (this will speed up generation of the BICS transient universe).
    Also, since Patch 2.10 we are seeing some significant performance improvements, reducing 'document initialization' and 'time to prompts' by up to 50% (steps such as these often took 1.5 minutes, even on sized systems).
    Also, make sure you have BW corrections like this implemented: Note 1593802 (Performance optimization when loading query views).
    In the BusinessObjects landscape - especially with BI 4.0 - it's all about sizing and tuning. Here is your bible, the 'sizing companion' guide: http://service.sap.com/~form/sapnet?_SHORTKEY=01100035870000738725&_OBJECT=011000358700000307202011E
    Pay particular attention to the BICSChunkSize registry settings,
    and also to the -Xmx JVM heap size for the Adaptive Processing Server that is running the DSL_Bridge service.
    Regards,
    H

  • Bad reporting performance after compressing InfoCubes

    Hi,
    as I learned, we should compress requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as our database, we can use partitioning on the E fact table to increase reporting performance further. So much for the theory...
    After getting complaints about worsening reporting performance, we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube, and with this I tested the performance (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected that cube D would be the fastest, since we only have one (small) partition with relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters needed to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years; the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months.
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query, and the mentioned times are average times over all runs - and the average shows the same picture as the single runs (cube A is always fastest, cube D always the worst).
    Any further ideas?
    Greets,
    Knut
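    A general debugging aside for cases like this: on Oracle you can check directly whether a query against the compressed E fact table really prunes partitions. The object names below are hypothetical - BW typically names E fact tables /BIC/E<cube> or /BI0/E<cube> and partitions them on the SID column of the chosen time characteristic - so treat this as a sketch, not the exact objects from this thread:
    -- List the partitions that were actually created for the E fact table.
    select partition_name, high_value
    from   user_tab_partitions
    where  table_name = '/BIC/EMYCUBE';
    -- Check partition pruning: the Pstart/Pstop columns of the plan output
    -- should show a narrow range when the filter hits the partitioning column.
    explain plan for
      select count(*)
      from   "/BIC/EMYCUBE"
      where  "SID_0FISCPER" = 2005001;  -- hypothetical SID value
    select * from table(dbms_xplan.display);
    If Pstart/Pstop still show the full range despite a selective filter, the filter is not reaching the partitioning column in a form the optimizer can prune on, which would match the behaviour described above.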

Maybe you are looking for

  • No software discs with new Macbook

    I did not receive any software discs with my new MacBook. I understand I should have had 2 discs. How can I remedy this?

  • Have new computer want to install CS5 and deactivate from old. but need an older verison as cs5 is an update

    I have a new computer , want to install CS5 and deactivate from old. Problem is my CS5 is an upgrade and needs a older version of Elements which I no longer have. Any suggestions?

  • Portal Login Broke after Db Upgrade to 9.0.1.3

    Hi -- My portal web page login doesn't work after upgrading my portal database version from 8.1.7.1 to 9.0.1.3. All the scripts I ran (Note 159657.1 and Chap. 7 of 9i Database Migration Manual) ran ok. I also applied whatever patches/fixes required t

  • Missing transport requests

    Hi Sap-experts! in our system is a gap btw. transport request XZ1K900500 and XZ1K904000! the requests seem to be deleted or number range was manipulated! where can I find these missing records  ? there's no track in table E070/1... greetings Andreras

  • Please help ... Photos won't load

    When I open iPhoto, I get the spinning wheel and the message Loading photos, but they don't load. I tried restarting, still not loading. Help please! power mac g5   Mac OS X (10.4.7)