Performance issues related to logging (ForceSingleTraceFile option)

Dear SDN members,
I have a question about logging.
I would like to place the logs/traces for every application in separate log files. To do this, you have to set the ForceSingleTraceFile option to NO (in the Config Tool).
But an SAP presentation named "SAP Web Application Server 6.40: SAP Logging and Tracing API" states:
- All traces by default go to the default trace file.
     - Good for performance
          - On production systems, this is a must!!!
- Hard to find your trace messages
- Solution: Configure development systems to pipe traces and logs for applications to their own specific trace file
But I also want the logs/traces in separate files at our customers' sites (production systems). So my question is:
What performance issues will we face if we set the ForceSingleTraceFile option to NO at our customers?
and
If we set ForceSingleTraceFile to NO, will the logs/traces of the SAP applications also go to different files? If so, I can imagine that it will be difficult to find the logs of the different SAP applications.
I hope that someone can clarify how the ForceSingleTraceFile setting works.
Kind regards,
Marinus Geuze

Dear Marinus,
The performance issues with extensive logging are related to high memory usage (for the concatenation/generation of the messages written to the log files) and, as a result, increased garbage collection frequency, as well as high disk I/O and CPU overhead for the actual logging.
Writing to the same trace file can become a bottleneck if logging is extensive.
In any case, it is not related to whether you write the logs to the default trace file or to a separate location. I believe the recommendation in the documentation is simply about using the standard logging APIs of the SAP Java server, because they are well optimized.
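As an illustration only, assuming the familiar java.util.logging API rather than the SAP logging API itself, piping one application's messages to its own trace file looks roughly like this (the logger category and file name are invented):

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

// Sketch of the per-application setup that ForceSingleTraceFile=NO permits:
// one log category writes to its own file instead of the shared default trace.
public class PerAppTrace {
    public static void main(String[] args) throws IOException {
        Logger appLog = Logger.getLogger("com.example.myapp"); // hypothetical category
        FileHandler ownFile = new FileHandler("myapp.trc", true); // dedicated trace file
        ownFile.setFormatter(new SimpleFormatter());
        appLog.addHandler(ownFile);
        appLog.setUseParentHandlers(false); // stop records also going to the shared default log
        appLog.info("This record lands only in myapp.trc");
        ownFile.close();
    }
}
```

The performance trade-off Sylvia describes lives in the handlers: every attached file handler is extra I/O per record, which is why a single shared trace file is cheaper but harder to search.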
Best regards,
Sylvia

Similar Messages

  • Performance issues -- related to printing

    Hi All,
I am having production system performance issues related to printing. End users are telling me that printing is slow for almost all printers. We have more than 40 to 50 printers in the landscape.
As per my primary investigation I didn't find any issues in the TSP01 & TSP02 tables. But I can see that the TST01 and TST03 tables have a large number of entries (more than a lakh). I don't have any idea about these tables. Is there anything related to these tables that causes the print slowness, or do other factors also contribute to this printing issue? Please advise.
    thanks in advance

    Hai,
Check the links below:
    http://help.sap.com/saphelp_nw70/helpdata/en/c1/1cca3bdcd73743e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/04ca3bb6f8c21de10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/86/1ccb3b560f194ce10000000a114084/content.htm
    TemSe cannot administer objects that require more than two gigabytes of storage space, regardless of whether the objects are stored in the database or in the file system. Spool requests of a size greater than two gigabytes must therefore be split into several smaller requests.
It is enough if you perform the regular background jobs and the TemSe consistency checks for these tables.
This will help in controlling the capacity problems.
If you change the profile parameter rspo/store_location to the value 'G', performance will improve. The disadvantages: TemSe data must be backed up and restored separately from the database using operating system tools, and in the event of problems it can be difficult to restore consistency between the data held in files and the TemSe's object management in the database. You also have to take care of the hard disk requirements, because in some cases spool data may occupy several hundred megabytes of disk storage. If you use the 'G' option, you must ensure that enough free disk space is available for spool data.
    Regards,
    Yoganand.V

  • Performance issue related to OWM? Oracle version is 10.2.0.4

The optimizer picks a hash join instead of a nested loop for queries involving OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just in ours. If you have seen this and know how to solve it, that would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
    Thanks for any possible answers!

    Ha, sounded like you knew what I was talking about :)
I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they use regular tables. There is a batch job which used to take an hour to run and now takes 4.5 hours. I rewrote the job to move the queries from OWM to regular tables, and it takes 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with a large amount of data, I got the same full-table-scan problem with hash joins. So I'm convinced that it probably is not OWM. But the patch removing the bug fix didn't help the situation here either.
I was hoping that other companies might have this problem and a workaround for it. If it's not OWM, I'm surprised that this only happens in our system.
    Thanks for the reply anyway!

  • Performance ISSUE related to AGGREGATE

Hi gems, can anybody give a list of the issues we can face related to AGGREGATE maintenance in a support project?
It is very urgent, please respond.
Please send me any link or anything on this.
My mail id is
        [email protected]

    Hi,
    Try this.
    "---" sign is the valuation of the aggregate. You can say -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and access is also more (in effect, performance is good). If you check its compression ratio, it must be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good).The more is the positives...more is useful the aggregate and more it satisfies the number of queries. The greater the number of minus signs, the worse the evaluation of the aggregate. The larger the number of plus signs, the better the evaluation of the aggregate.
    if "-----" then it means it just an overhead. Aggregate can potentially be deleted and "+++++" means Aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you whether the query hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Use the report RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
Check SE11 > table RSDDAGGRDIR. You can find the last call-up of each aggregate in this table.
    Generate Report in RSRT 
    http://help.sap.com/saphelp_nw04/helpdata/en/74/e8caaea70d7a41b03dc82637ae0fa5/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    /people/juergen.noe/blog/2007/12/13/overview-important-bi-performance-transactions
    /people/prakash.darji/blog/2006/01/26/query-optimization
    Cube Performance
    /thread/785462 [original link is broken]
    Thanks,
    JituK

  • VC - Compile and Deploy performance issues related to UserID

    Dear Guru's,
    I'm currently working at a customer where a small team of 4 is working with VC 7.0.
One user has very long compile and deploy times. We first thought that it was related to his workstation.
Then one of the other guys logged in on his PC and ran the compile + deploy, and then it suddenly took seconds again.
So we created a new userID for this user, "<oldUI>+test", and suddenly all was back to normal for him.
But, and here it comes: we deleted his old userID and created it again, yet the issue is still there.
So my assumption is that there is some kind of faulty record or index or something else strange linked to his userID.
    What can this be and how can we solve it?
    Thanks in advance!
    Benjamin

    Hi Anja,
    We use VC on 7.0 and we do not have any integration with the DTR.
    So in other words we use the default way of working with VC.
The user had his models in his Personal folder and then moved them to the Public folder so that other colleagues could see/try them as well. It doesn't matter where the model is stored (Public or Personal): as long as this specific userID is used, compiling/deploying goes very slowly. The log files do not give much info on why this happens...
    Cheers,
    Benjamin

  • Performance Issue with application logs

    Hi
Has anyone come across, or is anyone aware of how to address, an issue in Event Management on the R/3 as well as the SCM side where the application log tables (BALDAT) get loaded, causing performance issues for the system? I have checked the SAP notes in this connection, but they are not of any help. Any event change causes a lot of log registration.
    Thanks in advance
    Anders.

    Hello Anders,
On the R/3 side you can disable logs in transaction /SAPTRX/ASC0AO at the AO type and event type level.
In the EM/SCM system you have options per event handler type to control the log in transaction /SAPTRX/TSC0TT.
Logs for event messages can be disabled in transaction /SAPTRX/TSC0MBF.
The recommendation is to disable all logging in big production environments and only enable it when you have problems.
    Best regards,
    Steffen

  • Performance issue related to EP 6.0 SP2 Patch 5

We have implemented SAP EP 6.0 SP2 Patch 5. We have also configured IIS 6.0 to access our portal from the internet.
When we access the portal from the internet, it is very slow. Sometimes pages take 5-10 minutes to load.
I am using the caching technique for the iViews. I wanted to know whether it is a good idea to use caching, because it is taking a lot of time to load the iViews.
    I would really appreciate any coments or suggestions.

    Paritosh,
I think you need to analyze the issue step by step, as the response time seems to be very high. Here are a few suggestions. A high response time could be due to many factors: server side, network, and client browser settings. Let us analyze the case step by step.
1) Do a basic test accessing the EP within the same network (LAN), to eliminate the network and verify that everything works fine within the LAN.
2) If performance is not acceptable within the LAN, then accessing over the WAN or internet will not be better anyway. If LAN performance is not acceptable (this requires that you know the acceptable response time, say 5 seconds or so), you need to find out whether you have large contents in the page you are accessing. You need to know how many iViews you have in the page. What kind of iViews are they: do they go to a backend system? If they go to the backend, how do they connect: are they using ITS or JCo-RFC? If they go through ITS, how about accessing the same page directly via ITS; do you get the same problem? If you go via JCo, have you monitored the RFC traffic (size of data and number of round trips, using ST05)?
There could be many other potential issues. Have you done proper tuning of the EP for JVM parameters, threads, etc.? Are you using keep-alive settings in the dispatcher, firewall, and load balancer (if any)? Do you have compression enabled in the J2EE server? Do you use content expiration at the J2EE server? How are your browser's cache settings?
In summary, we would like to start with the EP landscape with all components. We need to make sure that the response time is acceptable within the LAN. If we are happy with that, we can look into the network part for WAN/internet performance.
Hope this gives you a few starting points. Once you provide more information, we can follow up.
    Thanks,
    Swapan

  • Performance issue related to Wrapper and variable value retrievel

If I have an array of int (a primitive array) and on the other hand an array of its corresponding wrapper class, is there any performance difference between these two cases when working with them? If in my code I am doing conversion from primitive to wrapper objects, does that affect my performance, given that there is already the concept of auto-boxing?
Another issue: if I access the value of a variable name (defined in the superclass) in a subclass via 'this.getName()' rather than 'this.name', is there any performance difference between the two cases?

If I have an array of int (primitive array) and on the other hand an array of its corresponding wrapper class, is there any performance difference between these 2 cases? If in my code I am doing conversion from primitive to wrapper objects, is that affecting my performance, as there is already the concept of auto-boxing?
I'm sure there is. It's probably not worth worrying about until you profile your application and determine it's actually an issue.
Another issue is that if I access the value of a variable name (defined in the superclass) in a subclass by 'this.getName()' rather than 'this.name', is there any performance difference between the 2 cases?
Probably, but that also depends on what precisely getName() is doing, doesn't it? This is a rather silly thing to be worrying about.
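As a rough illustration of the boxing cost in question (all names here are made up), the wrapper version pays an Integer.valueOf on every store and an intValue() on every read, while the primitive version does plain int arithmetic:

```java
// Summing an int[] versus an Integer[] of the same values: the result is
// identical, only the per-element boxing/unboxing cost differs.
public class BoxingSketch {
    static long sumPrimitive(int[] a) {
        long s = 0;
        for (int v : a) s += v;          // no boxing: plain int arithmetic
        return s;
    }

    static long sumWrapper(Integer[] a) {
        long s = 0;
        for (Integer v : a) s += v;      // auto-unboxing: v.intValue() per element
        return s;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] prim = new int[n];
        Integer[] boxed = new Integer[n];
        for (int i = 0; i < n; i++) {
            prim[i] = i;
            boxed[i] = i;                // auto-boxing: Integer.valueOf(i)
        }
        System.out.println(sumPrimitive(prim) == sumWrapper(boxed)); // prints: true
    }
}
```

As the reply says, whether the difference matters should be decided by profiling, not guessed in advance.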

  • Performance Issue related to SAP EP

    Hi All,
Performance-wise, I would like to know which one is better: SAP EP or SAP GUI?
Also, how good is SAP EP at handling large-scale data entry transactions and printing jobs?


  • Performance Issue: Wait event "log file sync" and "Execute to Parse %"

    In one of our test environments users are complaining about slow response.
    In statspack report folowing are the top-5 wait events
    Event Waits Time (cs) Wt Time
    log file parallel write 1,046 988 37.71
    log file sync 775 774 29.54
    db file scattered read 4,946 248 9.47
    db file parallel write 66 248 9.47
    control file parallel write 188 152 5.80
And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
When I view v$sql, the following statement is parsed frequently:
    EXECUTIONS PARSE_CALLS
    SQL_TEXT
    93380 93380
    select SEQ_ORDO_PRC.nextval from DUAL
Please suggest what the method to troubleshoot this should be, and whether I need to check some more information.
    Regards,
    Sudhanshu Bhandari

    Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
    It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits.

  • Performance issue related to BSIS table:pls help

There's a select statement which fetches data from the BSIS table.
As the only key field used in the WHERE clause is BUKRS, it is consuming a lot of time. Below is the code.
Could you please tell me how to improve this piece of code?
I tried to fetch first from the BKPF table based on the selection screen parameter t001-bukrs, and then fetch from BSIS with FOR ALL ENTRIES in BKPF. But it didn't work.
Your help would be very much appreciated. Thanks in advance.
      SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
    SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
    shkzg mwskz dmbtr wrbtr wmwst prctr kostl
               FROM bsis
               INTO TABLE i_bsis
               FOR ALL ENTRIES IN i_ccode
               WHERE bukrs EQ i_ccode-bukrs
               AND   budat IN i_date.
    Regards
    Akmal
    Moved by moderator to the correct forum
    Edited by: Matt on Nov 6, 2008 4:10 PM

Don't go for FOR ALL ENTRIES; it will not help in this case. Do it as below, and you can see a lot of performance improvement.
    SELECT bukrs waers ktopl periv
             FROM t001
             INTO TABLE i_ccode
             WHERE bukrs IN s_bukrs.
SORT i_ccode BY bukrs.
LOOP AT i_ccode.
  SELECT bukrs hkont gjahr belnr buzei bldat waers blart monat bschl
         shkzg mwskz dmbtr wrbtr wmwst prctr kostl
         FROM bsis
         APPENDING TABLE i_bsis
         WHERE bukrs EQ i_ccode-bukrs
         AND   budat IN i_date.
ENDLOOP.
I don't know why performance is better for the above query than for "bukrs IN s_bukrs", but this will help, I'm sure; this approach helped me.
    Edited by: Karthik Arunachalam on Nov 6, 2008 8:52 PM

  • Performance Issue related to RFC

    Hi All,
I am moving attachments from CRM to R/3. For this I am using an RFC. If I am attaching multiple files at a time, do I need to call the RFC in a loop, or should I call it once for all attachments? Which gives better performance?
One more thing: if I call the RFC in SYNCHRONOUS mode, what happens if the server on the other side is down for two to three days?
If I call the RFC in ASYNCHRONOUS mode, I need to work with the return values of the RFC. How do I handle this situation?
Please reply as early as possible.
    Thanks,
    Saritha

    Hi,
If an RFC channel already exists between the client and server, the same channel will be used between the systems. Hence, even calling in a loop should not be a problem, but the data then goes one by one through the channel. Try to send the attachments in a table, as this goes as one chunk of data.
In the ASYNCHRONOUS case also, if you want to receive results then the called system should be up. For this the syntax is:
CALL FUNCTION 'FM' STARTING NEW TASK taskname
  DESTINATION <dest>
  PERFORMING <form> ON END OF TASK
  EXPORTING
    ...
  EXCEPTIONS
    ...
FORM <form> USING taskname.
  RECEIVE RESULTS FROM FUNCTION 'FM'
    IMPORTING
      ...
ENDFORM.
    But in any case, the called system should be open for connections.
    Try if possible tRFC calls.
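The difference between calling in a loop and sending one table can be sketched generically (the function module here is a made-up counting stub, not a real SAP API; each real call pays a fixed round-trip cost):

```java
import java.util.ArrayList;
import java.util.List;

// One call carrying a table of attachments versus one call per attachment:
// the payload is the same, only the number of round trips differs.
public class RfcBatchingSketch {
    static int roundTrips = 0;                  // stands in for network round trips

    // hypothetical stand-in for a remote-enabled function module
    // that accepts a table of attachments
    static void sendAttachments(List<String> attachments) {
        roundTrips++;                           // one round trip per call
    }

    // calling once per attachment, as in a loop over the files
    static int loopedCalls(List<String> files) {
        roundTrips = 0;
        for (String f : files) sendAttachments(List.of(f));
        return roundTrips;
    }

    // calling once with all attachments in a single table
    static int batchedCalls(List<String> files) {
        roundTrips = 0;
        sendAttachments(new ArrayList<>(files));
        return roundTrips;
    }

    public static void main(String[] args) {
        List<String> files = List.of("a.pdf", "b.pdf", "c.pdf");
        System.out.println(loopedCalls(files) + " vs " + batchedCalls(files)); // prints: 3 vs 1
    }
}
```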
    Regards,
    Goutham

  • Query on Performance issues relating to a report

    Hi Group,
I have an issue while running a report which creates Business Partners (both the company and the contact person, as well as the relationship between them).
This report calls BAPIs (for creating the Business Partners and for creating the relationships), and the report is taking too much response time.
I was thinking the reason is the BAPI calls, but I want to know from you whether that is the real cause or whether it might be something else.
So please let me know your inputs on this.
    thanks in advance.
    Regards,
    Vishnu.

    Hi
I think it's always better to use the provided standard FMs and BAPIs to make changes to the data in the system instead of writing directly to the tables.
One thing you can do is use parallel processing. E.g. if 10,000 BPs should be created, schedule 4 jobs to create the BPs instead of 1 job creating the whole lot.
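As a generic sketch of that parallel-processing suggestion (the BAPI call is replaced by a hypothetical stub, and threads stand in for what would really be separate scheduled background jobs), splitting the workload four ways looks like:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Split one big create-BP workload into several workers running in parallel.
public class ParallelJobsSketch {
    static void createPartner(int id) { /* stands in for the create-BP BAPI call */ }

    // split 'total' partners evenly over 'jobs' workers; returns how many were created
    static int runJobs(int total, int jobs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(jobs);
        List<Future<Integer>> results = new ArrayList<>();
        int chunk = total / jobs;                      // assumes an even split
        for (int j = 0; j < jobs; j++) {
            final int from = j * chunk, to = from + chunk;
            results.add(pool.submit(() -> {
                for (int i = from; i < to; i++) createPartner(i);
                return to - from;                      // partners created by this job
            }));
        }
        int created = 0;
        for (Future<Integer> f : results) created += f.get();
        pool.shutdown();
        return created;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runJobs(10_000, 4));        // prints: 10000
    }
}
```

The gain comes from the four workers posting through the BAPI layer concurrently; in a real system the split would typically be by number ranges or selection intervals.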
    Kind regards, Rob Dielemans

  • Heaviest performance issue impacting business Free Pct showing 9.41%

    Dears,
We are facing a heavy performance issue on our database and application. Our key users (Oracle techno-functional consultants) complain that when "Free Pct" was 18% performance was excellent, but now "Free Pct" has reached 9.41%.
I am wondering whether any performance tuning steps can be taken to get back to 18%; your valuable help is required.
    Following are the details:-
Tablespace | Size MB | Free MB | Used MB | Free Pct | Used Pct | Max MB
APPS_TS_TX_IDX | 84780.88 | 7973.75 | 76807.13 | 9.41 | 90.41 | 84780.88
APPS_TS_TX_DATA | 120540.25 | 16301.88 | 104238.38 | 13.52 | 86.48 | 149276.23
Do I have to rebuild the index tablespace? If yes, what precautions should I take while doing this? For example, do I need downtime from the business?
Will the command below, if executed, resolve the issue:
alter index apps.APPS_TS_TX_IDX rebuild;
    Kind regards,
    Mohammed
Edited by: user9007339 on 28/01/2013 04:22 AM
Edited by: user9007339 on 28/01/2013 04:52 AM
Edited by: user9007339 on 28/01/2013 05:48 AM

    Hi Helios,
    I shall certainly update, here are the details and feedback of our SR:-
    ======================================================================================================
    1, SR 3-6701198251 : Oracle Application PRODUCTION SHUTDOWN THREE TIMES
    === Data Collected ===
    Findings and Recommendations
    Finding 1: Commits and Rollbacks
    Impact is 4.76 active sessions, 52.92% of total activity.
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    Recommendation 1: Host Configuration
    Estimated benefit is 4.76 active sessions, 52.92% of total activity.
    Action
    Investigate the possibility of improving the performance of I/O to the
    online redo log files.
    Rationale
    The average size of writes to the online redo log files was 1788 K and
    the average time per write was 2766 milliseconds.
    Symptoms That Led to the Finding:
    Wait class "Commit" was consuming significant database time.
    Impact is 4.76 active sessions, 52.92% of total activity.
    Finding 2: Top SQL by DB Time
    Impact is 3.86 active sessions, 42.9% of total activity.
    SQL statements consuming significant database time were found.
    Recommendation 1: SQL Tuning
    Estimated benefit is 2.29 active sessions, 25.44% of total activity.
    Action
    Tune the PL/SQL block with SQL_ID "5t39uchjqpyfm". Refer to the "Tuning
    PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide and
    Reference".
    Related Object
    SQL statement with SQL_ID 5t39uchjqpyfm.
    BEGIN xla_accounting_pkg.unit_processor_batch(:errbuf,:rc,:A0,:A1,:A2
    ,:A3,:A4,:A5,:A6,:A7,:A8,:A9,:A10,:A11,:A12,:A13,:A14); END;
    log_buffer     10485760     
    log_checkpoint_interval     100000     
    log_checkpoint_timeout     1200     
    log_checkpoints_to_alert     TRUE
    === Action Plan ===
    Mohammed,
    AWR and ADDM reports clearly point to performance issues around redo logs. Top waits were :
Top 5 Timed Foreground Events
    Event     Waits     Time(s)     Avg wait (ms)     % DB time     Wait Class
    log file sync     1,517     13,265     8744     45.55     Commit
    log buffer space     9,048     8,218     908     28.22     Configuration
    buffer busy waits     6,519     3,743     574     12.85     Concurrency
    DB CPU     2,177     7.48     
    db file sequential read     54,769     540     10     1.85     User I/O
You have three options here:
1> Increase the size of log_buffer. Set it to 15M, and unset the other parameters (log_checkpoint_interval, log_checkpoint_timeout).
2> Increase the size and number of the online redo log files. Make sure that these are on fast disks.
3> Run redo-generating jobs, like XLAACCUP, during off hours when end users are not in the system.
    ======================================================================================================
    Dear Helios,
    We are waiting for the RAM to come from our vendor to increase from 16GB to 32GB on Application Server it will take 3 weeks from now.
Secondly, our key users, especially the Oracle techno-functional consultants and Oracle application developers, are complaining about the APPS_TS_TX_IDX tablespace, whose free percentage has decreased by 50%, from 18% to 9.41%. I was attempting to find out whether there are any possibilities to tune this tablespace. Your suggestion is appreciated.
    Regards,
    Mohammed
Edited by: user9007339 on 29/01/2013 03:02 AM

  • CPU Performance Issue

We are facing a performance issue related to cube refresh. Even a small cube consumes around 70% of the CPU while refreshing.
Details of the cube: this cube has 10 dimensions and 46 straightforward measures (a mix of SUM and AVG as the aggregation algorithm). No compression. The cube is partitioned (48 partitions). The main source of the data is a materialized view which is partitioned in the same way as the cube.
Data volume: 1200 records in the source to be processed daily (almost evenly distributed across the partitions).
    Cube is refreshed using: DBMS_CUBE.BUILD(<<cube_name>>,'SS',true,5,false,true,false);
    Environment - Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 (on AIX 6.1 64 bit), AWM - awm11.2.0.2.0A
    Hardware Configuration
    4.2 GHz, 3.5 CPU Core 32 GB RAM
Has anyone faced a similar kind of issue? Is there any server-level / database-level parameter that needs tweaking to control this behaviour?

    Here is another trick to combine sum and average measures in the same cube. It relies on the AGGCOUNT function in OLAP DML, which you can look up in the reference guide.
    Suppose that you have a cube with two measures, A and B, and that you want to aggregate A using SUM and aggregate B using AVG.
    Step 1: Make the cube be compressed and aggregate all measures (A and B) using SUM.
    If you do this using AWM, then the solve specification should include a MAINTAIN COUNT clause. To double check, look at the user_cubes view for your cube (TEST in my example).
    select consistent_solve_spec from user_cubes where cube_name = 'TEST';
CONSISTENT_SOLVE_SPEC
SOLVE
(
  SUM
    MAINTAIN COUNT
    ALLOW OVERFLOW
    ALLOW DIVISION BY ZERO
    IGNORE NULLS OVER ALL
)
You can hand edit the XML for the cube if this hasn't happened. Here is what you want to see in the XML for the cube.
    <ConsistentSolve>
      <![CDATA[SOLVE
    (
      SUM
        MAINTAIN COUNT
         OVER ALL
    )]]>
Don't worry about the slight difference in syntax -- this is due to different printing routines in the client Java and the server C code.
    Step 2: Verify that the cube's VARIABLE has the WITH AGGCOUNT option.
    My cube is named TEST, so the variable is named TEST_STORED, and my cube is dimensioned by TIME and PRODUCT. You can run this in the OLAP Worksheet.
    dsc test_stored
    DEFINE TEST_STORED VARIABLE LOCKDFN NUMBER WITH NULLTRACKING WITH AGGCOUNT CHANGETRACKING <TEST_PRT_TEMPLATE <TEST_MEASURE_DIM TIME PRODUCT>>
    Step 3: Define a new Calculated Measure, B_AVG, in the cube to get the average for measure B.
    Select "OLAP DML Expression" as the "Calculation Type" and enter the following expression. Obviously you need to adjust for the cube and measure names. I am putting new lines into this for readability.
    QUAL(
      NVL2(AGGCOUNT(TEST_STORED), TEST_STORED / AGGCOUNT(TEST_STORED), TEST_STORED),
    TEST_MEASURE_DIM 'B')
    Step 4: Specify the $LOOP_VAR on the new measure
    Execute the following in the OLAP Worksheet. (Again, correct for measure and cube names.) It instructs the server to loop the cube sparsely. If you don't do this, you will get dense looping and poor query performance. You only need to do this once (per calculated measure). If you save the AW to XML after this, then the LOOP_VAR value will be saved in the XML itself.
call set_property('$LOOP_VAR' 'TEST' 'B_AVG' 'TEST_STORED')
For reporting purposes you should look at measures A and B_AVG.
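The AGGCOUNT trick above can be restated as a plain-Java sketch (data and names invented): the cube stores only the SUM of the measure plus the count of leaf cells aggregated into it, and the average is derived from the two at query time.

```java
// The idea behind MAINTAIN COUNT / AGGCOUNT: store SUM and the aggregated
// cell count, then compute AVG as sum / aggcount in a calculated measure.
public class AggCountSketch {
    static double[] leaves = {10, 20, 60};            // leaf cells of measure B

    static double aggSum() {                          // what the SUM solve stores
        double s = 0;
        for (double v : leaves) s += v;
        return s;
    }

    static int aggCount() {                           // what MAINTAIN COUNT tracks
        return leaves.length;
    }

    public static void main(String[] args) {
        // the calculated measure B_AVG: sum divided by aggcount
        System.out.println(aggSum() / aggCount());    // prints: 30.0
    }
}
```

This is why a single compressed SUM cube can serve both measures: the average never has to be stored, only derived, which is what the QUAL/NVL2 expression above does per cell.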

Maybe you are looking for

  • Item not selected in document

Dear All, while doing MIGO (GR) the message "item not selected in document" appears. Please suggest what I can do to proceed. Thanks and Regards, Baiju

  • File Upload in a Form

    I have a file upload in a form of mine. I can't style the upload to match the rest of my form. I have tried to target the id="FileAttachment" to no avail. I have tried adding classes and nothing responds. In Dreamweaver it shows as styled until it ge

  • 10G RAC OCFS on Itanium Linux-64

    I've spent the last week setting up four Itanium 64-bit Linux servers with OCFS. No problems...however, when I try to install Oracle 10G, (installing CRS First as described in the manual) I can't get past the Public/Private nodes listing. CRS does No

  • Is there another way to open a different iPhoto library?

    Is there another way to open a different iPhoto library besides using the option key and clicking on the iPhoto icon? It takes me upwards of a dozen times to get it to work, so I get that screen where I choose which library I want to open - the optio

  • What  are the usual date field validations

Hi all, can you please tell me what the usual date field validations in the selection screen are? Thanks and regards, Madhavi Pilla