SAPXMLTOOLKIT Performance issue on XSLT conversion

Hi,
I am running a large XSLT transformation (source XML ~16MB, a relatively flat record structure from a POS system, converting to an IS Retail WPUBON IDOC - around 5MB of converted XML).
The aim is for this to be portable between SAP Business Connector and SAP XI (both using the SAP XML Toolkit). It was looking good (around 80 seconds conversion time on my laptop) until I got a sapxmltoolkit.jar update - I first saw the problem on SAP BC 4.7, then tracked it down to the version of sapxmltoolkit.
I then looked at command-line performance for various versions of sapxmltoolkit (this used to be the command line from inqmyxml.jar). We use XMLSpy for XSLT development, and you can hook in a command-line call to use the library... sample call:
C:\jdk1.3.1_15\jre\bin\java -ms384M -mx1024M -cp D:\testing\sapxmltoolkit.jar;C:\jdk1.3.1_15\jre\lib\rt.jar;D:\testing\SCTmapping.jar com.sap.engine.lib.xsl.xslt.CommandLine %1 %2 %3 %4
Looking at the PROPERTIES file inside the jar, the following version works well: run time around 80 seconds, completing in 500M of memory.
version=630.20030828104710.0000   GOOD
The next two versions I have behave very badly: run time greater than 10 minutes, and the system pages itself to death, using all 1GB of Java memory (and my laptop only has 1GB).
version=630.20050221090341.0000       BAD
version=630.20040429124817.0000       BAD
I see the latest version available still seems to be stamped 630 (even in the 640 NetWeaver shipment). Who develops this? My samples to replicate this problem are rather large, but I am happy to post them to the site for any interested party.
Even on a 140K sample XML the same ratio holds: conversion goes from a few seconds to minutes, just from a change in sapxmltoolkit version.
Regards, Doug.
mailto:[email protected]

Thanks Alexander,
I have logged this as message 0120025231 / 0000618401 / 2005.
My short-term fix is to run an IDE with a known-good version of sapxmltoolkit (and invoke it separately), avoiding the built-in sapxmltoolkit.jar...
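For anyone wanting to reproduce the comparison outside XMLSpy, below is a minimal JAXP harness (a sketch, not SAP-supplied code): it pins the JAXP TransformerFactory to the toolkit on the classpath and times one transform. The factory class name com.sap.engine.lib.jaxp.TransformerFactoryImpl is an assumption based on the toolkit's package layout - verify it against your jar - and the harness needs a JAXP-capable runtime (JDK 1.4+, or the toolkit jar supplying javax.xml.transform).

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PinnedXsltRun {
    // usage: java PinnedXsltRun source.xml stylesheet.xsl out.xml
    public static void main(String[] args) throws Exception {
        // Force JAXP to use the SAP toolkit factory from -cp instead of
        // whatever implementation the runtime discovers on its own.
        // NOTE: the class name below is an assumption - check your sapxmltoolkit.jar.
        System.setProperty("javax.xml.transform.TransformerFactory",
                "com.sap.engine.lib.jaxp.TransformerFactoryImpl");

        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer t = factory.newTransformer(new StreamSource(new File(args[1])));

        long start = System.currentTimeMillis();
        t.transform(new StreamSource(new File(args[0])),
                new StreamResult(new File(args[2])));
        System.out.println("Transform took "
                + (System.currentTimeMillis() - start) + " ms");
    }
}

Running this once per sapxmltoolkit.jar version (same source XML, same stylesheet) pins the regression to the toolkit version rather than to BC or XI.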

Similar Messages

  • Help with Video Performance Issues using Flash

    Asking on behalf of a customer who has been unable to get any answers so far - are you able to help?
    Background:
    We have a port of our Game Development Kit which allows us to recompile all our games using Crossbridge (http://adobe-flash.github.io/crossbridge/) into SWF without any code modifications.
    Overview:
    Our framework is using OpenGL for rendering and we have successfully ported it along with the audio and video to run in Flash.
We are experiencing performance issues using Video. We cannot use image sequences, as some of the video animations are too long and would increase the download to an unacceptable size. Assets vary between 256x256 and 1024x1024 videos.

    Here's the rest of the story.  Let me know if you can see any resolution, and I will connect him to the forums.  Thank you.
    Current Video Solution:
    We create an instance of NetConnection, NetStream, and Video according to most samples out there, and invoke draw to rasterize the Video DisplayObject into a BitmapData instance.
The BitmapData instance has a fixed color component layout which is not compatible with Stage3D textures and therefore has to be reformatted before being uploaded to a Stage3D Texture (see code listing below).
    Our Problems:
Performance issues with RGBA conversion: neither copyChannel nor manually reformatting pixels is fast enough natively in AS3, and this conversion is required for the Stage3D texture. Copying each channel individually using BitmapData.copyChannel seems faster, but not significantly so.
We cannot detect when the video frame has been updated, so we may copy pixels that are not needed in enterFrame (processPixel).
Looping video: our current solution uses the NET_STATUS "buffer empty" event. Is there a better way to loop videos than checking the buffer and seeking to 0?
Stepping video: loading FLV or MP4 side-by-side assets from HTTP or embedded does not support stepping? Is there another way?
    ActionScript Code Listing:
    video_nc = new NetConnection();
    video_nc.addEventListener(NetStatusEvent.NET_STATUS , onConnect);
    video_nc.addEventListener(AsyncErrorEvent.ASYNC_ERROR , trace);
    video_nc.connect(null);
    // OnConnect Event:
    this.ns = new NetStream(e.target as NetConnection);
    eventclient = new Object();
    eventclient.onMetaData = onMetaData;
    this.ns.client = eventclient;
    ns.play(flvfile);
    ns.pause();
    //onMetaData event:
    this.width = infoObject.width;
    this.height = infoObject.height;
    this.textureWidth = NextPowerOfTwo(this.width);
    this.textureHeight = NextPowerOfTwo(this.height);
cliprect = new Rectangle(0, 0, this.textureWidth, this.textureHeight);
    totalFrames = infoObject.duration * infoObject.fps;
this.hasAlpha = false;
if(infoObject.videocodecid == 5) // codec id 5 = On2 VP6 with alpha channel
this.hasAlpha = true;
    this.bitmapData = new BitmapData(this.textureWidth, this.textureHeight, hasAlpha, 0xff000000);
    this.video = new Video(this.width, this.height);
    this.video.attachNetStream(ns);
    this.video.addEventListener(Event.ENTER_FRAME, processPixels);
    // processPixel method:
this.bitmapData.draw(video); // instance call - rasterize the current video frame
    GLAPI.instance.glBindTexture(GLAPI.GL_TEXTURE_2D,this.textureId);
    var fmt:uint = GLAPI.GL_ARGB;
    // converting pixels using copychannel or loop through pixels
    GLAPI.instance.glBindTexture(GLAPI.GL_TEXTURE_2D,this.textureId);
    GLAPI.instance.glTexImage2D(GLAPI.GL_TEXTURE_2D, 0, fmt, this.textureWidth, this.textureHeight, 0,fmt, GLAPI.GL_UNSIGNED_BYTE, 0, convBitmapData.getPixels(cliprect));

  • Performance issue: Java and XSLT

    I have a performance issue concerning Java and XSLT: my goal is to transform an xml file (source.xml)
    by using a given xsl file (transformation.xsl). As result I would like to get a String object, in which the result
    of the transformation (html-code) is in, so that I can display it in a browser. The problem is the long time
    it takes for the code below to run through.
File xml = new File("C:\\source.xml");
StreamSource xmlSource = new StreamSource(xml);
File xslt = new File("C:\\transformation.xsl");
StreamSource xsltSource = new StreamSource(xslt);
TransformerFactory transFact = TransformerFactory.newInstance();
Transformer trans = transFact.newTransformer(xsltSource);
StringWriter stringWriter = new StringWriter();
StreamResult streamResult = new StreamResult(stringWriter);
trans.transform(xmlSource, streamResult);
String output = stringWriter.toString();
stringWriter.close();
Before, I did the same transformation in an XML development environment named Cooktop (see http://xmlcooktop.com/). The transformation took about 2 seconds. With the code above in Java it takes about 20 seconds.
    Is there a way to make the transformation in Java faster?
    Thanks in advance,
    Marcello
    Oldenburg, Germany
    [email protected]

I haven't tried it, but if you can use Java 6 you could try the new StAX API with XML stream loading.
    Take a look at:
    http://javaboutique.internet.com/tutorials/staxxsl/
    Then, you could cache the xslt in templates:
    ---8<---
    templates = transformerFactory.newTemplates( xsltSource );
    Transformer transformer = templates.newTransformer();
(here you could probably also cache the Transformer object, but I think it's not thread-safe, so it's a little trickier..)
    StreamResult result = new StreamResult( System.out );
transformer.transform(xmlSource, result);
And don't transform your result to a String; use a Stream or something, so the transformer can start pumping out HTML while working. And if you get an out-of-memory error, it looks like you have a pretty big XML file...
If you use JSP you could try the built-in JSP taglib for XML, which I think is rather good; it has support for varReader, which implements StreamSource iirc.
    /perty
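Putting perty's two suggestions together, a complete version of the cached-Templates approach might look like the sketch below (file paths taken from the original post; Templates is thread-safe and shareable, while each Transformer is not):

import java.io.File;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CachedXslt {
    // Compile the stylesheet once; Templates can be shared across threads.
    private static final Templates TEMPLATES;
    static {
        try {
            TEMPLATES = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(new File("C:\\transformation.xsl")));
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void transform(File in, File out) throws Exception {
        // Cheap per call: newTransformer() reuses the compiled stylesheet,
        // and writing to a File streams the HTML instead of buffering a String.
        Transformer t = TEMPLATES.newTransformer();
        t.transform(new StreamSource(in), new StreamResult(out));
    }
}

If stylesheet compilation dominates the 20 seconds, caching the Templates removes that cost from every call after the first.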

  • Performance issues with FDK in large XML documents

    In my current project with FrameMaker 8 I'm experiencing severe performance issues with some FDK API calls.
The documents are about 3-8 MB in size. Formatted, they cover 150-250 pages.
When importing such an XML document I do some extensive "post-processing" using the FDK. This processing happens in Sr_EventHandler() during the SR_EVT_END_READER event. I noticed that some FDK function calls which modify the document's structure, like F_ApiSetAttribute() or F_ApiNewElementInHierarchy(), take several seconds, for the larger documents even minutes, to complete one single function call. I tried to move some of these calls to earlier events, mostly to SR_EVT_END_ELEM. There the calls work without a delay. Unfortunately I can't rewrite the FDK client to move all the lagging calls to earlier events.
Does anybody have a clue why such delays happen, and possibly a suggestion on how to solve this issue? Thank you in advance.
PS: I already thought of splitting such a document into smaller pieces by using the FrameMaker book function. But I don't think the structure of the documents will permit such an automatic split, and it definitely isn't an option to change the document structure (the project is about migrating documents from Interleaf to XML with the constraint of keeping the document layout identical).

FP_ApplyFormatRules sounds really good--I'll give it a try on Monday. Wonder how I could miss it, as I already tried FP_Reformatting and FP_Displaying to no avail?! By the way, what is actually meant by FP_Reformatting (when I used it I assumed it would do exactly what FP_ApplyFormatRules sounds like it does), or is that one another of Lynne's well-kept secrets?
Thanks for all the helpful suggestions, guys. On Friday I already had my first improvements in a test version of my client: I did some (not all necessary) structural changes using XSLT pre-processing, and processing went down from 8 hours(!) to 1 hour--Yeappie! I was also playing with the idea of writing a wrapper for F_ApiNewElementInHierarchy() which actually pastes an appropriate element created in a small flow on the reference pages at the intended insertion location. But now, with FP_ApplyFormatRules on the horizon, I'm quite confident I can get even the complicated stuff under control which cannot be handled by the XSLT pre-processing, as it is based on the actual formatting of the document at run-time and cannot be anticipated in pre-processing.
    --Franz

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
Hope you can help; it involves a performance issue when creating a report / query.
I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is done on the value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),

  • Performance Issue in Oracle EBS

    Hi Group,
I am working on a performance issue at a customer site; let me explain the behaviour.
There is one node for the database and another for the application.
    Application server is running all the services.
EBS version is 12.1.3 and database version is 11.1.0.7, with AIX on both servers.
The customer has added memory to both servers (database and application); initially they had 32 GB, now they have 128 GB.
Today I increased the memory parameters for the database and also increased the JVM processes from 1 to 2 for Forms and OACore; both JVMs are 1024M.
The behaviour is that when users are navigating inside a form and push the down button quickly, the form starts "thinking" (reloading and waiting 1 or 2 minutes to respond). It is not particular to a specific form; it happens in several forms.
The gathering-statistics job is scheduled every weekend. I am not sure what the problem can be; I have collected a trace of the form and uploaded it to Oracle Support with no success or advice.
I have just run a ping command and the response time between servers is below 5 ms.
    I have several activities in mind like:
    - OATM conversion.
    - ASM implementation.
    - Upgrade to 11.2.0.4.
Has anybody seen this behaviour? Any advice about this problem will be really appreciated.
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

    Hi Bashar, thank you very much for your quick response.
"If both servers are on the same network then the ping should not exceed 2 ms."
If I remember correctly, I did a ping last Wednesday, and there were some peaks over 5 ms.
"Have you checked the network performance between the clients and the application server?"
Also, I did a ping from the PC to the application and database, and it was responding in less than 1 ms.
"What is the status of the CPU usage on both servers?"
There is no overhead on the CPU side; I tested it (scrolling getting frozen) with no users in the application.
"Did this happen after you performed the hardware upgrade?"
Yes, it happened after changing some memory parameters in the JVM and the database.
Oracle has suggested applying the latest Forms patches according to this Note: Doc ID 437878.1
    Thanks in advance.
    Kind regards,
    Francisco Mtz.

  • SAP BW OLAP Universe performance issue

    Hi,
Based on BO recommendation, I created a universe on top of a big BEx query which included all characteristics and key figures from a cube. I then created a WebI report with 5 characteristics and 1 key figure from this universe. I also created the same report in BEx Query Designer (same number of objects). I ran both: the BEx query completed in under a minute and the WebI report took more than 12 minutes to run. I did a bunch of other tests with different object combinations and saw a similar differential in query performance.
    I did a trace using 'sm50' and saw the open SQL submitted to BW from BEx was very different from what got submitted from the webi report. Here is what I saw in pseudo code.
    Bex:
    select dimension1.blah, dimension2.blah, dimension3.blah, dimension..... sum(measure)
    from dimension1, dimension2, dimension3, dimension..... factTable
    where dimension1.SID = factTable.d1SID
    and dimension2.SID = factTable.d2SID
    and ......
    and (query filters)
    OLAP Universe
select dimension1.blah, dimension1.SID
from dimension1, factTable
where dimension1.SID = factTable.d1SID
select dimension2.blah, dimension2.SID
from dimension2, factTable
where dimension2.SID = factTable.d2SID
select dimension3.blah, dimension3.SID
from dimension3, factTable
where dimension3.SID = factTable.d3SID
It seems the OLAP universe was querying the BW cube one dimension at a time and then somehow piecing the result sets together to form the final result set. Dimension tables joined to the fact table without any filter definitely cause performance issues. Besides, I have no idea why the query requests are submitted to the BW system like that.
I looked in various forums and found no similar issues posted by others. Has anyone had any performance problems with their OLAP universe? Is this a matter of configuration?
    My envrionment:
    SAP BW 3.5
    BOXI 3.0
    ORACLE DB (version ??)

    Hi,
You cannot compare a BEx query and a universe query by just comparing the trace.
A BEx query makes ABAP calls, whereas a universe query makes MDX calls.
Moreover, there is only one MDX call sent to SAP; what you have noticed is an additional MDX call to retrieve members for a given characteristic when a prompt has been set on it.
Last, Web Intelligence consumes only flattened data (row sets), whereas BEx consumes data sets.
That means there is a flattening operation between the SAP OLAP engine and the data sent to the Web Intelligence document.
A fix has been developed for a customer to improve performance of Web Intelligence queries; this fix will be available to all customers in SP2, planned for June 09.
Here is a brief summary of what the fix provides:
      -  Provide data directly from SAP server to Web Intelligence document. Avoid unnecessary conversion steps: row set --> data set --> row set
      -  Eliminate redundant sort operations
      -  Other optimization operations
    Didier
    Edited by: Didier Mazoue on Nov 20, 2008 8:38 AM

  • Performance issue with Loop under Loop

    Hi,
I have an issue with performance: as per the requirement I have to use loop-under-loop logic.
I have sorted the WHERE condition field in the parent loop and also in the child loop, like below:
    SELECT GUID_PRCTSC   "Primary Key as GUID in "RAW" Format
             GUID_PR       "Primary Key as GUID in "RAW" Format
             STCTS         "Numbering Scheme for Customs Tariff System
             DATAB         "Definitive Date (Valid-From Time)
             GUID_CTSNUMC  "Primary Key as GUID in "RAW" Format
             FROM /SAPSLL/PRCTSC
             INTO TABLE T_PRCTSC
             WHERE STCTS IN S_STCTS.
      IF T_PRCTSC IS INITIAL.
        MESSAGE : I007(ZMSSG) WITH 'Data not available for this entry'.
        STOP.
      ENDIF.
    SORT T_PRCTSC BY GUID_PR.
      SELECT GUID_PRGEN  "Primary Key as GUID in "RAW" Format "+  DG1K902277
             GUID_PR     "Primary Key as GUID in "RAW" Format
             ATTR05A     "Materail Type
             ATTR05B     "Sub-Family
             ATTR05C     "SPEC BUS + DG1K902190
             ATTR10A     "Materail Group
             ATTR20A     "SUBSTANCE ID
             FROM /SAPSLL/PRGEN
             INTO TABLE T_PRGEN
             FOR ALL ENTRIES IN T_TCOATV20
             WHERE ATTR20A EQ T_TCOATV20-ATTRV20
               AND ATTR05A IN S_ATR5A " +DG1K902168
               AND ATTR05C IN S_ATR5C. " + DG1K902190
      IF T_PRGEN IS INITIAL.
        MESSAGE : I007(ZMSSG) WITH 'Data not available for this entry'.
        STOP.
      ENDIF.
    *N-23
    SORT T_PRGEN BY GUID_PR.
    There are 90,000 records available in the table T_PRGEN.
    LOOP AT T_PRGEN INTO WA_PRGEN.
        IF SY-SUBRC = 0.
           WA_FINAL-ATTR05A       = WA_PRGEN-ATTR05A.
           WA_FINAL-ATTR05B       = WA_PRGEN-ATTR05B.
           WA_FINAL-ATTR05C       = WA_PRGEN-ATTR05C. " + DG1K902190
           WA_FINAL-ATTR10A       = WA_PRGEN-ATTR10A.
           WA_FINAL-ATTR20A       = WA_PRGEN-ATTR20A.
        ENDIF.
        READ TABLE T_V_TCAV201 INTO WA_V_TCAV201 WITH KEY ATTRV20 = WA_PRGEN-ATTR20A.
        IF SY-SUBRC = 0.
          WA_FINAL-TEXT1   = WA_V_TCAV201-TEXT1.    "SUBID-TEXT1
        ENDIF.
        READ TABLE T_PNTPR INTO WA_PNTPR WITH KEY GUID_PR = WA_PRGEN-GUID_PR.
        IF SY-SUBRC = 0.
           WA_FINAL-PRVSY  = WA_PNTPR-PRVSY.   "PROD NO
           WA_FINAL-GRVSY  = WA_PNTPR-GRVSY.   "LOGICAL SYS GROUP
        ENDIF.
    * TO Remove the Leading Zeros from prvsy
        SHIFT WA_FINAL-PRVSY LEFT DELETING LEADING '0'.
        READ TABLE T_CORSTA INTO WA_CORSTA WITH KEY GUID_MOBJ = WA_PNTPR-GUID_PR.
        IF SY-SUBRC = 0.
          WA_FINAL-QUAL_STA  = WA_CORSTA-QUAL_STA.
        ENDIF.
        READ TABLE T_PR INTO WA_PR WITH KEY GUID_PR = WA_PNTPR-GUID_PR.
    *& IN THE PROD MASTER PRODUCT CHANGED ON VALUE HAS BEEN   &
    *& MAINTAINED AS SINGLE '0', THIS WILL CAUSE ISSUES WHILE &
    *&  USING CONVERSION EXIT TO DISPLAY THE DATE FORMAT IN   &
    *&  MM/DD/YYYY HH:MM:SEC                                  &
        IF SY-SUBRC = 0.
          IF WA_PR-CHTSP = '0'.
            WA_FINAL-CRTSP   = WA_PR-CRTSP.
            wa_final-chtsp   = W_DATE.
          ENDIF.
          IF WA_PR-CHTSP NE '0'.
            WA_FINAL-CRTSP   = WA_PR-CRTSP.
            wa_final-chtsp   = WA_PR-CHTSP.
          ENDIF.
        ENDIF.
        READ TABLE T_PRT INTO WA_PRT WITH KEY GUID_PR = WA_PR-GUID_PR.
        IF SY-SUBRC = 0.
          WA_FINAL-PRTXT   = WA_PRT-PRTXT.
        ENDIF.
    LOOP AT T_PRCTSC INTO WA_PRCTSC WHERE GUID_PR = WA_PRGEN-GUID_PR. "+DG1K902258  - Performance issue
            IF SY-SUBRC = 0.
    *& TO FILL ATTR20A,PRVSY FOR DIFF STCTS AND CCNGN FOR     |
    *&  EACH LOOP                                             |
              IF WA_FINAL-ATTR20A  IS INITIAL
              AND  WA_FINAL-PRVSY IS INITIAL.
                IF WA_PRGEN-GUID_PR = WA_PNTPR-GUID_PR. "This condition is to fill up all the rows for the same
                                                        " Subid which have multiple stcts fields.
               IF SY-SUBRC = 0.
                  WA_FINAL-ATTR05A       = WA_PRGEN-ATTR05A.
                  WA_FINAL-ATTR05B       = WA_PRGEN-ATTR05B.
                  WA_FINAL-ATTR05C       = WA_PRGEN-ATTR05C. " + DG1K902190
                  WA_FINAL-ATTR10A       = WA_PRGEN-ATTR10A.
                  WA_FINAL-ATTR20A       = WA_PRGEN-ATTR20A.
                 ENDIF.
                  IF SY-SUBRC = 0.
                    WA_FINAL-TEXT1   = WA_V_TCAV201-TEXT1.    "SUBID-TEXT1
                  ENDIF.
                  IF SY-SUBRC = 0.
                    WA_FINAL-PRVSY  = WA_PNTPR-PRVSY.   "PROD NO
                    WA_FINAL-GRVSY  = WA_PNTPR-GRVSY.   "LOGICAL SYS GROUP
                  ENDIF.
    * TO Remove the Leading Zeros from prvsy
                  SHIFT WA_FINAL-PRVSY LEFT DELETING LEADING '0'.
                  IF SY-SUBRC = 0.
                    WA_FINAL-QUAL_STA  = WA_CORSTA-QUAL_STA.
                  ENDIF.
                  IF SY-SUBRC = 0.
                    IF WA_PR-CHTSP = '0'.
                      WA_FINAL-CRTSP   = WA_PR-CRTSP.
                      WA_final-chtsp   = W_DATE.
                    ENDIF.
                    IF WA_PR-CHTSP NE '0'.
                      WA_FINAL-CRTSP   = WA_PR-CRTSP.
                      WA_final-chtsp   = WA_PR-CHTSP.
                    ENDIF.
                  ENDIF.
                  IF SY-SUBRC = 0.
                    WA_FINAL-PRTXT   = WA_PRT-PRTXT.
                  ENDIF.
                ENDIF.
              ENDIF.
          IF SY-SUBRC = 0. " + DG1K902198
                WA_FINAL-GUID_PR      = WA_PRCTSC-GUID_PR.
                WA_FINAL-STCTS        = WA_PRCTSC-STCTS.
                WA_FINAL-DATAB        = WA_PRCTSC-DATAB.       " + DG1K902198
              ENDIF.
             READ TABLE T_CTSNUMC INTO WA_CTSNUMC WITH KEY GUID_CTSNUMC = WA_PRCTSC-GUID_CTSNUMC.
              IF SY-SUBRC = 0.
                WA_FINAL-CCNGN        = WA_CTSNUMC-CCNGN.
              ENDIF.
              APPEND WA_FINAL TO T_FINAL.
              CLEAR WA_FINAL.
            ENDIF.
        ENDLOOP.
        APPEND WA_FINAL TO T_FINAL.
        CLEAR WA_FINAL.
      ENDLOOP.
    Any suggestions to improve the performance will be appreciated!
    Thanks & Regards,
    Kittu
    Edited by: Kittu on Mar 23, 2009 4:03 PM

    Hi
Instead of using the nested LOOP directly, use a READ statement to find the first matching row of the second table, then start the inner loop from that index.
e.g.:
LOOP AT itab1.
  " itab2 must be sorted by field1 for BINARY SEARCH to work
  READ TABLE itab2 WITH KEY field1 = itab1-field1 BINARY SEARCH.
  IF sy-subrc = 0.
    LOOP AT itab2 FROM sy-tabix.
      IF itab2-field1 = itab1-field1.
        " <do your operations>
      ELSE.
        EXIT. " past the matching block, stop scanning
      ENDIF.
    ENDLOOP.
  ENDIF.
ENDLOOP.
This will improve the performance.

  • Reporting Services Unicode Parameters Cause Performance Issues

When I create a report using string parameters, Reporting Services sends the SQL to SQL Server with an N prefix on the string parameters. This is the behavior even when the underlying data table has no Unicode datatypes, and it causes SQL Server to do a scan instead of a seek on these queries. Can this behavior be modified to send the parameters as non-Unicode text?

Workaround to overcome the SSRS report performance hit due to the Unicode conversion issue:
I used a new parameter (of type Internal) which collects/duplicates the original parameter values as a comma-separated string.
In the report dataset query, parse the comma-separated string into a variable table using the XML trick.
Use the variable table in the WHERE ... IN clause.
    Steps:
    Create a new Internal parameter (call it InternalParameter1)
    Under Default Values -> Specify values : Add Exp : =join( Parameters!OrigParameter1.Value,",")
    Pass/Use the InternalParameter1 in your dataset query.
    Example code
DECLARE @InternalParameter1 NVARCHAR(MAX)
SET @InternalParameter1 = '100167600,
100167601,
4302853605,
4030753556,
4026938411'
    --- Load comma separated string to a temp variable table ---
    SET ARITHABORT ON
    DECLARE @T1 AS TABLE (PARALIST VARCHAR(100))
    INSERT @T1 SELECT Split.a.value('.', 'VARCHAR(100)') AS CVS FROM
    ( SELECT CAST ('<M>' + REPLACE(@InternalParameter1, ',', '</M><M>') + '</M>' AS XML) AS CVS ) AS A CROSS APPLY CVS.nodes ('/M') AS Split(a)
    --- Report Dataset query ---
    SELECT CONTRACT_NO, report fields… FROM mytable
    WHERE CONTRACT_NO IN (SELECT PARALIST FROM @T1) -- Use temp variable table in where clause
    Mahesh

  • Sun JVM Performance Issue in Sun Solaris 10 (SPARC)

    Hi,
Issue: performance degradation after the migration of a Java application from IBM AIX 5.3 to Sun Solaris 10 (SPARC).
Normally the application takes less than 1 hour to complete the process on AIX, but after migration to Solaris it is taking 4+ hours.
The Java version on IBM AIX is:
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap32dev-20051104)
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc-32 j9vmap3223-20051103 (JIT enabled)
The Java version on Solaris 10 is:
    Java(TM) Platform, Standard Edition for Business (build 1.5.0_17-b04)
    Java HotSpot(TM) Server VM (build 1.5.0_17-b04, mixed mode)
    Description of Application
The application merges 2 XML files of size 300 MB each using a DOM parser and generates a flat file according to certain business logic. No remote files are used for the file generation. There are two folders with around 200 XML files of similar names in each. The application loads 2 similarly named XML files at a time, one from each folder, and processes them; in the same way, the application processes all 200 XML file pairs in a loop.
    The JVM Parameters are given below.
    /usr/java5/bin/java -cp $CLASSPATH -Xms3072m -Xmx3072M com.db.mcc.creditderiv.GCDXMLTransProc
Here the extended memory in AIX is 3072M (3GB). After copying the same code to Solaris, the application started throwing java.lang.OutOfMemoryError, so we increased the memory up to 12 GB.
Since 32-bit Java allows a maximum of 4 GB of extended memory, we started using 64-bit Java on Solaris via the -d64 argument.
    The Current JVM Parameter in Solaris is given below.
    java -d64 -cp $CLASSPATH -Xms8192m -Xmx12288m com.db.mcc.creditderiv.GCDXMLTransProc ( 64 GB Swap Memory is available in the System)
We have tried the following options:
1. Extended the heap size up to 12 GB using the -Xms and -Xmx parameters and tried multiple -XX options. Earlier the application was working fine in AIX with a 3.5 GB heap. (64 GB of swap is available on the system.)
2. Downloaded and installed the Solaris SPARC patches from
   http://java.sun.com/javase/downloads/index_jdk5.jsp
3. Downloaded and installed the XML and XSLT patch from the Sun website.
4. Tried to run Java in server mode using the -server option.

A 64-bit VM is not necessarily faster than a 32-bit one; I remember at least one suggestion that it could be slower.
Make sure you use the -server option.
As a guess, IBM isn't necessarily a slouch when it comes to Java. It might simply be that their VM was faster. It could be using a different DOM library as well.
Could be an environment problem, of course.
Profiling the application, and the machine as well, might provide information.
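As a first step before profiling, it can help to confirm from inside the batch job which VM and data model actually took effect (flags set in wrapper scripts sometimes never reach the JVM). A minimal check; note that sun.arch.data.model is a Sun-specific property and may be absent on other vendors' VMs:

public class VmCheck {
    public static void main(String[] args) {
        System.out.println("VM:         " + System.getProperty("java.vm.name"));
        System.out.println("Version:    " + System.getProperty("java.vm.version"));
        // "32" or "64" on Sun JVMs; reports whether -d64 took effect.
        System.out.println("Data model: " + System.getProperty("sun.arch.data.model"));
        // Confirms the -Xmx actually granted to this run.
        System.out.println("Max heap:   "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
    }
}

java.vm.name also shows whether the Server VM ("Java HotSpot(TM) Server VM") or the Client VM is running, i.e. whether -server took effect.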

  • OBIEE  Performance Issues

I am experiencing performance issues with queries generated by OBIEE. The queries generated by OBIEE run 2+ hours. Looking at the generated SQL, the execution plan is not utilizing the indexes on the FACT table.
We have dimension tables linked to a partitioned FACT table. We have created local bitmap indexes on all dimension keys. The execution plan generated for the OBIEE-generated SQL statement does not use the indexes; it executes a FULL table scan on our FACT table, which has approximately 260 million rows. When I extract the SELECT portion retrieving the information from the tables, the execution plan changes and indexes are used. Does anyone know what would cause Oracle not to use the same execution plan for the OBIEE-generated SQL?
    OBIEE generated SQL
    WITH SAWITH0
    AS ( SELECT SUM (T92891.DEBIT_AMOUNT) AS c1,
    SUM (T92891.CREDIT_AMOUNT) AS c2,
    T91932.COMPL_ACCOUNT_NBR AS c3,
    T92541.APPROP_SYMBOL AS c4,
    T92541.FUND_CODE AS c5,
    T91992.ACCOUNT_SERIES_NAME AS c6,
    T91932.ACCOUNT_NBR AS c7
    FROM DW_FRR.DIM_FUND_CODE_FISCAL_YEAR T92149,
    DW_ICE.DIM_FUND T92541,
    DW_FRR.DIM_ACCOUNT T91932,
    DW_FRR.DIM_ACCOUNT_SERIES T91992,
    DW_ICE.FACT_GL_TRANSACTION_DETAIL T92891
    WHERE (T91932.ACCOUNT_SID_PK = T92891.ACCOUNT_SID_FK
    AND T91932.ACCOUNT_SERIES_SID_FK =
    T91992.ACCOUNT_SERIES_SID_PK
    AND T92149.FUND_CODE_FISCAL_YEAR_SID_PK =
    T92891.FUND_CODE_FISCAL_YEAR_SID_FK
    AND T92541.FUND_SID_PK = T92891.FUND_SID_FK
    AND T92149.FISCAL_YEAR >= :"SYS_B_0")
    GROUP BY T91932.ACCOUNT_NBR,
    T91932.COMPL_ACCOUNT_NBR,
    T91992.ACCOUNT_SERIES_NAME,
    T92541.FUND_CODE,
    T92541.APPROP_SYMBOL),
    SAWITH1 AS (SELECT DISTINCT :"SYS_B_1" AS c1,
    D1.c3 AS c2,
    D1.c4 AS c3,
    D1.c5 AS c4,
    D1.c2 AS c5,
    D1.c1 AS c6,
    D1.c6 AS c7,
    D1.c7 AS c8
    FROM SAWITH0 D1)
    SELECT D1.c1 AS c1,
    D1.c2 AS c2,
    D1.c3 AS c3,
    D1.c4 AS c4,
    D1.c5 AS c5,
    D1.c6 AS c6
    FROM SAWITH1 D1
    ORDER BY c1,
    c3,
    c2,
    c4
    Execution PLan
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 1 M
         29 PX COORDINATOR
              28 PX SEND QC (ORDER) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005 Cost: 1 M Bytes: 1019 M Cardinality: 11 M
                   27 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 1 M Bytes: 1019 M Cardinality: 11 M
                        26 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                             25 PX SEND RANGE PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                                  24 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 972 K Bytes: 1019 M Cardinality: 11 M
                                       4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Bytes: 3 K Cardinality: 179
                                            3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                                 2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                                      1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.DIM_FUND :Q1002 Cost: 2 Bytes: 3 K Cardinality: 179
                                       23 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 972 K Bytes: 843 M Cardinality: 11 M
                                            20 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 9 Bytes: 54 K Cardinality: 962
                                                 19 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003 Cost: 9 Bytes: 54 K Cardinality: 962
                                                      18 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 9 Bytes: 54 K Cardinality: 962
                                                           15 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 6 Bytes: 814 Cardinality: 22
                                                                14 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001 Cost: 6 Bytes: 814 Cardinality: 22
                                                                     13 MERGE JOIN CARTESIAN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 6 Bytes: 814 Cardinality: 22
                                                                          9 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1001
                                                                               8 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 2 Bytes: 16 Cardinality: 2
                                                                                    7 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 2 Bytes: 16 Cardinality: 2
                                                                                         6 TABLE ACCESS BY INDEX ROWID TABLE DW_FRR.DIM_FISCAL_YEAR Cost: 2 Bytes: 16 Cardinality: 2
                                                                                              5 INDEX RANGE SCAN INDEX (UNIQUE) DW_FRR.UNQ_DIM_FISCAL_YEAR_IDX Cost: 1 Cardinality: 2
                                                                          12 BUFFER SORT PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 4 Bytes: 319 Cardinality: 11
                                                                               11 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                                                    10 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT_SERIES :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                           17 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1003 Cost: 2 Bytes: 10 K Cardinality: 481
                                                                16 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT :Q1003 Cost: 2 Bytes: 10 K Cardinality: 481
                                            22 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1004 Cost: 971 K Bytes: 4 G Cardinality: 207 M Partition #: 28 Partitions accessed #1 - #12
                                                 21 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.FACT_GL_TRANSACTION_DETAIL :Q1004 Cost: 971 K Bytes: 4 G Cardinality: 207 M Partition #: 28 Partitions accessed #1 - #132
    Inner SQL Statement without the OBIEE wrap around SQL
    SELECT SUM (T92891.DEBIT_AMOUNT) AS c1,
    SUM (T92891.CREDIT_AMOUNT) AS c2,
    T91932.COMPL_ACCOUNT_NBR AS c3,
    T92541.APPROP_SYMBOL AS c4,
    T92541.FUND_CODE AS c5,
    T91992.ACCOUNT_SERIES_NAME AS c6,
    T91932.ACCOUNT_NBR AS c7
    FROM DW_FRR.DIM_FUND_CODE_FISCAL_YEAR T92149,
    DW_ICE.DIM_FUND T92541,
    DW_FRR.DIM_ACCOUNT T91932,
    DW_FRR.DIM_ACCOUNT_SERIES T91992,
    DW_ICE.FACT_GL_TRANSACTION_DETAIL T92891
    WHERE (T91932.ACCOUNT_SID_PK = T92891.ACCOUNT_SID_FK
    AND T91932.ACCOUNT_SERIES_SID_FK =
    T91992.ACCOUNT_SERIES_SID_PK
    AND T92149.FUND_CODE_FISCAL_YEAR_SID_PK =
    T92891.FUND_CODE_FISCAL_YEAR_SID_FK
    AND T92541.FUND_SID_PK = T92891.FUND_SID_FK
    AND T92149.FISCAL_YEAR >= :"SYS_B_0")
    GROUP BY T91932.ACCOUNT_NBR,
    T91932.COMPL_ACCOUNT_NBR,
    T91992.ACCOUNT_SERIES_NAME,
    T92541.FUND_CODE,
    T92541.APPROP_SYMBOL
    Execution Plan
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 25 K Bytes: 79 M Cardinality: 728 K
         28 PX COORDINATOR
              27 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10002 :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                   26 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                        25 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                             24 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                                  23 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 25 K Bytes: 79 M Cardinality: 728 K
                                       22 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 12 K Bytes: 190 M Cardinality: 2 M
                                            4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 2 Bytes: 319 Cardinality: 11
                                                 3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10000 :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                                      2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                                           1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT_SERIES :Q1000 Cost: 2 Bytes: 319 Cardinality: 11
                                            21 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 12 K Bytes: 142 M Cardinality: 2 M
                                                 6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001
                                                      5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_ACCOUNT :Q1001 Cost: 2 Bytes: 12 K Cardinality: 481
                                                 20 VIEW PUSHED PREDICATE VIEW PARALLEL_COMBINED_WITH_PARENT SYS.VW_GBC_17 :Q1001 Bytes: 660 Cardinality: 11
                                                      19 SORT GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 376 K Cardinality: 5 K
                                                           18 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 2 M Cardinality: 36 K
                                                                7 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.DIM_FUND :Q1001 Cost: 2 Bytes: 7 K Cardinality: 179
                                                                17 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                     15 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1001 Cost: 10 K Bytes: 1 M Cardinality: 36 K
                                                                          8 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT DW_FRR.DIM_FISCAL_YEAR :Q1001 Cost: 2 Bytes: 16 Cardinality: 2
                                                                          14 PARTITION LIST ALL PARALLEL_COMBINED_WITH_PARENT :Q1001 Partition #: 22 Partitions accessed #1 - #11
                                                                               13 PARTITION LIST ALL PARALLEL_COMBINED_WITH_PARENT :Q1001 Partition #: 23 Partitions accessed #1 - #12
                                                                                    12 BITMAP CONVERSION TO ROWIDS PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                                         11 BITMAP AND PARALLEL_COMBINED_WITH_PARENT :Q1001
                                                                                              9 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) PARALLEL_COMBINED_WITH_PARENT DW_ICE.FK_ACCOUNT_GLTRANS_IDX :Q1001 Partition #: 23 Partitions accessed #1 - #132
                                                                                              10 BITMAP INDEX SINGLE VALUE INDEX (BITMAP) PARALLEL_COMBINED_WITH_PARENT DW_ICE.FK_FUNDCODE_FY_GLTRANS_IDX :Q1001 Partition #: 23 Partitions accessed #1 - #132
                                                                     16 TABLE ACCESS BY LOCAL INDEX ROWID TABLE PARALLEL_COMBINED_WITH_PARENT DW_ICE.FACT_GL_TRANSACTION_DETAIL :Q1001 Cost: 10 K Bytes: 401 K Cardinality: 18 K Partition #: 23 Partitions accessed #1
    Any and all help would be greatly appreciated.

    Have you gathered statistics in the data warehouse recently? That's one reason the optimizer might choose the wrong execution plan.
    Is the schema a star schema? If so do you have the init.ora parameter 'STAR_TRANSFORMATION_ENABLED' set to yes in the DW? This can drastically affect performance.
Please test any changes you make in a test system before applying them to live, as altering these settings can have unwanted impacts.
    Thanks
    Robin

  • Heavy performance issues using Adobe Interactive Form PDFs generated by SAP BPM

    Dear experts,
we use Adobe Interactive Form PDFs (generated with LiveCycle Designer) as Human Tasks within SAP BPM processes. The PDFs are generated and transmitted correctly, but when they are opened on the receiver's PC, Windows freezes for 2-3 minutes; then the PDF opens and can be filled out and sent back. Subsequent PDFs open much faster, but when the PC is restarted we get the same problem again. We use Adobe Reader XI (11.0.2) on our clients; is there any known performance issue?
Please note that we have this problem with EVERY Adobe Interactive Form PDF... I created a simple PDF containing just one field and the client PC still freezes, so it can't be the form or the scripting. Normal static PDFs can be opened without any problems.
    Best regards,
    David

    They haven't really announced it, because there is no product to announce. Rather the opposite.
    There are no conversion tools, so far as I know.
    XFA forms are a non-starter if you want portability.
    AcroForms are a nightmare in themselves, because the functionality is limited in Adobe Reader and varies between absent and weird in other products. No idea about Blackberry support.
    You will not find a simple recommendation. Rather, you need to use Acroforms and carefully test everything (EVERYTHING: no assumptions) on every platform you intend to support.
    Yes, rather unsatisfactory, but until Adobe realise that the future is platform equivalence or irrelevance, this is where we are.

  • Performance issue: Calling a BAPI PO create in test mode to get error msgs

    Hi,
We have an ALV report in which we display purchase orders that got created in SAP but either got blocked due to not meeting PO release strategy tolerances or have failed output messages. We display the failed messages as well.
We are looping over the internal table of EBAN (purchase requisitions) and calling the PO-create BAPI in test mode to get the failed messages.
Now we are facing a performance issue in production. What would be a more efficient way to get the error messages?
    Regards,
    Ayub H.
    Moderator message: duplicate post (different ID, same company...), see below:
    Performance issue calling bapi po create in test mode to get error messages
    Edited by: Thomas Zloch on Mar 9, 2012

Hi Suvarna,
so you need to reduce the number of PO simulations.
- Likely you have already checked that all EBAN entries should already be converted into POs. If there is a large number of "new" EBAN entries, they don't need to be simulated.
- If it's a temporary problem: give aid to correct the problems (maintain prices or whatever the error reasons are). Then the amount of not-converted purchase requisitions (PRs) should drop, too.
- If it's likely that your volume of open PRs will stay high: create a Z-table with the key of EBAN and a counter, simulate PO conversion (once a day) and store the results in the Z-table. In your report you can use the results... if they are "new enough". From time to time new simulations should be done; missing master data might then be available.
Maybe users should be allowed to start this 2nd report manually (in background), too -> then they can update the messages after some data corrections themselves, without waiting for the result (just check later in the online report and do something different in between).
And you might need to explain that a PO simulation takes as long as a PO creation... there is no easy or fast way around this.
    Best regards,
    Christian

  • Performance issue calling bapi po create in test mode to get error messages

    Hi,
We have a report which displays in ALV the purchase orders that got created in SAP but either got blocked due to not meeting PO release strategy tolerances or have failed output messages. We display the failed messages too.
We are looping over the internal table of EBAN (purchase requisitions) and calling the PO-create BAPI in test mode to get the failed messages.
Now we are facing a performance issue in production. What would be another efficient way to get the error messages without affecting performance?
    Regards,
    Suvarna

Hi Suvarna,
so you need to reduce the number of PO simulations.
- Likely you have already checked that all EBAN entries should already be converted into POs. If there is a large number of "new" EBAN entries, they don't need to be simulated.
- If it's a temporary problem: give aid to correct the problems (maintain prices or whatever the error reasons are). Then the amount of not-converted purchase requisitions (PRs) should drop, too.
- If it's likely that your volume of open PRs will stay high: create a Z-table with the key of EBAN and a counter, simulate PO conversion (once a day) and store the results in the Z-table. In your report you can use the results... if they are "new enough". From time to time new simulations should be done; missing master data might then be available.
Maybe users should be allowed to start this 2nd report manually (in background), too -> then they can update the messages after some data corrections themselves, without waiting for the result (just check later in the online report and do something different in between).
And you might need to explain that a PO simulation takes as long as a PO creation... there is no easy or fast way around this.
    Best regards,
    Christian

  • Report Performance Issue - Activity

    Hi gurus,
I'm developing an Activity report using the transactional database (online real-time objects).
The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user id).
In order to fulfill that requirement I've created 2 reports:
1) All activities related to a contact -- Report A
pulls in Activity ID, Activity Type, Status, Contact ID
2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
<Activity ID not equal to any Activity ID in Report B>
Has anyone encountered performance issues due to the advanced filter in Analytics before?
Any input is really appreciated.
Thanks in advance,
    Fina

    Fina,
Union is always the last option. If you can get all records in one report, do not use UNION.
Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
if contact id is null (or = 'Unspecified') then owner name else contact name
Hopefully this helps.
