Performance issue in correlation with hidden objects in reports

Hello,
Is there a known performance issue related to hidden objects in reports that use conditional suppression? (HFM version 11.1.2.1)
With comprehensive reports we see huge performance differences between the same report with and without hidden objects. Furthermore, we suspect that some of the trouble with our reporting server environment stems from end users running these reports.
Any advice would be welcome!
Regards,
bsc
Edited by: 972676 on Nov 22, 2012 11:27 AM

If you say that working with EVDRE for each separate sheet is fine, that means the main problem is related to the interdependency of your custom VB macros.
I suggest adding a log (written to a text file) to your macro; you will see that that minute is actually spent performing operations from the custom macro.
Kind Regards
Sorin Radulescu

Similar Messages

  • How to get around a performance issue when dealing with a lot of data

    Hello All,
    This is an academic question really, I'm not sure what I'm going to do with my issue, but I have some options.  I was wondering if anyone would like to throw in their two cents on what they would do.
    I have a report; the users want to see all agreements and all conditions related to the updating of rebates and the affected invoices. From a technical perspective, ENT6038-KONV-KONP-KONA-KNA1 are the tables I have to hit. The problem is that when they retroactively update rebate conditions they can hit thousands of invoices, which blossoms out to thousands of conditions... you see the problem. I simply have too much data to grab; it times out.
    I've tried everything around the code.  If you have a better way to get price conditions and agreement numbers off of thousands of invoices, please let me know what that is.
    I have a couple of options.
    1) Use shared memory to preload the data for the report. This would work, but I won't know what data needs to be loaded until report run time. They put in a date; I simply can't preload everything. I don't like this option much.
    2) Write a function module to do this work. When the user clicks on the button to get this particular data, it will launch the FM in background and e-mail them the results. As you know, the background job won't time out. So far this is my favored option.
    Any other ideas?
    Oh... nope, BI is not an option, we don't have it. I know, I'm not happy about it. We do have a data warehouse, but the prospect of working with that group makes me wince.

    My two cents: firstly, I totally agree with Derick that it's probably a good idea to go back to the business and justify their requirement with regard to reporting and "whether any user can meaningfully process all those results in an aggregate". But having dealt with customers across industries over a long period of time, it would probably be a bit fanciful to expect them to change their requirements too much; in my experience they neither understand the technology (much) nor want to hear about a system's technical limitations. They want what they want, if possible yesterday!
    So, about dealing with performance issues within ABAP: I'm sure you are already using efficient programming techniques like hashed internal tables with unique keys and accessing rows of the table using field symbols, but what I would suggest is to look at using [Extracts|http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9ed135c111d1829f0000e829fbfe/content.htm]. I've had to deal with this a couple of times in the past with massive amounts of data and found Extracts to be very efficient performance-wise. A good point to remember when using Extracts, quoting SAP Help: "The size of an extract dataset is, in principle, unlimited. Extracts larger than 500KB are stored in operating system files. The practical size of an extract is up to 2GB, as long as there is enough space in the filesystem."
    Hope this helps,
    Cheers,
    Sougata.
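    The hashed-table advice above translates beyond ABAP. As a rough illustration (not the poster's actual program; table sizes and field names are invented), here is a Python sketch of why a hashed key lookup beats a linear scan when matching thousands of invoices:

```python
import time

# Invented stand-in for the invoice/condition data set; sizes are
# deliberately modest so the sketch runs quickly.
records = [{"invoice": i, "condition": i % 97} for i in range(50_000)]
wanted = list(range(40_000, 40_100))  # invoices to look up

# Linear scan: O(n) per lookup, like READ TABLE on a standard table.
t0 = time.perf_counter()
hits_linear = [r for w in wanted for r in records if r["invoice"] == w]
linear_s = time.perf_counter() - t0

# Hashed access: O(1) per lookup, like a HASHED TABLE WITH UNIQUE KEY.
by_invoice = {r["invoice"]: r for r in records}
t0 = time.perf_counter()
hits_hashed = [by_invoice[w] for w in wanted]
hashed_s = time.perf_counter() - t0

print(f"linear: {linear_s:.3f}s  hashed: {hashed_s:.6f}s")
```

    The same asymmetry is why the Extracts suggestion helps: the expensive part is repeated searching and reallocation, not the raw data volume alone.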

  • Problem with hidden objects when moving layers, groups or artboards

    Hi there,
    I am using Illustrator CS5.5 on a Mac.
    Sometimes, after moving stuff around (groups, layers or artboards), I notice that hidden objects didn't move with everything else.
    Does somebody know how this can happen if no objects are locked?
    I cannot make this happen on purpose with a simple example and it is really annoying not to be sure that everything was moved properly.
    Thanks
    R.

    Emil,
    A little misunderstanding here.
    Like you say, clicking on the target to select (targeting objects): this is exactly what I do.
    So, SOMETIMES, when I select a group (not a layer) which contains hidden objects by clicking on its target in the Layers panel, only the visible objects get selected (i.e. only the visible objects get a full square at their right in the Layers palette and the group gets a small square) AND the group is NOT targeted at all (i.e. the target of the group is not highlighted; only the targets of the selected (visible) objects within the group are highlighted).
    Then, if I move the group, only the visible objects are moved, because only those are selected.
    The problem is that, I don't know why, when doing this (targeting objects), SOMETIMES it results in selecting everything within the group, BUT SOMETIMES it results in selecting only the visible objects within the group.
    I don't know what triggers that difference in behavior.
    R.

  • Performance Issue Executing a BEx Query in Crystal Reports for Enterprise 4.0

    Dear Forum
    I'm working for a customer with a big performance issue executing a BEx query in Crystal via a transient universe.
    When the query is executed directly against BW via RSRT, it returns results in under 2 seconds.
    When executed in Crystal, without the use of subreports, multiple executions (calls to BICS_GET_RESULTS) are seen. Runtimes are as long as 60 seconds.
    The BEx query is based on a MultiProvider without ODS.
    The RFC trace shows BICS connection problems; calls such as BICS_PROV_GET_INITIAL_STATE take a lot of time.
    I checked note 1399816 (Task name - prefix - RSDRP_EXECUTE_AT_QUERY_DISP), and it's not applicable because the customer is on BI 7.01 SP 8 and, as described in the message, already has:
    - domain RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters', data type CHAR, no. of characters 32, decimal digits 0;
    - data element RSDR0_TASKNAME_LONG in package RSDRC with the description 'BW Data Manager: Task name - 32 characters' and the previously created domain.
    Could you suggest something to check, please?
    Thanks in advance
    Regards
    Rosa

    Hi,
    It would be great if you would quote the ADAPT and tell the audience when it is targeted for a fix.
    Generally speaking, CR for Enterprise isn't as performant as WebI because uptake was rather slow, so I'm of the opinion that there are improvements to be gained. So please work with Support via OSS.
    My only recommendations can be:
    - Patch up to Patch 2.12 in BI 4.0
    - Define more default values on the BEx query variables
    - Implement note 1593802 in BW: Performance optimization when loading query views
    Regards,
    H

  • Webi report performance issue compared with backend SAP BW

    Hi
    Our backend is SAP BW.
    Dashboards and Webi reports are created against SAP BW,
    i.e. through a universe on top of a BEx query, then a Webi report, then a dashboard through Live Office.
    My point is that when we create Webi reports with a date range as a parameter (sometimes a mandatory variable which comes up as a prompt in Webi), or sometimes take the L01 calendar date from BEx and create a prompt in Webi, the reports take a lot of time to open: 5 minutes, 10 minutes, sometimes 22 minutes.
    This type of problem never occurred when the backend was Oracle.
    Drilling in a Webi report also takes a lot of time.
    Can you suggest any solution?

    Hi Gaurav,
    We logged this issue with support already, it is acknowledged.
    What happens is that whenever you use an InfoObject in the condition (so you pull the object into a condition and build the condition there, or use that object in a filter object in the universe and then use the filter), that object is added to the result set. Since the query will retrieve a lot of different calendar days for the period you selected, the result set will be VERY big and performance virtually non-existent.
    The workaround we used is a BEx variable for all date-based selections. One optional range variable makes it possible to build various types of selections: less-than (with a very early start date), greater-than (with a very far-in-the-future end date), and between. Because the range selection is now handled by BEx, the calendar day will not be part of the selection...
    Good luck!
    Marianne
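    Marianne's explanation — the calendar day being pulled into the result set multiplies the row count — can be sketched with a toy aggregation. This is only an illustration with an invented table; the real mechanism lives in the BW/universe layer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (calday TEXT, material TEXT, amount REAL)")
rows = [(f"2012-01-{d:02d}", f"M{m}", 1.0)
        for d in range(1, 31) for m in range(10)]
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# Date used purely as a filter: one row per material comes back.
small = conn.execute(
    "SELECT material, SUM(amount) FROM sales "
    "WHERE calday BETWEEN '2012-01-01' AND '2012-01-30' "
    "GROUP BY material").fetchall()

# Date forced into the result set (what the reported bug effectively did):
# one row per material *per day*, 30x more rows over the wire.
big = conn.execute(
    "SELECT calday, material, SUM(amount) FROM sales "
    "WHERE calday BETWEEN '2012-01-01' AND '2012-01-30' "
    "GROUP BY calday, material").fetchall()

print(len(small), len(big))  # 10 300
```

    Pushing the range selection into a BEx variable, as described above, keeps the query in the "small" shape.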

  • Performance issues in Boot Camp with the PassMark benchmark test

    I wonder if anyone can shed some light on this.
    I've got Win XP 32-bit SP3 installed on Boot Camp. When I test this using the PassMark benchmark test (latest version on a 30-day trial), I get poor CPU performance. I am running a 2.53 13" MacBook Pro. I can compare the results with other people's results from the PassMark database, by the same CPU and also by other MacBooks.
    My scores are nearly half what other similar MacBooks are showing in CPU Math, which is quite a difference.
    So I wonder what's up, and how can I check it further? Half sounds like only one CPU running.
    I've compared VMware Fusion and Boot Camp as well and there's not much in them; both results show around half the performance. (This is Fusion running the Boot Camp install. A fresh Windows install in Fusion shows slightly better results, but not by much.)
    Any pointers or help would be great. For example, is there some way under OS X I can check the performance of the MacBook as a baseline?
    Is there something I've not done whilst installing Win XP through Boot Camp?
    Thanks in advance!
    paul

    Maybe wait until software catches up and is patched to work with the Intel 3000 plus 6970M.
    Apple did get an early jump on Sandy Bridge.
    Can't find the thread from someone else (thanks for making searching threads so cumbersome).
    AMD Mobility:
    http://support.amd.com/us/gpudownload/windows/Pages/radeonmob_win7-64.aspx
    And for Nvidia 320M users:
    http://www.nvidia.com/object/notebook-win7-winvista-64bit-275.27-beta-driver.html
    3DMark06 will, I suppose, almost surely need a patch.
    I'd hit the PC laptop gaming websites if you can:
    http://www.bing.com/search?q=6970M+3DMark06
    http://www.notebookcheck.net/Intel-HD-Graphics-3000.37948.0.html
    This should be right up your alley:
    http://forums.macrumors.com/showthread.php?t=1101179&page=2
    http://www.bing.com/search?q=6970M+with+Intel+3000+graphics
    Most of this type of stuff is at your fingertips with a little searching.

  • Performance issues home sharing with Apple TV2 from Mountain Lion Server

    I have a Mac Mini which I have just upgraded to Mountain Lion Server from Snow Leopard Server.
    I have noticed that the performance of Streaming a film using Home Sharing to an Apple TV2 has degraded compared to the Snow Leopard setup. In fact the film halts a number of times during play back which is not ideal.
    I have tested the network between the 2 devices and cannot find a fault.
    Has anyone come across this problem before?
    Are there any diagnostic tools I can use to measure the Home Sharing streaming service to the Apple TV2 device?
    Any help much appreciated

    Well, I tried a few other things and one worked but again just the first time I tried connecting to the desktop PC with iTunes. I flashed my router with the latest update and the ATV2 could see the iTunes library and I was able to play media. Later in the day I was going to show off to my daughter that I had fixed it and, to my dismay, no go. I tried opening the suggested ports but no luck.
    I then tried loading iTunes on a Win7 laptop and it works perfectly with the ATV2. Both the laptop and the ATV2 are connected to the router wirelessly, while the desktop is connected to the router by Ethernet. Not sure if this is part of the issue, as it sounds like this didn't work for others. The only other difference between the laptop and desktop is that the desktop has Win7 SP1 loaded while the laptop does not; now I'm scared to load it, though I don't think that's the issue. All in all, a very vexing situation. Hopefully Apple comes up with a solution soon.

  • Performance issue while working with large files.

    Hello Gurus,
    I have to upload about 1 million keys from a CSV file on the application server and then delete the matching entries from a DB table containing 18 million entries. This is causing performance problems and my program is very slow. Which approach is better?
    1. First read all the data from the CSV and then use one delete statement?
    2. Or delete each line directly after reading the key from the file?
    Another program has to update about 2 million entries in a DB table containing 20 million entries. Here I also have very big performance problems (the program has been running for more than 14 hours). What is the best way to work with such a large amount of data?
    I tried to rewrite the program to run in parallel, but since it will only run once, the cost of implementing aRFC parallelization is too high. Please help; maybe someone doing migrations is good at this.
    Regards,
    Ioan.

    Hi,
    I would suggest you split the file and then process each set.
    Lock the table to ensure it is available the whole time.
    After each set, do a commit and then proceed. This ensures that if the run breaks in the middle, you don't have to start over: you just delete from the files the entries that were already processed.
    Also make use of sorted tables and keys when deleting/updating the DB.
    In a delete involving multiple entries, using an internal table can be tricky, as some records may be successfully deleted and some may not. To make sure, first get the count of records in the DB that match internal table set 1, then do the delete from the DB with internal table set 1, then check again that the count of matching records in the DB is zero. This makes sure all the records were deleted, but it may cost some performance, and the goal here is to reduce the execution time.
    Gurus may have a better idea..
    Regards
    Sree
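    Sree's count-delete-verify loop can be sketched generically. This is a hedged Python/SQLite illustration of the chunked pattern (the table and key names are invented), not the ABAP the poster would actually write:

```python
import sqlite3

def delete_in_chunks(conn, keys, chunk_size=500):
    """Delete rows by key in chunks, committing after each chunk so a
    failed run can restart without redoing finished work."""
    deleted = 0
    for start in range(0, len(keys), chunk_size):
        chunk = keys[start:start + chunk_size]
        marks = ",".join("?" * len(chunk))
        # Count first, then delete, then verify -- mirroring the advice.
        before = conn.execute(
            f"SELECT COUNT(*) FROM items WHERE k IN ({marks})", chunk
        ).fetchone()[0]
        cur = conn.execute(f"DELETE FROM items WHERE k IN ({marks})", chunk)
        assert cur.rowcount == before  # every matching row really went away
        conn.commit()
        deleted += cur.rowcount
    return deleted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (k INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, 'x')",
                 [(i,) for i in range(10_000)])
removed = delete_in_chunks(conn, list(range(2_000, 5_000)))
print(removed)  # 3000
```

    Committing per chunk keeps each transaction (and its undo footprint) small, which is usually what turns a 14-hour run into something tractable.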

  • Issue using BI Accelerator with a Business Objects Web Intelligence report

    Hi,
    I am trying to improve the performance of Webi reports on BW queries (with a huge data load) using the BI Accelerator.
    When I run a BW query (with millions of records) in BEx 7, I get data in no time when using BIA.
    When I run a Webi report on a universe developed on the same BW query, the report runs for hours and finally gives the message "No data to return Query1".
    I am not sure whether my report query is hitting BIA from WebI or not.
    Can we use BIA with WebI reports (to increase performance) or not?
    Any help on this please.
    Thanks,
    Dhana.

    Hi Dhana,
    Webi - like any tool that is based on BEx queries - will use BIA. However, for Webi the data has to be passed through the MDX interface which slows things down. Please make sure you apply all notes listed in SAP note <a href="http://service.sap.com/sap/support/notes/1142664">1142664 MDX: Composite SAP note about performance improvements</a>.
    Regards,
    Marc
    SAP NetWeaver RIG

  • Performance issues for iOS with high resolution.

    I made an app with a resolution of 480x320 for iOS. It works quite well.
    I then remade it with a resolution of 960x640. In AIR for iOS settings I set Resolution to "High".
    The app looked great, however there was a noticeable drop in performance.
    The app functioned the same way as the original lower-resolution app, but it was lagging.
    Has anyone else had this problem?
    Am I doing something wrong?

    With my game, I had around 60 fps on the 3GS and around 50 on the iPhone 4 with the high settings. I got around 10 fps extra by using: stage.quality = StageQuality.LOW;
    That was with AIR 2.6. I tried with AIR 2.7, but it seems that command can't be used there (?)

  • Performance issues when working with huge lists

    I've got a script that reads a large CSV spreadsheet and parses the data into a list of the form [[A1,B1,C1], [A2,B2,C2], [A3,B3,C3]] and a second list of the form [#A1:B1,#A2:B2,#A3:B3] etc... The actual spreadsheet is about 10 columns x 10,000 rows. Reading the file string goes fast enough; the parsing starts off fast but slows to a crawl after about 500 rows (I put the row count on the stage to check progress). Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    A sample of one of the parsing loops is below. I'm aware all interactivity will stop as this is executed. This script is strictly for internal use; it crunches the numbers in two spreadsheets and merges the results to a new CSV file. The program is intended to run overnight and the new file harvested in the morning.

    > Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    Is this a trick question? Sure they are. All of them.
    addProp and append are quite fast (due to the list object preallocating memory as required), so I doubt that they are the cause of the problem.
    getaProp will search each item in the list; therefore, if you are searching for the last item, or if the item is not in the list, the more items there are, the slower the command.
    I didn't go through all your code, but I noticed:
    - this: repeat with rowCount = 2 to file2string.line.count
    Big no-no! Line counting is a very slow operation to be evaluated in a loop.
    - and this: myFile2data.append(myLineData)
    String operations like this require memory reallocation and are therefore very slow. If you do conclude that such an operation causes the problem, consider using a preallocated buffer (create a big string in advance) and then use
    mydata.char[currentoffset..(currentoffset + newstr.length)] = newstr
    This can make code run even hundreds of times faster compared to the append method.
    Applied CD wrote:
    > I've got a script that reads a large CSV spreadsheet and parses the data into a list of the form [[A1,B1,C1], [A2,B2,C2], [A3,B3,C3]] and a second list of the form [#A1:B1,#A2:B2,#A3:B3] etc... The actual spreadsheet is about 10 columns x 10,000 rows. Reading the file string goes fast enough, the parsing starts off fast but slows to a crawl after about 500 rows (I put the row count on the stage to check progress). Does anyone know if the getaProp, addProp, and append methods are sensitive to the size of the list?
    >
    > A sample of one of the parsing loops is below. I'm aware all interactivity will stop as this is executed. This script is strictly for internal use, it crunches the numbers in two spreadsheets and merges the results to a new CSV file. The program is intended to run overnight and the new file harvested in the morning.
    >
    > writeLine("File 2 Data Parsing" & RETURN)
    > myOrderColumn = myHeaders2.getOne("OrderNum")
    > myChargesColumn = myHeaders2.getOne("Cost")
    > myFile2data = []
    > mergedFedExCharges = [:]
    > repeat with rowCount = 2 to file2string.line.count
    >   myLineData = []
    >   repeat with i = 1 to file2string.line[rowCount].item.count
    >     myItem = file2string.line[rowCount].item[i]
    >     if i = 1 then
    >       myItem = chars(myItem, 2, myItem.length)
    >     end if
    >     myLineData.append(myItem)
    >   end repeat
    >   if myLineData.count = myHeaders2.count then
    >     myFile2data.append(myLineData)
    >     myOrderSymbol = symbol("s" & myLineData[myOrderColumn])
    >     myCurrentValue = getaProp(mergedFedExCharges, myOrderSymbol)
    >     if voidP(myCurrentValue) then
    >       mergedFedExCharges.addProp(myOrderSymbol, 0.00)
    >     end if
    >     mergedFedExCharges[myOrderSymbol] = mergedFedExCharges[myOrderSymbol] + value(myLineData[myChargesColumn])
    >     writeUpdate(myLineData[1])
    >   else
    >     writeError("file 2 : " & string(myLineData) & RETURN)
    >   end if
    > end repeat
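    The pitfalls called out above (re-counting lines inside the loop condition, linear getaProp lookups) are not Lingo-specific. A rough Python analogue of the merge loop, with invented column names and data, shows the shape of the fix:

```python
import csv
import io

# Toy stand-in for file2string: order number and cost columns.
raw = "OrderNum,Item,Cost\nA1,widget,10.5\nA2,gadget,3.0\nA1,widget,2.5\n"

# Split the text into rows ONCE, outside the loop -- the equivalent of
# not re-evaluating file2string.line.count on every iteration.
rows = list(csv.reader(io.StringIO(raw)))
header = rows[0]
order_col = header.index("OrderNum")
cost_col = header.index("Cost")

# A dict gives O(1) lookups, unlike getaProp's linear scan of the list.
merged = {}
for line in rows[1:]:
    if len(line) != len(header):
        continue  # malformed row, as in the writeError branch
    order = line[order_col]
    merged[order] = merged.get(order, 0.0) + float(line[cost_col])

print(merged)  # {'A1': 13.0, 'A2': 3.0}
```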

  • Performance issue on iPad4 with AIR SDK 3.9

    Hi!
    I have an app that I created with Flex SDK 4.6.0. I first compiled the app with AIR SDK 3.1 and it runs with good performance on the iPad 4 (and a little bit slow on the iPad 2). Then I upgraded the AIR SDK to version 3.9 and suddenly my app started to run slowly (but on the iPad 2 performance is good).
    Is there any known problem with AIR SDK 3.9 on the iPad 4? Or on iOS 6.1?
    Should I downgrade the AIR SDK back to 3.1 to get good performance on the iPad 4?
    Thanks in advance
    UPD: I've downgraded the AIR SDK to 3.1 and my app got its good performance back! (But there are some strange bugs.)
    Message was edited by: yx

    Hi Nimit!
    1. I've upgraded the AIR SDK to 4.0 beta and the problem has gone away.
    2. Unfortunately I'm not sure I can share my app; it's not in the policy of the company I'm working for. I'll check it out with my boss.
    Thank you,
    Olga

  • Performance issue using Universe with SNC connection

    I had a dynamic dashboard using Live Office > Webi report > Universe > BEx query which was working fine, but we recently implemented SNC between BusinessObjects and SAP BW, due to which the universe connection has been changed to a single sign-on connection. After changing the connection in the universe, I see a performance degradation when refreshing the connections in the Xcelsius dashboard. Earlier the connection refresh time was 6 seconds; now it is around 30 seconds. Interestingly, I have tried refreshing the Webi report and its refresh time did not change (still less than 6 seconds), and I have also tried refreshing the Live Office component directly in the design/spreadsheet mode of Xcelsius; even here the refresh time remains the same, less than 6 seconds.
    The connection refresh time is bad only when I am in preview mode or when I deploy the SWF to the BusinessObjects server.
    Xcelsius version: 2008 (5.3)
    BO version: 3.1 SP2 Fix Pack 2.8
    Thanks

    Anup,
    What happens behind the scenes when the application restarts?
    What are the other ways of achieving the same behavior, like getting the application state back to its initial state?

  • SSAS Strange Performance Issues (Long running with NO read or write activity) - UAT Test Environment

    Hi All,
    I'm looking for some pointers; my team and I have drawn a blank as to what is going on here.
    Our UAT system is a virtual machine.
    I have written a simple MDX query which, on a normal, freshly processed cube, executes in under 15 seconds, and I can keep re-running it:
    Run 1: 12 secs
    Run 2: 8 secs
    Run 3: 8 secs
    Run 4: 7 secs
    Run 5: 8 secs
    Run 6: 28 MINUTES!!
    This is on our test environment; I am the only user connected and there is no processing active.
    Could anyone please offer some advice on where to look, or tips on what the issue may be?
    Regards,
    Andy

    Hi aown61,
    According to your description, you get one very long run after executing the same query several times. Right?
    In this scenario, it's quite strange that one run takes so long. I suggest using SQL Profiler to monitor the events during execution. It can track engine process events, such as the start of a batch or a transaction, and you can replay the events captured on the Analysis Services instance to see exactly what happened. For more information, please refer to the link below:
    Use SQL Server Profiler to Monitor Analysis Services
    If you have any question, please feel free to ask.
    Regards,
    Simon Hou
    TechNet Community Support

  • Solve Performance Issue... with multiple threads

    I made an application with a huge number of rows in a database table...
    Every single process could touch thousands of rows.
    Does it make sense to divide one process into multiple single HTTP requests, for example by creating ten application modules, i.e. into multiple threads?
    Has anyone had the same problem?
    And what is the best strategy to solve it?

    OK, this helps to understand the problem.
    We had a problem like yours. What we ended up doing is reading the files into a temporary DB table, committing every 500 rows to reduce memory usage. We do this without any validation, just to get hold of the data in the DB.
    After all the data from a file is in a DB table, we do the validation (you can even use PL/SQL for this) and show the user all rows which are not valid. This gives the user the chance to correct the rows (or dismiss them).
    After that (now knowing that the data should process without any error) we do the real work of inserting the data.
    All you have to do is work in chunks (we use 500-1000 rows) before committing the data already processed. Flags in the temporary table allow us to restart the process if something happens while processing the data.
    Working in chunks allows the framework to free and regain some of the memory used while doing the work.
    Timo
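    Timo's staging pattern — load raw, validate in the database, process valid rows in committed chunks with restart flags — can be sketched like so. SQLite stands in for the real database, and all names and sizes are illustrative:

```python
import sqlite3

CHUNK = 500  # commit every N rows, as described above

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        valid INTEGER,                -- set by the validation pass
        processed INTEGER DEFAULT 0   -- restart flag
    );
    CREATE TABLE target (id INTEGER PRIMARY KEY, payload TEXT);
""")

# 1) Load the raw file into staging without validation, in chunks.
raw_rows = [(i, f"row-{i}") for i in range(2_000)]
for start in range(0, len(raw_rows), CHUNK):
    conn.executemany("INSERT INTO staging (id, payload) VALUES (?, ?)",
                     raw_rows[start:start + CHUNK])
    conn.commit()

# 2) Validate inside the database (a toy rule here), flagging bad rows
#    so the user could correct or dismiss them.
conn.execute("UPDATE staging SET valid = (id % 100 != 0)")
conn.commit()

# 3) Real work, only for valid and not-yet-processed rows, committing
#    per chunk so a crash can resume where it stopped.
while True:
    batch = conn.execute(
        "SELECT id, payload FROM staging "
        "WHERE valid = 1 AND processed = 0 LIMIT ?", (CHUNK,)).fetchall()
    if not batch:
        break
    conn.executemany("INSERT INTO target VALUES (?, ?)", batch)
    conn.executemany("UPDATE staging SET processed = 1 WHERE id = ?",
                     [(i,) for i, _ in batch])
    conn.commit()

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 1980
```

    The `processed` flag is what makes the run restartable: rerunning step 3 after a crash simply picks up the remaining rows.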
