How to Handle Large Dumps and Complex Calculations in OBIEE

Hi,
We are working on some reports in OBIEE and are unable to optimize their performance. Please consider the following scenarios:
1) Detail Report
This report is essentially a data dump: there are 5 dimensions and a fact table, and we are just selecting fields from the various dimensions and the fact. The problem is that there are over 5,000 records, which I guess is what makes the report very slow.
Another issue is that the page controls simply don't work: if we click (>>) to see all records, it takes a long time but still shows only the first 25 records. Any idea what the problem is?
2) Calculated Report
We have one calculated report with 7 calculated columns. Every column contains different measures, so we have to use a lot of FILTER expressions, e.g.:
FILTER(Measure1 USING {@VarDate} BETWEEN DIM1.FromDate AND DIM1.ToDate AND {@VarDate} BETWEEN DIM2.FromDate AND DIM2.ToDate AND <SOME OTHER FILTERS>)
The granularity of the data is at employee level and there are around 1,388,726 records in the fact table.
Is there any way to optimize the above, other than creating a summary table?
Kindly guide us on the above scenarios.
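
For anyone weighing the summary-table option mentioned above, a rough sketch of what such a table could look like; the table, column, and grain choices here are entirely hypothetical, since the real ones depend on the report:

-- Pre-aggregate the employee-level fact to the grain the report needs,
-- so far fewer rows have to be read and summed at query time.
CREATE TABLE fact_summary AS
SELECT d1.dept_id,
       TRUNC(f.activity_date, 'MM') AS activity_month,
       SUM(f.measure1)              AS measure1_total,
       COUNT(*)                     AS record_count
FROM   fact_detail f
JOIN   dim1 d1 ON d1.dim1_key = f.dim1_key
GROUP  BY d1.dept_id, TRUNC(f.activity_date, 'MM');

Such a table can then be mapped in the RPD as an additional logical table source, so the BI Server picks it automatically for queries at that grain.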

Hi,
Thanks for your reply. I was trying the things suggested and found that the actual physical SQL (which I can get from the "Manage Sessions" link) runs in about 4-5 seconds in Query Analyzer, but when I studied the SQL carefully it does not contain some of the filters which I have applied in the report.
For example, I have a filter <Measure.Present = 1> in the report, but I cannot see that filter in the physical SQL. However, the results it gives on the dashboard are correct, i.e. as if the filter had been applied.
Any idea why this is happening?
My guess is that because the above filter is not being applied at the physical SQL level, the query fetches something like 20,000 records into the server cache and then applies <Measure.Present = 1>. Is that why it might be taking so long?
Please give me your suggestions.
Thanks!
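
To illustrate that guess (with made-up table and column names): when a filter is pushed down, it appears in the physical SQL's WHERE clause and only matching rows ever leave the database; when it is not, the database returns every row and the BI Server filters them afterwards in memory.

-- Filter pushed down to the database: only matching rows are fetched
SELECT e.employee_name, f.present
FROM   fact_table f
JOIN   dim_employee e ON e.emp_key = f.emp_key
WHERE  f.present = 1;

-- Filter not pushed down: every row is fetched into the BI Server,
-- which then discards the ones where present <> 1
SELECT e.employee_name, f.present
FROM   fact_table f
JOIN   dim_employee e ON e.emp_key = f.emp_key;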

Similar Messages

  • How to handle monster fact and monster dimension in OBIEE

    Hi,
    What are the ways to handle a monster fact and a monster dimension in OBIEE 10g?
    Thanks

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle a large result set from a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter: if I don't specify it, by default only 100 records are returned in the query result; if I do specify it, the result is limited to that number.
    Is there any way to get around this row-count limit? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data in a grid, a chart, or a direct-connected link to the brain.
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on the data of relevance, rather than forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model, so that each request deals with only one time "slice" (e.g. an hour, a day, etc.) and provides a scrolling window. This is commonly how large datasets are dealt with in applications.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
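
    As a concrete sketch of the time-slice idea (table, column, and bind-variable names are hypothetical), each request fetches only one window of data:

    -- Fetch a single one-hour slice per request; the client "scrolls"
    -- by moving the window boundaries forward or backward.
    SELECT event_id, event_time, payload
    FROM   event_log
    WHERE  event_time >= :window_start
    AND    event_time <  :window_end
    ORDER  BY event_time;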

  • In BDC how do you handle header data and item data

    In BDC, how do you handle header data and item data?

    Raja,
    Can you be clearer?
    Usually you load the header data once, then loop at the item data and load the item data.
    This example should help you.
    http://www.sap-img.com/abap/bdc-example-using-table-control-in-bdc.htm
    Regards,
    Ravi
    Note - Please mark all the helpful answers

  • How to use SQL OVER and PARTITION BY in OBIEE Expression Builder??

    Hi there,
    I want to create a new logical column with the following SQL query:
    SUM(Inventory Detail.Qty) OVER(PARTITION BY Inventory Detail.A,Inventory Detail.B,Item.C,Inventory Detail.D,MyDATE )/SUM(Inventory Detail.Qty) OVER(PARTITION BY Inventory Detail.A,Inventory Detail.B,Item.C )
    How to use the OVER and PARTITION BY in OBIEE Expression Builder??
    Thanks in Advance

    hi bipin,
    We can't use OVER (PARTITION BY ...) in the Expression Builder (RPD), but you can express the same formula with the BY keyword in the column's fx in Answers:
    SUM(Inventory Detail.Qty BY Inventory Detail.A, Inventory Detail.B, Item.C, Inventory Detail.D, MyDATE) / SUM(Inventory Detail.Qty BY Inventory Detail.A, Inventory Detail.B, Item.C)
    First check whether the numerator gives correct results, then go on to the denominator, and compare the results with the SQL that you have.
    Let me know if that works.
    thanks,
    saichand.v
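
    For anyone new to this syntax: in Answers, SUM(expr BY col1, col2) computes the aggregate at the level of the listed columns while the rest of the query keeps its own grain, which is what OVER (PARTITION BY ...) does in plain SQL. A minimal made-up example, computing each product's share of its region's sales:

    SUM(Sales.Amount BY Product.Name, Region.Name) / SUM(Sales.Amount BY Region.Name)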

  • How to handle large heap requirement

    Hi,
    Our application requires a large amount of heap memory to load data into memory for further processing.
    The application is load-balanced, and we want to share the heap across all servers so that one server can use the heap of another.
    Server1 and Server2 have 8 GB of RAM, and Server3 has 16 GB of RAM.
    If a request comes to Server1 and it requires more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
    Is there any mechanism or product which allows us to share heap across all the servers? Or is there any other way to handle a large heap requirement?
    Thanks,
    Atul

    That isn't how you design it (based on your brief description).
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
    A load balanced system exists for performance and failover scenarios.
    Obviously, in a failover situation the "shared heap" you asked about would fail completely, because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.

  • How to handle large images?

    Hi,
    Does anyone know how to handle big JPG images (1280*960) so that they can be presented in a MIDlet?
    The problem is that the images require so much memory that they can't be decoded into an Image object with the Image.createImage method. One solution would be to extract the thumbnail image from the EXIF headers; unfortunately, at least images taken with a Nokia 6680 don't contain a thumbnail in the EXIF headers.
    So the only solution seems to be to decode the byte representation of the image and resize it before creating an Image object.
    Does anybody know of a library for this, or have tips on where to start?
    Br, Ilpo

    Hi,
    I think it is not possible. My application contains a file browser (which uses JSR-75). The user can use the browser to select an image either from phone memory or the memory card. After the selection I would like to present the selected image so that the user can be sure it is the right one. The selected image will then be sent to the server side with some additional data for further processing (but that is another story).
    Now the problem is that, for example, with a Nokia 6680 the user can take images as big as 1280*960, and I can't present those any more because of the memory restrictions. With a 640*480 image there is no problem, because I can create an Image object and then use a simple algorithm to resize the image for presentation.
    Br, Ilpo

  • How to handle large library, limited data plan

    I've been using Itunes Match for about six months now, and I'm having problems...
    I live rurally and so I use an AT&T hotspot with a 10 gig/month data plan for my phone, ipad, and desktop mac. My wireless connection speed is pretty good.
    I have about 15000 tracks in my Itunes library, most not purchased through Itunes.
    When I signed up for Match and went through the initial process, it took about 24 hours and used up about six gig, which caused a significant overage in my data plan. Consequently, I just use Match on my Mac and decided not to use it on my other devices because of the data limitations. My motivation to use it currently is as a back up for my music. I connect my phone to my mac manually to transfer some of my music to my phone.
    I figured that the data overuse problem was a one time deal if I didn't use my other devices.
    But recently, when I purchase a song from eMusic or Amazon, the iCloud processing image pops up after the download completes. iTunes will then start the process of sending data to Amazon, which was still going at 4 hours yesterday when I manually stopped it as I saw my data plan being used up. It seemed to restart the sending process periodically and never got to the analyzing or returning-data stages.
    Now my recently purchased music shows up with both the iCloud processing symbol and a faded iCloud download icon, one for each separate track. I can play a "processing" track, but iTunes won't allow me to add it to a playlist. Further, if I'm online with my hotspot, sometimes the regular iTunes icon is visible, but at other times it's the icon with the thunderbolt through it. I'm guessing this means it's streaming with the first icon and playing from my hard drive with the second.
    So a couple of questions, and sorry for the length, I'm a first-time support user
    1) Does it make sense to use Match with such a large library, and limited data plan?
    2) "Where" is the music I've purchased - on my computer or in the cloud?
    3) Should it take all day to send data to iTunes, when the only updates to my library are songs I've deleted and a handful of new albums I purchased?
    4) Am I using up my data plan when the regular iCloud icon appears and I'm online? Is there a way to manually play from the hard drive to reduce use of my data plan? I turned off Match once and paid with a half-day sending-and-receiving session when I turned it back on.
    5) Will I lose music if I unsubscribe from Match?
    Thanks to anybody who has the inclination to respond to any of these questions!

  • How to export full dump and metadata of particular table

    Let us consider that the MYTEST schema has 6 tables:
    tname         tabtype
    myt           table
    myaxpertlog   table
    abb           table
    ccc           table
    ddd           table
    xxx           table
    Now, from this schema I want a full dump, but from the myaxpertlog table I need the metadata only, not the records.
    c:\> exp mytest/log file=20130409mytest0904pm.dmp tables=(myaxpertlog) rows=n
    When I tried this I got only the one table, and it does have records. That is not what I want.

    Hello,
    I follow a smart way to do this:
    1. Create a parfile.
    2. Put these conditions in the parfile:
    directory=DPUMP_DIR
    dumpfile=dumpfilename.dmp
    logfile=logfilename.log
    schemas=yourschema
    query=yourschema.myaxpertlog:"where 1=2"
    In this example you will get all the tables inside the schema you provided, with their data. The table myaxpertlog will come with metadata only, because the condition 1=2 never holds.
    Please note that this condition applies only if you want to export those tables from the same schema.
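
    For reference, invoking Data Pump export with that parfile would look something like the line below, assuming the parfile is saved as exp_mytest.par and DPUMP_DIR is a directory object you can write to; the credentials are placeholders:

    c:\> expdp mytest/password parfile=exp_mytest.par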
    Kind Regards
    Mohamed ElAzab
    http://mohamedelazab.blogspot.com/

  • How to handle large data in file adapter

    We have a scenario Proxy -> PI -> File Server using the file adapter.
    The file adapter is using FCC for conversion.
    Recently we went live with the wave 2 products, and suddenly the message volume for this interface increased. Because of this the file adapter is not performing well: PI slows down or frequently disconnects from the file server, so we end up with either duplicate records in the file or a file in the wrong format.
    The file size is somewhere around 4.07 GB, which I also think is quite high for PI to handle.
    Can anybody suggest how we can handle such large data?
    Regards,
    Vikrant

    Check this blog for huge file processing:
    Night Mare-Processing huge files in SAP XI
    You can also take a look at this blog about high-volume messages:
    Step-by-Step Guide in Processing High-Volume Messages Using PI 7.1's Message Packaging
    And the PI performance tuning best practices:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2016a0b1-1780-2b10-97bd-be3ac62214c7?QuickLink=index&overridelayout=true&45896020746271

  • How to handle Stored Procedure and Views

    Dear All,
    While dealing with an Oracle-database-related scenario I came across stored procedures and views, which are complex in nature. How can we handle them using SAP XI?
    Is the JDBC adapter capable of that? Can you help me with the data type structure for Oracle?
    How does max occurs play an important role in that? How do I identify the root-level and item-level structure for Oracle?
    I am dealing with stored procedures while inserting data, and I need to use views to get data from the Oracle database.
    What is the syntax of the query we use with the JDBC adapter?
    Please help and provide a bit more detail on this so that I can execute the scenario.
    Thanks
    Gaurav

    1) jdbc:oracle:thin:@xxx.xxx.xxx.xxx:1521:sid
    2) Occurrence ==> 0, 1, >1, unbounded
    Occurrence => ready to accept 0 / 1 / more than 1 / multiple records (for the source) and how they will be passed to the target.
    http://help.sap.com/saphelp_erp2004/helpdata/en/b6/0b733cb7d61952e10000000a11405a/frameset.htm
    3)
    <StatementName5>
      <storedProcedureName action="EXECUTE">
        <table>realStoredProcedureName</table>
        <param1 [isInput="true"] [isOutput="true"] type=SQLDatatype>val1</param1>
      </storedProcedureName>
    </StatementName5>
    refer
    http://help.sap.com/saphelp_nw04/helpdata/en/22/b4d13b633f7748b4d34f3191529946/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/2e/96fd3f2d14e869e10000000a155106/frameset.htm
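
    For context, the XML document above is the JDBC receiver structure for calling a procedure; the procedure it executes could be shaped roughly like the sketch below. The procedure name, parameter, and table are placeholders, not anything mandated by the adapter:

    -- Hypothetical Oracle procedure matching the structure above:
    -- one input parameter, executed by the JDBC adapter with action="EXECUTE"
    CREATE OR REPLACE PROCEDURE realStoredProcedureName (
        param1 IN VARCHAR2
    ) AS
    BEGIN
        INSERT INTO target_table (col1) VALUES (param1);
    END;
    /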

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions?

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius

  • How to split large MP4 and AVI video files into smaller scenes

    Hi All
    I've been looking into how to cut up large files ready for import into CS4. So far all I can find are the usual suspects that only let you cut one section from a file. Does anyone use any software that enables you to split a large video file (MP4, AVI and so on) into, say, 20 sections all in one hit?
    I do a lot of HD onboard cams, so the video is set to fire and only shut off when the run has finished. I then end up with about 60 percent of the file (in different parts) that I need to trash.
    Any help or advice would be much appreciated indeed!
    Xray

    the_wine_snob wrote:
    Maybe, but maybe not. I use DigitalMedia Converter to convert to DV-AVI Type II's (I'm only doing SD), and it has a Split function. However, I have never used that, so do not know how well it might work for your needs, if at all. I just do not know. I believe that Deskshare has a user forum, and that might be a good place to try, after you've looked down their FAQ's.
    I hope that others will have a definitive answer for you, with iron-clad suggestions.
    Good luck,
    Hunt
    I don't know.

  • How to handle a large number (7,200+) of identical HorizontalRules

    I have an application where performance is becoming an issue. I have 30 VBoxes that each contain identical HorizontalRules; the number of HorizontalRules is unknown until render time but can easily be up to 240 per VBox. As there are 30 VBoxes, this results in 7,200 HorizontalRules being added to the stage, which causes large memory consumption and poor rendering time on lower-specification machines. To speed this up I have tried using Sprites and graphics.draw to render the lines, but I think it is the time taken to create the objects that is the problem.
    Is there any way to create one HorizontalRule and add it to the stage multiple times? I know that Flex will remove the child if it is added again with different [x,y] coordinates, so my attempts to do that failed.
    Thanks for any suggestions.
    Thanks for any suggestions.

    The VBoxes are typically about 20 pixels wide, and each has a line about 4 or 5 pixels apart all the way down its length. That is why there are so many of the little blighters.
    You are probably right, but I had a problem with graphics.draw in that the lines had to be added as rawChildren. The component is resizable, and I found it impossible to move the drawn lines: I could not get a good reference to them, even when I stored each line in a separate Array before adding it as a rawChild, so I could not even delete them reliably. I know that rawChildren and children/elements keep things at different indexes, but I didn't manage to find a way to use that information.
    You have given me the idea that I shouldn't draw them all on the screen at once, though. With the full 7,200 lines, only about 30% are on the screen; I will look into using an item renderer. Is that the correct way to do it?
    Does anyone know how to reliably access rawChildren so they can be moved? Maybe my app is just a little weird. I will try building a test case and see if I can add to and remove from the rawChild list freely in a simpler application.

  • Jdev 10.1.2 UIX: How to handle page event and also follow link destination

    Hello guys, I think this should be easy:
    I have a frame with a link that produces an event which is handled by an event handler. But this link also has destination and targetFrame attributes.
    The event handling just updates the visual state of the frame which contains the link, but I also need the link to load the destination into the targetFrame. That is not happening: the event is handled, but the link doesn't navigate.
    How can I get both the event handling and the destination to work?
    Here is the code for the link:
    <link text="${uix.current.title}" destination="${uix.current.destination}"
          targetFrame="main">
      <primaryClientAction>
        <fireAction event="optionClick">
          <parameters>
            <parameter key="clickedIndex" value="${uix.current.index}"/>
          </parameters>
        </fireAction>
      </primaryClientAction>
    </link>
    Thanks
    Fer

    Thanks to everyone who read it. I figured out how to solve this.
    The workaround was to set every link's destination to one action which does two things:
    1. It updates the visual state of the first frame (by setting attributes on a session object), and
    2. It puts the destination that I wanted loaded into the second frame into a request attribute.
    Besides the above, the link had its targetFrame set to "_parent", so the frameset page was reloaded and therefore reloaded both frames. This way the first frame renders updated (thanks to the changes made to the session object), and the second frame has its source attribute bound to ${requestScope.frameDestination} to load the desired page (actually a Struts action).
    Hope this helps someone else.
    Bye.
