Delay in data retrieval

We have various users reporting missing records in Discoverer reports. After posting a journal in Oracle General Ledger, the Discoverer report displays one or more related journal lines, while other lines from the same journal are missing. After restarting Discoverer the data is displayed properly. I haven't been able to reproduce this; it apparently only occurs when a user queries data entered in EBS through an already active Discoverer session. Since all records are committed at exactly the same time when the journal is posted, I'd expect Discoverer to retrieve either all of them or none at all, not a random subset. Has anyone run into anything like this before? Could it be a server issue?
Regards,
Arthur

Hi,
This can happen if the report is running at exactly the same time as the import process.
Since Discoverer runs a SELECT statement, it will retrieve the data as of the time you start running the report.
In case you use the Viewer...
The behaviour you describe, where newly added data is not displayed after pressing refresh, can be due to caching in memory: if the tables are already in memory, Discoverer will select from memory.
You can configure the application server so that no caching is performed for Discoverer Viewer.
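A side note on the "all or none" expectation: Oracle's statement-level read consistency does guarantee that a single SELECT sees either all or none of one committed transaction, so a partially visible journal points at caching or session state rather than the database itself. A minimal two-session sketch (GL_JE_LINES and its columns are used illustratively here):

-- Session A: a report query starts at time T0. Statement-level read
-- consistency means it only sees data committed before T0.
SELECT je_header_id, COUNT(*) AS line_count
  FROM gl_je_lines
 GROUP BY je_header_id;

-- Session B: a journal posting commits new lines at T1 > T0.
INSERT INTO gl_je_lines (je_header_id, je_line_num) VALUES (101, 1);
INSERT INTO gl_je_lines (je_header_id, je_line_num) VALUES (101, 2);
COMMIT;

-- Session A's in-flight statement sees neither new row; any statement
-- started after T1 sees both atomically. Random missing lines therefore
-- suggest a stale cache, not a half-visible transaction.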
Tamir

Similar Messages

  • Data retrieval buffers - buffer size and sort buffer size

    Is there any difference in tuning data retrieval buffers between BSO and ASO?
    From the Oracle documentation, the buffer size setting is per database per Essbase user, i.e. more physical memory will be used if there is a lot of concurrent data access from users.
    However, even for 100 concurrent users, the default buffer size of 10KB (BSO) or 20KB (ASO) seems very small compared to other cache settings (total buffer cache is 100 * 20KB = 2MB). Should we increase the value to 1000KB to improve data retrieval performance for users? Is the improvement the same for an online application (e.g. Hyperion Planning) as for a reporting application (e.g. Financial Reporting)?
    Assume 3 Essbase plan types, each with 100 concurrent users:
    PLAN1 - 1000KB * 100 = 100MB (total retrieval buffer) + 1000KB * 100 = 100MB (total sort buffer)
    PLAN2 - 1000KB * 100 = 100MB (total retrieval buffer) + 1000KB * 100 = 100MB (total sort buffer)
    PLAN3 - 1000KB * 100 = 100MB (total retrieval buffer) + 1000KB * 100 = 100MB (total sort buffer)
    Total physical memory required is 600MB.
    Thanks in advance!

    A 256-sample buffer size will always give you a noticeable amount of latency. If you use software monitoring you should try setting your buffer to 64 samples. With the recording delay slider in Preferences -> Audio you can compensate for the latency (of course not in realtime) so that the audio will be placed exactly where it should have been recorded. In your case set it to a negative value. A loopback test (check the link below) will clarify the exact amount of latency occurring on your system.
    http://discussions.apple.com/thread.jspa?threadID=1883662&tstart=0

  • Logical reporting cube - slow data retrieval

    OLAP 11.2
    I have seen discussion on this board recommending using a logical reporting cube to expose data if there are multiple physical cubes in the workspace.  The problem I am having is that the data retrieval seems very slow from the logical cube while it is fine if I select from the actual physical cubes. Here is my set up.
    Physical cube A - Dimensions x,y,z
    Physical cube B - Dimensions w,x
    Physical cube C - Dimensions y,z
    Logical cube D - Dimensions w,x,y,z, with calculated measures mapped to the measures from cubes A, B and C, as well as some other calculated measures based on these.
    Data retrieval from A, B and C is quick but the query hangs if I select certain measures from D. On some measures, it seems to work without issues.
    Is there something I am missing?
    Thanks,
    Usman

    Usman,
    It's probably because of the LOOP_VAR and/or LOOP_DENSE settings of the measures in your logical reporting cube.
    I have posted on this topic in the past. With this new Oracle Forum interface it's difficult to search old postings.
    (1). Execute the following:
    exec dbms_cube_log.enable(dbms_cube_log.TYPE_OPERATIONS, dbms_cube_log.TARGET_TABLE, dbms_cube_log.LEVEL_HIGH);
    (2). Run the olap query.
    (3). Query CUBE_OPERATIONS_LOG table
    select *
    from cube_operations_log
    where   upper(suboperation) like '%LOOP%' or upper(name) like '%LOOP%'
    order by time;
    Tell us what you see in the NAME and VALUE columns after your query execution.
    Also take a look at this post:   Delay when querying from CUBE_TABLE object, what is it?
    Nasar

  • BPC 10 - EPM data retrieval very slow!

    Hi BPCers,
    We are using an Excel EPM Input Schedules as a Resource Management tool - using VBA to provide the functionality we need.
    Performance is generally good, but quickly deteriorates when handling larger data sets - even 500-600 rows of transactional data is enough to slow data retrieval from our BPC cube to EPM to an unusable speed. This is, in relative terms, a pretty small volume, so there should be some option for optimisation.
    Does anybody have any experience with this? All suggestions welcome. We are operating on EPM Service Pack 7 Patch 1, but I'm not sure that EPM is necessarily the problem here.
    Thanks,
    Tom

    Thanks Gersh,
    Had a look through Fiddler and identified the job that is causing the delay - some rooting around in the ABAP debugger produced the answer as to why adding more data slows processing speed so dramatically.
    When we take data from the back end, we select a couple of parameters which limit the range of data that we are pulling through - a certain set of people, and a certain range of days. Once this is pulled through, allocations are made to any combination of person and day within this range, which generates an extra two properties - a project ID and a work status.
    This makes four properties, and when BPC pulls data it attempts to find every combination of every one of the properties that exists within this range - so the more allocations are made, the more this slows down, as it dramatically increases the number of combinations.
    The result is that BPC runs through a couple of hundred thousand generated tables, most of which are nonsense.
    Not sure what to do from here. This is how BPC reads data so approaching a fix could be difficult.
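    To make the blow-up concrete, here is a toy SQL sketch (table and column names are hypothetical): the generated result space is the cross product of the distinct values of each property, not the number of actual allocation records.

    -- Count the combinations BPC would have to consider: the cross
    -- product of the distinct values of all four properties.
    SELECT COUNT(*) AS generated_combinations
      FROM (SELECT DISTINCT person_id   FROM allocations) p
     CROSS JOIN (SELECT DISTINCT alloc_date  FROM allocations) d
     CROSS JOIN (SELECT DISTINCT project_id  FROM allocations) j
     CROSS JOIN (SELECT DISTINCT work_status FROM allocations) s;
    -- e.g. 50 people x 60 days x 40 projects x 3 statuses = 360,000
    -- combinations, even if only a few hundred real allocations exist.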
    Tom

  • Query Error Information: Result set is too large; data retrieval ......

    Hi Experts,
    I got one problem with my query. When I'm executing my report and drilling in my navigation panel, instead of a table with values the message "Result set is too large; data retrieval restricted by configuration" appears. I already applied "Note 1127156 - Safety belt: Result set is too large". I imported Support Package 13 for SAP NetWeaver 7.0 BI Java (BIIBC13_0.SCA / BIBASES13_0.SCA / BIWEBAPP13_0.SCA) and executed the program SAP_RSADMIN_MAINTAIN (in transaction SE38), with the object and the value as Note 1127156 says... but the problem still appears.
    What could I be missing? How can I fix this issue?
    Thank you very much for helping me out. (Any help would be rewarded.)
    David Corté

    You may ask your Basis guy to increase the ESM buffer (rsdb/esm/buffersize_kb). Did you check the system's memory?
    Did you try to check the error dump using ST22 - Runtime error analysis?

  • WAD : Result set is too large; data retrieval restricted by configuration

    Hi All,
    When trying to execute the web template with fewer restrictions we are getting the below error:
    Result set is too large; data retrieval restricted by configuration
    Result set too large (758992 cells); data retrieval restricted by configuration (maximum = 500000 cells)
    But when we increase the number of restrictions it gives output. For example, if we give fiscal period, company code and brand we are able to get output, but if we give fiscal period alone it throws the above error.
    Note : We are in SP18.
    Do we need to change some setting in configuration? If yes, where do we need to change it, or what else do we need to do to remove this error?
    Regards
    Karthik

    Hi Karthik,
    the standard setting for web templates is to display a maximum of 50,000 cells. The less you restrict your query, the more data will be displayed in the report. If you want to display more than 50,000 cells the template will not be executed correctly.
    In general it is advisable to restrict the query as much as possible: the more data you display, the worse your performance will be. If you have to display more data and you execute the query from Query Designer, or if you use the standard template, you can individually set the maximum amount of cells. This is described over [here|Re: Bex Web 7.0 cells overflow].
    However, I do not know if (and how) you can set the maximum amount of cells differently as a default setting for your template. This should be possible somehow, I think; if you find a solution for this please let us know.
    Brgds,
    Marcel

  • Result set is too large; data retrieval restricted by configuration

    Hi,
    While executing a query for a given period, the message 'Result set is too large; data retrieval restricted by configuration' is displayed. I searched SDN and referred to the following link:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/d047e1a1-ad5d-2c10-5cb1-f4ff99fc63c4&overridelayout=true
    Steps followed:
    1) Transaction Code SE38
    2) In the program field, entered the report name SAP_RSADMIN_MAINTAIN and Executed.
    3) For OBJECT, entered the following parameter: BICS_DA_RESULT_SET_LIMIT_MAX
    4) For VALUE, entered the desired size of the result set, and then executed the program.
    After the said steps, the below message is displayed:
    OLD SETTING:
    OBJECT =                                VALUE =
    UPDATE failed because there is no record
    OBJECT = BICS_DA_RESULT_SET_LIMIT_MAX
    Similar message is displayed for Object: BICS_DA_RESULT_SET_LIMIT_DEF.
    Please let me know how to proceed on this.
    Thanks in advance.
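    One way to check whether the entries exist is to look at the table contents directly, e.g. via SE16 (a sketch assuming the standard RSADMIN layout with OBJECT and VALUE fields):

    -- List any safety-belt entries already present in RSADMIN.
    SELECT object, value
      FROM rsadmin
     WHERE object IN ('BICS_DA_RESULT_SET_LIMIT_MAX',
                      'BICS_DA_RESULT_SET_LIMIT_DEF');

    If no rows come back, the entries do not exist yet, which matches the "UPDATE failed because there is no record" message above: SAP_RSADMIN_MAINTAIN would then need to be run with its insert option rather than update.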

    Thanks for the reply!
    The objects are not available in the RSADMIN table.

  • Real tough data retrieval - assistance needed

    Late 2011 Macbook Pro with 500GB hard drive
    Lion 10.7
    One morning out of absolutely nowhere I get this grey screen with a flashing question mark folder. I take it to the geniuses at the Apple store and they tell me my hard drive has failed (no explanation). My mac is under warranty so they gave me a new hard drive for free, bagged my old hard drive and told me "good luck" retrieving the data.
    I'm on a mission to retrieve the data without paying for services. I have never retrieved data before but I've been doing a lot of forum reading and I have been getting protips from an IT friend who has saved my PC data before.
    As of now, I have been unable to even access the hard drive and so I am reaching out to the community to help me conquer this project.
    the problem is not the OS (according to Apple store)
    when hooking up with the dongle, neither Finder nor Disk Utility detect the bad drive
    the drive will spin when forced by the dongle (so I've ruled out the freezer method)
    I was advised (by friends and forums alike) to download some powerful data retrieval software:
    Data Rescue [did not detect bad external drive]
    Disk Drill [did not detect bad external drive]
    TestDisk (http://www.cgsecurity.org/wiki/TestDisk) [I can get it up and running but I have no idea how to use this software]
    So that seems to be the big problem, when I hook up my failed drive as an external hard drive, there is no acknowledgement from my MBP that it is connected and as such data recovery programs cannot access it. When I still had it installed in my computer, there was no clicking sound and it doesn't seem like any of the components are jammed up as it still spins.
    Where do I go from here? Please keep in mind that I am new to resolving my own technical problems but I'm willing to learn. Tired of being one of those people who look at computer parts and get anxious.
    NOTE: I haven't tried Target Mode as I do not have a firewire cable or access to another mac (yet). If you think I should try this, please let me know.

    A hard drive that will not divulge both its make & model and a reasonable size/capacity to the likes of Disk Utility and data rescue programs has died, and cannot be repaired with any software.
    Target Disk mode will not improve anything. The drive is read as a Mac Volume by software. If it won't mount under Mac OS X, it won't mount under Target Disk Mode.

  • Re: (forte-users) Delays in data transfer..server-to-client

    I would try using DOM (distributed object manager) traces. trc:do:20 will give you information on each message sent from and received by the partition. Levels are 1, 2, 5, 7, and 8, and trc:do:*:8 is very verbose. trc:do:20:1 may tell you what you want to know. trc:do:1:1 will give you a basic one-line-per-DOM-event trace that may also be all you need.
    Communications manager traces will tell you about network and socket-level
    activity, but not about the sizes of the messages themselves. In addition,
    the operating system makes decisions about physical packet size and
    send/receive timing, so CM activities only generally map to actual network
    activity.
    -tdc
    iPlanet Integration Server Engineering
    At 09:24 AM 5/1/01 -0700, you wrote:
    All,
    We are experiencing delays in object transfer between server and client. The delays are longer with large objects (a single object with an array of objects that reflect the rows returned in a database) than small (ie: 10 rows vs 400).
    Does anyone have any (actual) experience using the various Forte' flags in order to show the actual size of the object/packets being passed between the server and client?
    We are using input/output between client and server, input on all the SO's within a partition. Response on the server side is good, roughly 6 seconds or so. The round trip, however, from the time the client makes the SO call to the time that it completes is in the 25-30 second range, leaving roughly 20-25 seconds unaccounted for. I have brought in the network guys, who are requesting the data size and packet information. I did not see what I am looking for using the trc:cm:*:4 and trc:cm:*:8 flags. I will be trying the trc:cm:*:10 flag, but Forte' indicates that this flag is very verbose, and the systems group hates it when I use up all of THEIR disk space!
    Any ideas would be appreciated as always.

    Jeff,
    If the object you are passing does not require changes made to it in the server partition to be returned, pass the object as Copy Input (pass by value, not reference). If it is necessary to pass the object as Input, try to pass only the attributes that are required to the remote partition instead of the whole object.
    Input/Output is normally used with scalar variables. When a scalar is passed to a remote partition and its value is changed in that partition, the new value is not returned to the calling partition unless Input/Output is used. Input/Output should not be used for object-type parameters; if you need to pass a reference, use Input only. If you can pass by value, use Copy Input. You will notice a huge difference in performance changing from Input to Copy Input when passing large objects.
    Hope this helps,
    Travis Foote
    Fortedeveloper.com Inc.
    ----- Original Message -----
    From: "Jeff Bennett" <[email protected]>
    To: <[email protected]>
    Sent: Tuesday, May 01, 2001 9:24 AM
    Subject: (forte-users) Delays in data transfer.. server-to-client

  • Should I use a data retrieval service or software to recover data ?

    Please pardon the length of this post.
    Here's the setup for our office:
    Computer 1:
    10.4.8 OS X
    1 GHZ PowerPC G4: silver grey tower, grey apple
    1MB L3 Cache
    256 MB DDR SDRAM
    Computer 2:
    10.4.8 OS X
    Dual 450 MHZ PowerPC G4: blue grey tower, blue apple
    256 MB SDRAM
    Computer 3:
    10.4.8 OS X
    1 GHZ PowerPC G4 IMac Flat Screen:
    256 MB DDR SDRAM
    I have 2 LaCie Big Disk d2 Extremes daisy chained and connected to the IMac. We use the first to store all of our data to keep our local disks free. The second d2 is the backup to the first. The other 2 computers connect to the d2's via an ethernet hub. The d2's are each partitioned into 4 compartments.
    A couple of days ago I started the system up when I got in in the morning, and the main d2 would not open. I ran Disk Utility, but it said that the drive was damaged beyond its ability to repair. I ran DiskWarrior, and it gave me this message:
    "The directory of disk 'G4' cannot be rebuilt. The disk was not modified. The original directory is too severely damaged. It appears another disk utility has erased critical directory information. (3004, 2176)."
    I contacted DiskWarrior tech support, and after a series of exchanges that had me send him data extracted from Terminal, he said this:
    "It appears that the concatenated RAID inside your LaCie
    drive has failed (your 500GB drive is actually 2 250GB
    hard drives). That is why we only saw 2 partitions
    on "disk3". A possible cause could be a failed bridge
    in the case.
    You may be looking at sending this drive to a data recovery service.
    However, it is possible that we may be able to recover data
    from the partitions that we CAN see. What we would be doing would cause no damage to your data
    unless the hard drives were having a mechanical failure (ie, the
    head crashed and was contacting the platters, similar to scratching
    a record). But from what I've seen, I don't feel that is the case.
    I believe the piece of hardware that 'bridges' the two drives to
    make them act as one has failed. that's why we can only see data
    about 1 of the 2 drives in the case.
    We would only be attempting to gather data off the drive. Since
    data recovery services sometimes charge for amount of data retrieved,
    it's up to you how you want to proceed."
    Most of the data from the past 5 years for our business stands to be lost. Only some of it had been properly backed up on the second drive, due to some backup software issues. I want to do whatever I can to retrieve all, or at least some, of the data. Given what the Alsoft technician said, do you think that the data recovery software available to the consumer is going to be robust enough to retrieve at least the data from the one disk in the drive that is recognizable (there are two 250GB disks in the d2X; only one is responding at all)? If so, do these programs further damage the disks? Or should I just send the drive to a data recovery service?
    I'd like to try to extract some of it myself via over-the-counter retrieval software, but I don't know whether to trust these programs.
    Any advice would be greatly appreciated.
    Thanks in advance.
    Peter McConnell

    Peter
    My 2 cents:
    I have used FileSalvage
    http://www.subrosasoft.com/OSXSoftware/index.php?mainpage=product_info&productsid=1
    to recover files from damaged disks. It works as advertised, within limits. Some files may be too damaged to recover. More importantly, you get to scan the disk before actually recovering, and it will give you a list of what it thinks it can recover.
    My experience was that it recovered approximately 85% of the data.
    YMMV but they do have a trial.
    Regards
    TD

  • Report Developed in Webi Rich Client Consuming more time in Data Retrieval

    Dear All,
    I am a BO consultant. Recently in my project I developed a report in Webi Rich Client. At the time of development and for some days afterwards the report was working fine (data retrieval time under 1 minute), but it is now taking much more time (increasing day by day, currently more than 11 minutes).
    Can anybody point out what could be the reason?
    We are using,
    1. SAP BI 7.0
    2. SAP BO XI 3.1 Edge
    3. Webi Rich Client Version: 12.3.0, Build 601
    This report is made on a Multiprovider (Sales).
    What are the important points to consider so that we can improve the performance of Webi reports?
    Waiting for a suitable solution.
    Regards,
    Arun Krishnan.G
    SAP BO Consultant

    Hi,
    Please come back here with a copy/paste of the two MDX statements from the MDA.log to compare the good/bad runtimes,
    and the two equivalent DPCOMMANDS clauses (good and bad) from the WebI trace logs.
    Can you explain what you really mean in the bold text above? Actually I didn't get you.
    Pardon, I have only 3 months' experience in BO.
    Regards,
    Arun

  • IDOC RSRQST is getting stuck frequently, it leads to delay in data loading

    hi,
    We are using SAP R/3 4.7E, with SAP NetWeaver 2004s as the BI server.
    On a daily basis we load data from R/3 to the BI system, and we are facing problems in data loading:
    IDoc RSRQST is getting stuck frequently, which leads to delays in data loading.
    Please guide me to resolve this issue.
    Thanks in advance .

    Thanks for your reply, kumarsen. I have gone through the referred forum.
    In my case, I have checked the tRFC and the connectivity between R/3 and BW, and the job is scheduled with priority C.
    Still, many IDocs are getting stuck in R/3 [approximately every 5 minutes]. We are releasing the IDocs through BD87 only.
    Please suggest a permanent solution.

  • Data retrieval for Sony handycam with 60G harddrive.

    Does Best Buy do data retrieval? I have a handycam with work video on it that accidentally got formatted. I know with a standard hard drive you can still retrieve data if there has been no recording since the formatting, but I am not sure about camera hard drives. If Best Buy does not do it, does anyone know where it can be done and the approximate cost? I live in the New Orleans area.
    I may try the same question in Computers if nobody knows. Thanks in advance for any guidance at all.

    Best Buy sends out for data retrieval. You can have it sent out through Geek Squad to get an estimate of the cost.
    Crystal

  • Data Retrieval Speed in Oracle Spatial vs. ESRI ArcSDE

    I would appreciate any opinions regarding data retrieval performance between Oracle Spatial and ESRI ArcSDE. Would an end-user (using ESRI software) experience significant differences in data retrieval speed depending on how the data is stored in Oracle (MDSYS.SDO_GEOMETRY versus ESRI binary/BLOB formats)? Knowing that the ESRI binary formats are tailored to their software front-end apps (ArcGIS, ArcMap, ArcCatalog, and ArcInfo), wouldn't this be a "non-issue" until the spatial dataset gets "large", and even then, wouldn't performance be (almost) equal if the spatial indexes were created properly?
    Thanks for your inputs,
    Bruce

    John,
    You can't do that type of query in SQL from SQL*Plus using SDEBINARY. However, you can perform spatial queries in ArcMap if you are using SDEBINARY. You can use the query builder to perform point-in-polygon type queries.
    Hope that helps.
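    For reference, a point-in-polygon query of the kind mentioned above becomes a plain SQL statement once the data is stored as SDO_GEOMETRY (a minimal sketch with hypothetical tables and columns; SDO_CONTAINS requires a spatial index on its first argument):

    -- Which county polygon contains a given customer site point?
    SELECT c.county_name
      FROM counties c, customer_sites s
     WHERE s.site_id = 1234
       AND SDO_CONTAINS(c.geom, s.location) = 'TRUE';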
    For my two cents, I think SDO_GEOMETRY gives you a more robust database to work with, because you have the added power of Oracle Spatial functions. If you are using SDEBINARY you are limited to only what you can do through ArcGIS.
    If you are concerned more about performance than accessibility, especially with a large number of users, then SDEBINARY might be the better choice.
    I love Oracle Spatial and am hoping that the performance issue will not be a serious one when we start putting ArcIMS-developed apps into production.
    Dave

  • Passing values to subreport in SSRS throwing an error - Data Retrieval failed for the report, please check the log for more details.

    Hi,
    I have a subreport called from the main report. The subreport is based on an MDX query against the SSAS cube. Some dimensions in the cube have values 0 and 1.
    When I try to pass '0' to the subreport as the parameter value, it gives the error "Data Retrieval failed for the report, please check the log for more details".
    Actually I am using a table to store these parameter values. In the main report I am calling this table (dataset) and passing these values to the subreport.
    So I have given [0],[1] and this works fine; when I give only either [0] or [1] it throws the error.
    Could you please advise on this.
    Appreciate any and all help.
    Thanks,
    Divya

    Hi Divya,
    Based on the current description, I understand that there is no issue when you pass two values from the main report to the subreport, while the issue occurs when passing one value.
    To narrow down the issue, I want to confirm whether the subreport can run if there is only [0] or [1] in the subreport. If so, it indicates the query statements in the subreport contain an error. If not, the issue occurs while passing values from the main report to the subreport. For further analysis, please post the details of the query statements of the subreport to the forum.
    Regards,
    Heidi Duan
    TechNet Community Support

Maybe you are looking for

  • Do I need to close ActiveX references?

    Hi, I am developing a library of VIs that use ActiveX to control APWin (for Audio Precision System Two) with LabVIEW where both are running on the same PC and I am wondering if I need to close ActiveX refnums when I'm finished along a particular heir

  • HP laserjet cm1312 MFP doesn't scan

    I have almost completed the installation of my new multifunction printer from HP and it prints and copies beautifully, but it doesn't scan. When I get to the point of the setup where I choose either USB or Network connection it doesn't show the print

  • How to add hard disc to time capsule

    Just got new Time Capsule which works well with my MacBook pro. it says a hard disc can be added by USB but when I plugged it in it is not recognised. Help please

  • ShowModalDialog and "Unauthorized Access" for Item Help Tips - APEX 2.2.0

    Hi, all, Well, I have a popup that is displayed using ShowModalDialog, and that popup has help tips for some items. When I view it on my computer, regardless of whether or not I'm logged into APEX or not, and regardless of what authentication scheme

  • Archiso-live-2009-12-08 live CD not usable

    I have recently downloaded the archiso-live-2009-12-08 live CD but I am unable to use it.  I did a md5 check of the iso, and that is ok.  It boots into something that remotely resembles a desktop.  I have a Nvidia 9600GT graphics card for which X ins