No Data to Fetch?

Hi all, and thanks for the forums!
I'm new to BOE and am trying to rebuild reports we have in Crystal Reports 9 (I'm told the Crystal Reports import is not available to us). We use the three-tier structure per our corporate architecture. A three-tier reporting structure leaves me a little baffled to begin with, as I'm accustomed to reports connecting via ODBC straight to the underlying database, but the universe is built (using Desktop Designer) and I can view the data in my tables. The database was recently migrated from SQL Server 2000 to SQL Server 2008, and I have no problems with it using the existing Crystal Reports 9 reports, with a VB.Net web application, or when viewing the table data in Designer.
I have built a few reports now and have a couple of questions.
1. One of my reports uses a BETWEEN in the WHERE clause on a datetime field, prompting for the report's start and end dates. When I refresh the data, Deski tells me there is no data to fetch, but I can copy the SQL into SQL Server Management Studio, execute it (replacing the @variables for start and end with the appropriate date range), and get back the expected number of records. Why would I be getting "No Data to Fetch"?
2. Another report simply has a WHERE clause without any prompts. After refreshing it, all I see is the report structure; it never shows the data in the report, but I can click View Data and see it there. How do I get the report to display the data?
Would someone be so kind as to assist a BO newbie with why these 2 things are occurring?
Thanks,
ejowens

The View/Structure option was it for viewing the data in that particular report...do I feel a little silly?  Yes I do!  Thanks for helping with that!
For the date prompt, I figured it out: the prompt window puts the prompts backwards. It asks for 'end date' first and then 'start date', so instead of my date range being 6/1/2011 through 6/30/2011, Deski reverses it to 6/30/2011 through 6/1/2011.
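For reference, a BETWEEN whose bounds are swapped matches nothing, which is exactly why the report returned no rows. A minimal sketch using SQLite with a hypothetical table and dates:

```python
import sqlite3

# Hypothetical table standing in for the report's data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2011-06-01"), (2, "2011-06-15"), (3, "2011-06-30")])

# Bounds in the right order: all three rows match.
ok = conn.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE order_date BETWEEN '2011-06-01' AND '2011-06-30'").fetchone()[0]

# Bounds swapped (end date first, as the prompt ordering caused):
# lower bound > upper bound matches nothing, hence "No Data to Fetch".
swapped = conn.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE order_date BETWEEN '2011-06-30' AND '2011-06-01'").fetchone()[0]

print(ok, swapped)  # 3 0
```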
So I guess the real question needs to be how do I get deski to put the prompts in an order that is intuitive so that my end users (as well as myself) do not get confused?
Regards,
ejowens
Edited by: ejowens on Jul 20, 2011 4:34 PM

Similar Messages

  • In Oracle RAC, if a user runs a SELECT query and, while the data is being fetched, the node serving it is evicted, how does failover to another node happen internally?

    In Oracle RAC, if a user runs a SELECT query and, while the data is being fetched, the node serving it is evicted, how does failover to another node happen internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the following scenario (how the data is fetched from the cube for reporting):
    I have a query on a multiprovider connected to cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either cube.
    CASE 1: RSRT stats with BIA on; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: RSRT stats with the BIA index disabled; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being transported?

    Hi,
    yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
    What you can do to get more details about the selection is to activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • How the data is fetched from the cube for reporting

    hi all,
    I need to understand the following scenario (how the data is fetched from the cube for reporting):
    I have a query on a multiprovider connected to cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either cube.
    CASE 1: RSRT stats with BIA on; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: RSRT stats with the BIA index disabled; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the records being transported?

    Hi Jay,
    Thanks for sharing your analysis.
    The only reason I can think of logically is that BWA holds the information from both the E and F tables in one place, and hence after selecting the records it is able to aggregate and transport them to OLAP.
    In the second case, since the E and F tables are separate, aggregation might be happening in OLAP, and hence you see a higher number of records.
    The experts in the BWA forum might be able to answer better if you post this question there.
    Thanks,
    Krishnan

  • Occasional message "No data to fetch in query" when viewing a webi report

    I am currently working on an existing BusinessObjects 3.1 install in which the only user able to access the data in a universe is the Administrator. Permissions on all universes and related connections grant data access and view-objects rights to all users. Even members of the administrators group are unable to access data in the Webi reports.
    If another member of the administrators group imports the universe with Designer and tries to view a list of values, they get the message 'No data to fetch' but can see table values. Consequently the only user able to use the reports is the Administrator. I am wondering if the universe was saved without the 'save for all users' box ticked before exporting.
    I am now out of ideas and waiting to hear from someone with a few.
    Thanks for all help in advance.

    Hi PG, no, all users, regardless of which group they belong to, receive the same message "No data to fetch on query". We were led to believe the problem is at the universe level because all users except the Administrator also get the message when trying to display a list of values, yet they can see the data if they click "Show table values". I should also mention that when users try to refresh a Webi report they can see and refresh lists of values from the prompt, but still get that error message.
    Because of the contradictory nature of these symptoms, everyone here is totally confused, and we are starting to suspect some kind of security setup at the database level.

  • Data to fetch from file

    Dear All,
    Which parameter decides how data is fetched from the file?
    Regards

    I think our friend is asking for the command (not a parameter) to fetch data from the datafile; it is SQL's SELECT command, which reads the datafile as the user process on the DB server.
    Regards
    Girish Sharma

  • TS3899 I receive my Gmail messages with a 15+ minute delay because there is no "push" option for a Gmail account, only "fetch". How can I fix that?

    I receive my Gmail messages with a 15+ minute delay. This is because there is no "push" option for a Gmail account, only "fetch". How can I fix that?

    Sign up for Google's paid Apps for Business service; you can then set things up using Exchange and you'll get push with Gmail. Otherwise, you can't get push with Gmail.

  • Read table data and fetch file from path specified in table column

    Hi,
    I have a situation where file path information is stored in a database table along with other columns. I need to read data from the SQL Server table row by row, go to the path identified by the record, and check for the existence of the file. If the file is available, copy it to a staging area, then read the next record. Repeat this process until the last record is fetched from the table. At the end, zip all the files found and FTP them to a different location. If a file is not found at the path specified, write the database record to a separate text file.
    At end of the process, send an email with records for which files were not found at specified location.
    Thanks
    Edited by: user4566019 on Oct 5, 2008 4:32 AM

    Hi,
    That is not complex to solve in ODI.
    Try the following:
    1) Create a package.
    2) Create a variable to receive the path as a parameter, and drag and drop it into the package as "Declared".
    3) Drag and drop the ODI tool "OdiFileMove", using the variable as the path in the tool's parameters. Use the parameters -ACTION=MOVE, -TIMEOUT=100 and -NOFILE_ERROR=NO (take a look at the ODI Tools Reference for other parameters).
    4) Create a "KO" (error) flow to write the variable to a text file. That will let the process know which file is missing.
    5) Generate a scenario from this package.
    6) Create a procedure.
    7) In the first step, put the query from SQL Server (set the logical schema to the right connection).
    8) In the Target tab, call the generated scenario, using the value returned by the query as its parameter. Also set -SYNC_MODE=1, which calls one scenario at a time.
    9) Create a step to zip the files in the staging directory.
    10) Create a step to do the FTP.
    11) Create a step to mail the text file listing the files not found. If you wish, you can check the file first to decide whether the mail is necessary, i.e. whether any file is missing.
    Does that help?
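    Outside ODI, the same workflow could be sketched in plain Python. All table, column, and file names here are hypothetical stand-ins for illustration:

```python
import os
import shutil
import sqlite3   # stand-in for the SQL Server source in the question
import zipfile

STAGING = "staging"
os.makedirs(STAGING, exist_ok=True)

# Hypothetical source table holding one file path per record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, file_path TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(1, "a.txt"), (2, "no_such_file.txt")])

# Create a sample file so one record's path actually exists.
with open("a.txt", "w") as fh:
    fh.write("demo")

missing = []
for rec_id, path in conn.execute("SELECT id, file_path FROM docs"):
    if os.path.isfile(path):
        shutil.copy(path, STAGING)        # stage the file
    else:
        missing.append((rec_id, path))    # no file: remember the record

# Zip everything that was staged.
with zipfile.ZipFile("found.zip", "w") as zf:
    for name in sorted(os.listdir(STAGING)):
        zf.write(os.path.join(STAGING, name), name)

# Write the records whose files were not found; the FTP upload and the
# notification e-mail would follow (ftplib / smtplib).
with open("missing_records.txt", "w") as fh:
    for rec_id, path in missing:
        fh.write(f"{rec_id}\t{path}\n")
```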

  • Ref Cursors / throwing data into a Ref Cursor as data is fetched

    I was wondering if anyone has executed an SQL statement and, as each row is fetched back, done some checks and processing to decide whether the row is valid to return, based on the values being fetched.
    For example, I'm trying to tune an SQL statement. I have an EXISTS clause in the WHERE statement with a nested subquery that takes some parameters. I am attempting to move that statement to a function in a package, called in the SELECT statement. As I fetch each row back, I want to check some of the selected values and, if they meet the criteria, execute the function to see if the data exists. If it does exist, I want the fetched row returned in the ref cursor; if it does not, I don't want the fetched row returned.
    Right now the data has to go into a REF cursor because it is output to the Java application as a result set.
    I've found many examples where you take a SELECT statement and put the results in a ref cursor. But I want to go a step further: before I put each row in the ref cursor, I want to do some processing to decide whether the fetched row belongs there.
    If someone has a better idea to accomplish this, I'm all ears. As I say, I'm doing this only for the sake of database tuning, and I think it will speed things up. The EXISTS clause works and runs fast from an end-user standpoint, but when it processes on the database with the nested subquery, it is slow.
    Here's an example of something that might be a problem (notice the nested subquery). I moved the nested subquery to a function in a database package and call it in the SELECT statement. As I process each row, I want to check some values before the function call executes. If a row meets the criteria and the function returns data, the row is OK to fetch and display in the ref cursor. If it does not meet the criteria, or the function returns no data, then I don't want the fetched row from the main query returned:
    SELECT EMPNO,
           FIRST_NAME,
           LAST_NAME
      FROM EMP E,
           DEPT D
     WHERE E.DEPTNO = D.DEPTNO
       AND EXISTS (SELECT 'X'
                     FROM MANAGER M
                    WHERE M.MANAGER_ID = E.MANAGER_ID
                      AND MANAGER_TYPE IN (SELECT MANAGER_TYPE
                                             FROM MANAGER_LOOKUP ML
                                            WHERE ML.MANAGER_TYPE = M.MANAGER_TYPE))
    Any help or ideas of other things to try is appreciated. Keep in mind that I am returning this data to the Java application so, throwing the data to a Ref Cursor in the PL/SQL is the ideal method.
    Chris
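    The fetch-time filtering described above can be sketched generically. This Python generator (hypothetical data, not PL/SQL) shows the idea of a cheap per-row check followed by the expensive existence test before a row is allowed into the result:

```python
def fetch_filtered(rows, fast_check, expensive_exists):
    """Yield only rows that pass a cheap check and, only then, the
    expensive existence lookup -- mimicking filtering rows as they are
    fetched, before they reach the result set / ref cursor."""
    for row in rows:
        if not fast_check(row):
            continue                 # cheap test failed: skip immediately
        if expensive_exists(row):
            yield row                # both tests passed: row is returned

# Hypothetical stand-ins for the EMP/MANAGER example in the question.
employees = [
    {"empno": 1, "manager_id": 10},
    {"empno": 2, "manager_id": None},
    {"empno": 3, "manager_id": 20},
]
managers_with_type = {10}            # manager_ids satisfying the EXISTS test

result = list(fetch_filtered(
    employees,
    fast_check=lambda r: r["manager_id"] is not None,   # cheap column test
    expensive_exists=lambda r: r["manager_id"] in managers_with_type,
))
print([r["empno"] for r in result])  # [1]
```

The point of the ordering is that the expensive test only runs for rows that survive the cheap one, which is the tuning effect the question is after.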

    Ref cursors are neither required nor desirable when writing a Java database application. Cursors are mentioned only once in the JDBC documentation reference guide, in the section "Memory Leaks and Running Out of Cursors".
    In a word, cursors are just plain ridiculous; in fact I have never used them in my 15+ years of application development practice:
    http://vadimtropashko.wordpress.com/cursors/

  • From where the sales office data is fetched in Activity moniter for an empl

    Hello ,
    An activity is maintained for an employee who is maintained in CRM. From which table or transaction code is the sales office data displayed in the activity fetched? It is displayed automatically while monitoring the activity in the Activity Monitor.
    If I want to change the sales office, where do I change it?
    Regards,
    divya

    Hi Natarajan,
          Thanks for your reply.
    So what exactly do you want me to do if I want to change the sales office, which is displayed incorrectly for the business partner, i.e. the employee?
    Please tell me the procedure or path to change the sales office.
    Could you also please tell me what an organizational data profile is?
    Regards,
    Divya

  • Siebel data bean fetch data in chunks, executeQuery2 ?

    Hi,
    I am able to retrieve all records of the Employee business component using the Siebel data bean. Below is the code snippet:
    oPsDR_Header = dataBean.newPropertySet();      // property set listing the fields to activate
    lPS_values = dataBean.newPropertySet();        // property set to receive field values
    busComp.clearToQuery();                        // reset any previous query state
    busComp.setViewMode(AllView);                  // query without visibility restriction
    busComp.activateMultipleFields(oPsDR_Header);  // activate the listed fields
    busComp.executeQuery(true);                    // execute; all matching records become available
    Through this code I am getting all Employee records. My requirement is to fetch the records in chunks, because with several thousand records the memory requirement for my process would be very high. Can I configure anything in the data bean or on the Siebel server to achieve this? How is busComp.executeQuery different from busComp.executeQuery2?
         Thanks very much for any help!
    Regards,
    Ajay
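    As a generic illustration of the chunked-fetch pattern being asked for (this is a plain Python sketch, not the Siebel API):

```python
from itertools import islice

def in_chunks(records, size):
    """Yield `size` records at a time so the client never holds the
    whole result set in memory at once."""
    it = iter(records)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return                       # stream exhausted
        yield chunk

# Stand-in for a stream of several thousand employee records.
sizes = [len(chunk) for chunk in in_chunks(range(10), size=3)]
print(sizes)  # [3, 3, 3, 1]
```

Each chunk is processed and discarded before the next one is pulled, so peak memory is bounded by the chunk size rather than the result-set size.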

    Hi!
    You need to use Siebel Tools to determine the Business Component Field Mappings to the Applet Controls that you're looking at.
    Regards,
    Oli

  • Data being fetched bigger than DB Buffer Cache

    DB Version: 10.2.0.4
    OS : Solarit 5.10
    We have a DB with 1 GB set for DB_CACHE_SIZE. Automatic Shared Memory Management is disabled (SGA_TARGET = 0).
    If a query is fired against a table and will retrieve 2 GB of data, will that session hang? How will Oracle handle this?

    Tom wrote:
    If the retrieved blocks get automatically removed from the buffer cache after they are fetched, as per the LRU algorithm, then Oracle should handle this without any issues. Right?
    Yes, no issues: the size of a fetch (e.g. selecting 2 GB worth of rows) does not need to fit completely in the db buffer cache (only 1 GB in size).
    As Sybrand mentioned, everything in that case will be flushed as newer data blocks are read... and that will be flushed again shortly afterwards as even newer data blocks are read.
    The cache hit ratio will thus be low.
    But this will not cause Oracle errors or problems; performance simply degrades as the data volume being processed exceeds the capacity of the cache.
    It is like running a very large program that needs more RAM than is available on a PC. The "extra RAM" comes from the swap file on disk. The program will be slow as its memory pages (some on disk) need to be swapped into and out of memory as needed. It would be faster if the PC had sufficient RAM, but the o/s is designed to deal with exactly this situation, where more RAM is needed than is physically available.
    Similar situation with processing larger data chunks than what the buffer cache has capacity for.
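    A toy LRU cache makes the point concrete; the block counts below are arbitrary stand-ins for a 1 GB cache and a 2 GB scan:

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU cache illustrating why a 2 GB scan fits through a 1 GB
    cache: older blocks are simply evicted as newer ones are read."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.physical_reads = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # hit: mark most recently used
            return
        self.physical_reads += 1                # miss: read from "disk"
        self.blocks[block_id] = True
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used

cache = BufferCache(capacity_blocks=100)        # the "1 GB" cache
for block in range(200):                        # the "2 GB" full scan
    cache.read(block)

print(len(cache.blocks), cache.physical_reads)  # 100 200
```

The scan completes despite never fitting in the cache; the cost shows up as physical reads (a low hit ratio), not as an error.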

  • Help:Using Order by for a date column fetching wrong results

    Hi all,
    In my table I have a date column with the values
    13/06/2007 09:24:00
    31/05/2007 10:30:00
    I am selecting this column along with some other columns, with an ORDER BY on the date column at the end.
    But I am getting the rows in reverse order, as above. I am using to_char(start_time, 'dd/mm/yyyy hh24:mi:ss') in the SELECT.
    If I put to_date(start_time) in the ORDER BY, the values come back correctly, but I am not allowed to use to_char in the SELECT statement.
    Kindly suggest where it went wrong.
    Thanks in advance.

    Hi, thanks for the update. Actually that column is an Oracle DATE type. Earlier I simply gave the column name in the ORDER BY clause. That fails only when two entries fall in different months; otherwise it is perfect.
    So I put to_date in the ORDER BY, but now I am facing a problem selecting that column.
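    The underlying pitfall is sorting date strings lexicographically instead of chronologically. A quick Python illustration (outside Oracle, using the two values from the question):

```python
from datetime import datetime

rows = ["13/06/2007 09:24:00", "31/05/2007 10:30:00"]

# Lexicographic sort of dd/mm/yyyy strings: the day field dominates,
# so 13 June sorts before 31 May -- the "reverse" order in the report.
as_text = sorted(rows)

# Chronological sort: parse back to real timestamps first.
as_dates = sorted(rows, key=lambda s: datetime.strptime(s, "%d/%m/%Y %H:%M:%S"))

print(as_text)   # ['13/06/2007 09:24:00', '31/05/2007 10:30:00']
print(as_dates)  # ['31/05/2007 10:30:00', '13/06/2007 09:24:00']
```

This is why ordering by the raw DATE column (or a yyyy/mm/dd-style format whose lexicographic order matches chronological order) is the safe choice.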

  • Can data be fetched from Oracle Database to ApEx websheets?

    Hi,
    Oracle 11g R2
    ApEx 4.2.2
    We need to give the users an MRU form for their customized data; the number of columns is defined dynamically by the end user. We thought of using ApEx's standard tabular forms, but they cannot have dynamic columns. We also thought of building manual tabular forms using ApEx collections, but a collection is limited to a maximum of 50 character columns, 5 numeric, 5 date, and 1 each for CLOB and BLOB; the number of columns at runtime may exceed these limits, so we ruled that out. We then thought of using ApEx websheets. We know that data can be fetched from the Oracle database into websheet reports, but those reports cannot be edited. The data grid does allow editing, but I couldn't find a way to fetch data from the database into it. I understand that the data in ApEx websheets can be queried from ApEx's meta tables (the apex$_ws_... views) and could, in a back-door way, be updated into the business tables, but we also need to fetch data from the Oracle database into the grid. I do not want to insert into the apex$_ws tables, as Oracle does not support that. Is there any other possibility?
    Thanks in advance.
    Regards,
    Natarajan

    Nattu wrote:
    Thanks for your reply. Actually the data is fully customized by the end user. These custom attributes are really stored as rows, and a complex piece of code returns them as columns at run time. So the number of columns differs from user to user, based on the number of rows they have.
    They'd never have got that far if I'd been around...
    They now want to edit them in a tabular form as well.
    Well once you've implemented one really bad idea, one more isn't going to make much difference.
    It rather sounds like the people responsible for the design of both know nothing about relational databases or user interfaces. You need to be very, very worried.

  • Limit of data Volume fetched by RFC.

    Hi all,
    I want to know the maximum number of records (volume) that can be fetched by any RFC call at a time. Is there any limitation? Is this count generic, or does it vary from RFC to RFC?
    Regards,
    Anirban.

    I think it has more to do with the time required to fetch the data than with the volume of data transferred in RFC calls. If the time required exceeds the RFC timeout set in the profile parameter, the RFC call will fail.
    Regards
    Raja
