10g - merge 2 fact tables

Hi, experts,
There are two tables.
columns in A:
CK1
CK2
CK3
X varchar
Y varchar
Z varchar
FA1 double
FA2 double
FA3 double
columns in B:
CK1
CK2
CK3
FB1 double
FB2 double
FB3 double
CK1, CK2, and CK3 are the common keys of table A and table B.
When I build a table/pivot table having the columns CK1, CK2, CK3, X, Y, Z, FA1, FA2, FA3, FB1, FB2, FB3,
I find that FB1, FB2, and FB3 all show null.
How can I solve this?
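
In SQL terms, the merged result being asked for corresponds to something like the following (a minimal sketch only; the physical tables are assumed here to be named simply A and B, as in the column lists above):

-- Sketch: join the two fact tables on their common keys so that
-- FB1..FB3 line up with FA1..FA3 instead of coming back null.
SELECT a.CK1, a.CK2, a.CK3,
       a.X, a.Y, a.Z,
       a.FA1, a.FA2, a.FA3,
       b.FB1, b.FB2, b.FB3
FROM   A a
LEFT JOIN B b
       ON  b.CK1 = a.CK1
       AND b.CK2 = a.CK2
       AND b.CK3 = a.CK3;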

I used to work on a very primitive tool called Informatica Power Analyzer (10 years back, 2001/2002), and OBIEE is a reincarnation of that tool. I think this feature is available in OBIEE 11g (pulling columns from different subject areas via implicit common dimension tables).
Cheers

Similar Messages

  • Obi 10g, merge the table cells horizontally

    hi, experts,
    is it possible to merge the table cells horizontally?
    thank you very much!

    Forreging,
    Try removing the left border of the cells using the CSS options in the column properties.
    Mark posts promptly.
    -bifacts
    http://www.obinotes.com

  • OBI 10g merge 2 time dimensions on the same table/graph in analysis

    Hi,
    My problem is: I have 2 different analyses (both working fine, in OBI 10g), and each uses a different time dimension:
    time_dim_1 | fact1
    time_dim_2 | fact2
    I want to merge them to have something like the following:
    time_dim | fact1 | fact2
    What I managed to get so far is something like this:
    time_dim1 | time_dim2 | fact1 | fact2 (if we suppose there are 12 rows in each time dimension, this gives 12*12 = 144 rows instead of just 12)
    Here is some more explanation of how I set up my repository:
    My fact table "issues" is made of the following columns:
    issue_id, issue_type, issue_create_date, issue_end_date
    I also have a table "calendar" with year, quarter, month and full_gregorian_date.
    In the physical layer I created 2 aliases for the "calendar" table to join with the fact table using "issue_create_date" and "issue_end_date".
    In the business model I created the 2 related time dimensions.
    Everything is working fine so far, I was able to get 2 different analyses :
    - time dimension (joined with "issue_CREATE_date") | count of issues (with aggregation rule "count" on issue_id)
    to see how many new issues are CREATED through time (are there more issues created or less)
    - time dimension (joined with "issue_END_date") | count of issues (with aggregation rule "count" on issue_id)
    to see how many new issues are ENDED through time (when are more issues ended)
    What I want is a third analysis like this:
    - time dimension | count of created issues | count for ended issues
    Thanks for your help.
    Florence

    Did you set the content level as well?
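
    For comparison, the combined result described here corresponds to SQL along these lines (a sketch only, using the "issues" and "calendar" tables described above; the BI server would need to join the calendar twice, once per role-playing date):

    -- Sketch: count issues created and issues ended per calendar date.
    SELECT c.full_gregorian_date,
           COUNT(DISTINCT cr.issue_id) AS created_issues,
           COUNT(DISTINCT en.issue_id) AS ended_issues
    FROM   calendar c
    LEFT JOIN issues cr ON cr.issue_create_date = c.full_gregorian_date
    LEFT JOIN issues en ON en.issue_end_date    = c.full_gregorian_date
    GROUP  BY c.full_gregorian_date;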

  • NQS ERROR:14025 NO FACT TABLE EXISTS -after migrating from 10g to 11g

    We get NQS ERROR:14025 NO FACT TABLE EXISTS AT THE REQUESTED LEVEL OF DETAIL in all the reports after migrating from 10g to 11g.
    We then applied the one-off patch for Bug 11850704 for the error <<NQS ERROR:14025 NO FACT TABLE EXISTS AT THE REQUESTED LEVEL OF DETAIL>>.
    But after applying the patch we are still getting the same error.
    However, the patch instructions file contains post-deployment instructions to create a variable:
    Post Install Instructions:
    - To revert to the 10g navigator behavior for handling conforming dimensions,
    you must set the following session variable via an init block in the RPD:
    NO_FORCE_TO_DETAIL_BIN=1
    The default value for the above variable is 0.
    - Restart all servers (Admin Server and all Managed Server(s))
    But we didn't find the process to create the specified variable and initialization block in the RPD.
    Can you please suggest how to proceed?
    Our questions are:

    Hi
    Refer to the thread below:
    obiee 11g non-conforming dimensions and nQSError 14025
    It might help you.
    Thanks,
    satya
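
    For reference, the variable from the patch readme is created in the Administration Tool under Manage > Variables (a new session initialization block, with a data target mapped to the variable). A minimal sketch of the init-block query follows; the DUAL query is an assumption, any single-row source works:

    -- Hypothetical init-block data source: returns 1, which is
    -- mapped to the session variable NO_FORCE_TO_DETAIL_BIN.
    SELECT 1 FROM DUAL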

  • Merging Multiple fact tables and creating a BIA target

    Hi folks,
    We are using Data Services with BIA.
    We want to merge multiple fact tables and create a single cube on BIA.
    When we tried to do that, we got an error message that we cannot merge multiple fact tables.
    Any pointers?
    Poonam

    You could keep the cubes individually on the BIA and then have a MultiProvider on top of them,
    or create separate universes on the cubes, merge the universes in BO, and hit the BIA that way.

  • Deploying Factless Fact Tables in 10G

    Hello - We are trying to deploy the same factless fact tables we used in OWB 9i to 10G, and we are receiving validation errors saying that every cube must have a measure. We do not want to create a null column just to work around something that worked fine in a previous version of the tool, and we have quite a few factless fact tables in our model.
    Are there any thoughts on this?
    Thanks,
    Kristi

    Hi,
    I have already faced that kind of fact table; since I also needed to count the number of intersections between dimensions (or sum them), I used a measure that always takes 1 as its value, and that did the trick.
    Regards
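
    In SQL terms, the constant measure amounts to something like this (a sketch; the table and key names are hypothetical):

    -- Sketch: expose a constant 1 as a measure so the cube validates;
    -- SUM(event_count) then counts the dimension intersections.
    SELECT dim1_key, dim2_key, 1 AS event_count
    FROM   factless_fact;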

  • #datasync error - could that come from merging on fields from fact tables instead of dimensions tables?

    I'm reporting out of two universes published by Epic
    1. Warehouse - Patient
    2. Warehouse - Transactions
    I wanted to join two queries based on the primary key for a hospital encounter. Following the tutorial seemed pretty straightforward until I got to displaying data from a merged query.
    The table displayed results from Query1, but adding fields from Query2 wiped out all the data in the table, leaving only #datasync in each field.
    My workaround to get fields from both queries displayed (see screenshot)
    1. Merge queries on *two* fields - primary key of hospital encounter and primary key of patient
    2. Create new variable
    3. Make variable type Detail
    4. Associate variable with hospital encounter key from Query1
    5. Set formula equal to a field from Query2
    I'm not sure why this workaround works or if what I'm experiencing is a symptom of something larger. Could this workaround be needed because I am merging on fields from a fact table instead of fields from a dimension table?
    Thanks in advance

    You have defined a merged dimension on 'company code-Key', which is common to both BEx queries.
    You are able to bring the merged dimension and other objects from Query_One into the report block without any issue, but when adding objects from Query_Two you get the error #DATASYNC.
    In this case the objects from Query_One sync with the merged dimension object without any issue because they were added to it first.
    Similarly, if you add the merged dimension and objects from Query_Two first, you find no issue, because the objects from Query_Two sync first.
    Once objects from one query (Query_One/Query_Two) have been added to a merged dimension, trying to add objects from the other query gives the #DATASYNC error, because data in the other query cannot sync with the initial result set; this is a known behavior.
    There are two workarounds:
    1) Merge all common dimension/characteristic objects: only the merged dimensions' data will sync with the initial query; un-merged dimension/characteristic objects will still give the #DATASYNC error.
    2) Create detail/attribute objects at report level for all uncommon characteristic/dimension objects from Query_Two, referring to the merged dimension. Then add these newly created detail/attribute objects to the table block holding the initial query's objects and the merged dimension; with this you see the result set of Query_Two in the table block, not the #DATASYNC error.
    ~ Manoj

  • Mapping dimension table to fact table in admin tool 10g

    Hi
    I have a criteria that, when run, uses fact1 as the fact table and gives a result; when I add a new column to this criteria, an extra dimension (dim1) gets added to the backend query and fact1 changes to fact2.
    The problem I am facing is that both should return the same amounts for records, but due to the new column from dim1 and the new fact table fact2, the amounts vary and inaccurate results come back.
    When I checked the sources for the fact in the logical layer, I found these facts (fact1, fact2), where fact1 is not mapped to dim1 and fact2 is mapped to dim1.
    Will mapping dim1 to fact1 solve my problem? And what would be the steps to add/map dim1 to fact1?
    Please suggest.

    I just checked the settings again; here are the details once more.
    I have a criteria where, when it is run, the backend query is formed with the f1 fact table.
    For the same criteria, when I add a column c1, the backend query is formed with the f2 fact table; f1 is no longer there, and one new dimension d1, containing c1, gets added.
    In both cases the results are different; the expected result is the same amounts.
    Note:
    d1 is connected to f2 (checked in the physical diagram).
    d1 is not connected to f1 (checked in the physical diagram).
    When I checked the connections between the three tables in the physical diagram, only d1 and f2 are connected; f1 is not connected to either table. How do I go about this issue?
    Please suggest.

  • Self join with fact table in Obie 10G

    I am a newbie to OBIEE. I have a development requirement as follows:
    I need to find the supervisor's designation with the existing star RPD design. The explanation is below.
    DIM_EMPLOYEE (Row_Wid) ----(Employee_Wid)---- WORKER_FACT ----(Supervisor_Wid)---- DIM_Supervisor (Row_Wid)
                                                  WORKER_FACT ----(Desig_Wid)--------- DIM_Designation (Row_Wid)
    Three dimensions are joined to the fact to get the employee, his supervisor, and the designation of the employee. Now I want to get the supervisor's designation; how is it possible? The employee and supervisor dimensions are the same W_EMPLOYEE_D table, joined with the fact as the aliases DIM_EMPLOYEE and DIM_SUPERVISOR. How do I self-join with the fact to get the supervisor's designation? I do not have any supervisor_desig_wid in the fact table. Any help is deeply appreciated.

    Yes. Duplicate the fact table (as an alias), create a primary key on the new fact-table alias, and alias the dimension table likewise; then you can do your data modelling as usual.
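
    A sketch of the physical SQL such an alias model would need to produce (table and column names are taken from the diagram above; FACT2 is the hypothetical second fact alias):

    -- Sketch: alias the fact a second time (FACT2), keyed on the
    -- supervisor, so the supervisor's own designation row can be joined.
    SELECT f.employee_wid,
           f.supervisor_wid,
           des.row_wid AS supervisor_desig_wid
    FROM   worker_fact  f
    JOIN   w_employee_d sup ON sup.row_wid = f.supervisor_wid
    JOIN   worker_fact  f2  ON f2.employee_wid = sup.row_wid   -- FACT2 alias
    JOIN   dim_designation des ON des.row_wid = f2.desig_wid;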

  • LTS of fact table in obiee 10g

    Hi
    In the LTS for the fact table, I want to set the level to the lowest level of the dimension.
    How can I achieve this?
    Please suggest.

    Hi,
    To set the LTS level you first need a dimension hierarchy for that table. After that, double-click the LTS >> click on the Content tab >> in the level drop-down you can select which level you want to assign. So now select the lowest level from the drop-down and save.
    Mark if Helpful/correct.
    Thanks

  • OBIEE 10g: Problem on fact table mapping

    Hi to everyone,
    I have this situation:
    1) I have 2 fact tables in the physical model (FHRCTR and HCTOT); the two tables have the same FK columns.
    2) The structure is: FHRCTR[month_fk, transaction_type, measure A], HCTOT[month_fk, measure B]. 'transaction_type' is an attribute of the table, not a dimension.
    3) I have created a logical fact table with these two tables as sources.
    4) Each field of the logical table is mapped to the corresponding field in both physical tables, except for the 'transaction_type' column, which has only FHRCTR as its source.
    5) I've created a report with the following columns: month_fk, transaction_type, measure_A, measure_B.
    The problem is: if I create the report with the columns in point 5, measure_B returns null values while measure_A has the correct value.
    Instead, if I create the report without the 'transaction_type' column, the values of measure_B and measure_A are both correct.
    I remind you that 'transaction_type' in the logical fact table is mapped only to the FHRCTR table because it doesn't exist in HCTOT.
    Can you help me to solve this issue?
    Thanks

    832596 wrote:
    Can you help me to solve this issue?
    I guess when you set some aggregation rule for 'transaction_type' in the repository (for example, MIN), then all will be OK. But you'll get results that you don't expect.

  • Load fact table with null dimension keys

    Dear All,
    We have OWB 10g R2 and a ROLAP star schema. In our source system some rows don't have all attributes populated with values (null values), and these empty attributes are dimension (business) keys in the star schema. Is it possible to load the fact table with such rows (some dimension keys null) in the OWB mappings? We use the cube operator in mappings.
    Thanks And Regards
    Miran

    The dimension should have a row indicating UNKNOWN; this will have a business key outside of the normal range, e.g. -999999.
    In the mapping, the missing business keys can then be NVL'd to -999999.
    Cheers
    Si
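
    As a sketch, the mapping expression looks like this (column and table names are hypothetical):

    -- Sketch: route null business keys to the UNKNOWN dimension row.
    SELECT NVL(src.customer_bk, -999999) AS customer_bk,
           src.amount
    FROM   staging_fact src;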

  • How do I use Derived Table to dynamically choose fact table

    How do I use the Derived Table functionality to dynamically choose a fact table?
    I am using BO XI R2 querying against a Genesys data mart kept in Oracle 10g. The data mart contains aggregated fact tables at different levels (no_agg, hour, day, week, etc.). I would like to build my universe so that if the end user chooses a parameter to view reports at daily granularity, the daily fact table is used; if they choose hourly granularity, the hourly fact table is used; and so on.
    I tried using dynamic SQL in Oracle syntax, but the Business Objects universe didn't like that type of coding.
    The tables look something like this:
    O_LOB1_NO_AGG o
    inner join V_LOB1_NO_AGG v on o.object_id = v.object_id
    inner join T_LOB1_NO_AGG t on v.timekey = t.timekey
    Likewise, in the 'hour', 'day', 'week', etc. fact tables, the primary-key to foreign-key names and relationships are the same, and the columns in each O_, V_, T_ fact table are the same or very similar (just aggregated at different levels of time).
    I was thinking of going a different route and using aggregate awareness, but there are many lines of business (20+) and multiple time dimensions (7), and I believe aggregate awareness would require me to place all the relevant tables in the universe as separate objects, which would create a large universe with many table objects and not be maintenance-friendly. I was also going to choose the line of business (LOB) dynamically in the derived tables, based on the end user choosing a parameter for LOB, but that is out of scope for my current question; it just points you down the train of thought I am travelling. Thanks for any help you can provide!

    You can create a derived table containing a union like the following:
    select a,b,c from DailyFacts where (@prompt('View'....) = 'Daily') and (<rest of your where conditions here if necessary>)
    union
    select a,b,c from MonthlyFacts where (@prompt('View'....) = 'Monthly') and (<rest of your where conditions here if necessary>)
    union
    select a,b,c from YearlyFacts where (@prompt('View'....) = 'Yearly') and (<rest of your where conditions here if necessary>)
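    Once the prompt value is substituted, only one branch's comparison is true; the other branches reduce to always-false predicates, which the database can usually filter out cheaply, so in effect only the chosen fact table is queried.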
    I assume that you are familiar with the @prompt syntax
    Regards,
    Stratos

  • Issue with Multiple LTS for a fact table and filters

    Hello,
    I am facing an issue with obiee 10g.
    In my model, I have a huge fact table F1 (partitioned and indexed). The average response time of the queries targeting it was ~30-60 seconds, which did not really convince our end users.
    So we decided to create a materialized view that removes some dimensions that are not used by default but might be used if the end user adds some filters. I added the materialized view in the physical layer and in the corresponding logical table source.
    I then tried to see if it works, but I was a bit surprised by the result. Indeed:
    -> If the report does not reference a truncated dimension, it targets the materialized view. -> Perfect
    -> If the report does reference a truncated dimension in the columns, it targets the Fact Table. -> Perfect
    -> If the report references a truncated dimension in the filters, it targets the materialized view. For this reason the filter is never resolved and no join on the dimension table is applied, even though it exists in the generated logical SQL. -> KO.
    A suggestion could be to add the filters as columns, but I am not satisfied with that answer because the materialized view would then never be used.
    Another suggestion could be to use query rewrite, but I'd like to keep full control over the generation of the queries.
    Does someone know whether the filters are evaluated when determining which LTS to use? How can I force this evaluation?
    Regards,

    Hi,
    If I understand your description correctly, your materialized view skips some (infrequently used) dimensions. However, when you reference these skipped dimensions in filters, the queries still hit the materialized view and fail because these values do not exist there. In this case, you could resolve it as follows:
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources set the content tabs properly. (Yes, I think this is it).
    When you skipped some dimensions, the grain of the new fact source (the materialized view in this case) is changed. For example:
    Say a fact is available with the keys for Product, Customer, Promotion dimensions. The grain for this is Product * Customer * Promotion
    Say another fact is available with the keys for Product, Customer. The grain for this is Product * Customer (In fact, I would say it is Product * Customer * Promotion Total).
    So in the second case, the grain of the table is changed. So setting appropriate content levels for these sources would automatically switch the sources.
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar
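
    As an illustration of the grain change described above, a materialized view that drops one dimension key might look like this (a sketch with hypothetical names):

    -- Sketch: this MV's grain is Product * Customer (Promotion Total),
    -- so its LTS content level must be set to the Promotion grand total.
    CREATE MATERIALIZED VIEW fact_mv AS
    SELECT product_key,
           customer_key,
           SUM(amount) AS amount
    FROM   f1
    GROUP  BY product_key, customer_key;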

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects, stored in a SQL data type TABLE OF <object-type>, to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle always uses only a single table-function instance,
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is because my data are not globally available, but exist only in the main session?
    Can someone confirm that it's not possible to start multiple parallel table functions selecting from a SQL data type TABLE OF <object> storage?
    Here's an example sqlplus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object(
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted this to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There I also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application has a two-stage data selection.
    The 1st select fetches minimal context base data,
    mainly to evaluate which driving data records are due.
    The 2nd select fetches all the "real" data to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it does the stage-1 select first, then the stage-2 select, based on the stage-1 results.
    The first implementation of the application did the stage-1 select in the main session of the PL/SQL code.
    For the stage-2 select, the work was dispatched to multiple parallel table functions (in multiple worker sessions) for the "real work".
    That worked.
    However, there was a flaw:
    between records from the stage-1 selection and records from the stage-2 selection there is a 1:n relation (via a key / foreign-key relation).
    That means for one resulting record from the stage-1 selection, there are x records from the stage-2 selection.
    That forced me to use "cluster curStage2 by (theKey)",
    because the worker sessions need to evaluate the overall status for a context of one record from stage 1 and x records from stage 2
    (so each needs to have its x stage-2 records together).
    This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of it).
    So I wanted to shift the invocation of the worker sessions to the stage-1 selection;
    then I wouldn't need the "cluster curStage2 by (theKey)" anymore.
    But I also need to update the primary driving data,
    so the stage-1 select is a 'select ... for update ...',
    and you can't use that in a CURSOR for table functions (which I can understand; it's clear why that's not possible).
    So I have to do my stage-1 selection in two steps:
    1. 'select for update' in the main session, collecting the result in a SQL collection.
    2. Pass the collected data to parallel table functions.
    And in step 2 I noticed that it doesn't start multiple parallel table-function instances.
    As a workaround
    - if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))' -
    I would need to select again on the base tables, driven by the SQL collection data.
    But before I do so, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or whatever you can get "out of another box" :-)
    -------------------- snap ---------------------------------------------
    - many thanks!
    rgds,
    Frank


  • Fact tables are compatible with the query request

    Hi,
    I am using 11g. In 10g this worked fine without any error. After migrating from 10g to 11g, the error below is returned. What is the problem and how do we fix it? Could you please let me know. Thanks.
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request FACT_AGENT_TIME_STATISTICS.FKEY. (HY000)
    SQL Issued: SELECT s_0, s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8, s_9, s_10 FROM ( SELECT 0 s_0, "Team Performance"."DIMENSION - LOCATION"."COUNTRY CODE" s_1, "Team Performance"."DIMENSION - TIME"."BUSINESS DATE" s_2, CASE WHEN "Team Performance"."DIMENSION - TEAM"."ROLE" ='TEAM LEADER' THEN 'Team Leader' ELSE "Team Performance"."DIMENSION - TEAM"."TEAM" END s_3, CASE WHEN '30 Mins Interval' ='15 Mins Interval' THEN "Team Performance"."DIMENSION - TIME"."15 Mins Interval" ELSE "Team Performance"."DIMENSION - TIME"."30 Mins Interval" END s_4, ((SUM("Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - ACD CALL HANDLING"+"Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - AFTER CALL WORK (ACW)")+SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."AVAILABLE TIME"))/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_5, (SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."AUX3 - TRAINING"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX4 - MEETING"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX5 - PROJECT"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX6 - COACHING")/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_6, (SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."STAFFED TIME")/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_7, COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE" s_8, MIN("Team Performance"."DIMENSION - TIME"."FULL DATE TIME") s_9, REPORT_AGGREGATE(((SUM("Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - ACD CALL HANDLING"+"Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - AFTER CALL WORK (ACW)")+SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."AVAILABLE TIME"))/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") BY CASE WHEN '30 Mins Interval' ='15 Mins Interval' THEN "Team Performance"."DIMENSION - TIME"."15 Mins Interval" ELSE "Team Performance"."DIMENSION - TIME"."30 Mins Interval" END, CASE WHEN "Team Performance"."DIMENSION - TEAM"."ROLE" ='TEAM LEADER' THEN 'Team Leader' ELSE "Team Performance"."DIMENSION - TEAM"."TEAM" END) s_10 FROM "Team Performance" WHERE (("DIMENSION - TIME"."BUSINESS DATE" BETWEEN timestamp '2012-11-23 00:00:00' AND timestamp '2012-11-23 00:00:00') AND ("DIMENSION - LOCATION"."COUNTRY CODE" = 'US') AND ("DIMENSION - LOCATION".DEPARTMENT = 'CSG')) ) djm FETCH FIRST 65001 ROWS ONLY

    Back up your repository before following these steps:
    1. Reduce the problem report to the minimum number of columns required to generate the error. This will usually identify which dimension and which fact are incompatible.
    2. Open the repository in the Administration tool and verify that the dimension and fact table join at the physical layer of the repository.
    3. Verify that there is a complex join between the dimension and the fact in the business layer.
    4. Check the logical table sources for the fact table. At least one of them must have the Content tab set to a level in the hierarchy that represents the problem dimension. This is usually the detailed level.
    5. Check the logical table source Content tab for the dimension table. Unless there is a valid reason, this should be set to blank.
    6. Save any changes to the repository.
