Aggregate in ODI - Part 2

I can't figure out how to aggregate in ODI. I have been trying to produce these results in ODI for two weeks now, and I am missing something; I don't believe it should be this difficult. Below is the SQL I run in SQL*Plus to produce test results, followed by my scenario:
sql:
select project_code
from
  (select project_code, floor(sum(unit_price*quantity)) as total
   from table1
   where order_base_number = 8004
   group by project_code)
where total =
  (select max(floor(sum(unit_price*quantity)))
   from table1
   where order_base_number = 8004  /* same filter column in both subqueries */
   group by project_code)
and rownum = 1;
Source: Table1 contain fields:
order_base_number
order_mod_number
line_item
quantity
unit_price
project_code
Target: Aggregate_Table_1 contains
order_base_number
order_mod_number
aggregate_project_code = the project_code with the highest total cost (sum of unit_price*quantity), grouped by order_base_number and order_mod_number; if two project codes tie, return the first occurrence (rn = 1)
for example:

order  mod_number  line_item  quantity  unit_price  project_code
1001   200         1          100       100         55001
1001   200         2          100       10          55002
1001   200         3          100       5           55001

Results:
order_base_number - 1001
order_mod_number - 200
aggregate_project_code - 55001
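For readers following along outside ODI, the aggregation rule above (per order_base_number/order_mod_number, take the project_code with the highest sum of unit_price*quantity, first occurrence on ties) can be sketched in plain Python; the rows are the example data from this post.

```python
# Example rows from the post: (order_base_number, order_mod_number,
# line_item, quantity, unit_price, project_code)
rows = [
    (1001, 200, 1, 100, 100, 55001),
    (1001, 200, 2, 100, 10, 55002),
    (1001, 200, 3, 100, 5, 55001),
]

def aggregate_project_code(rows):
    """Per (order, mod), return the project_code with the highest
    total unit_price*quantity; ties keep the first code encountered."""
    totals = {}       # (order, mod, project) -> total cost
    first_seen = {}   # (order, mod, project) -> first row index
    for i, (order, mod, _line, qty, price, proj) in enumerate(rows):
        key = (order, mod, proj)
        totals[key] = totals.get(key, 0) + qty * price
        first_seen.setdefault(key, i)
    best = {}  # (order, mod) -> (-total, first_seen, project)
    for (order, mod, proj), total in totals.items():
        cand = (-total, first_seen[(order, mod, proj)], proj)
        grp = (order, mod)
        if grp not in best or cand < best[grp]:
            best[grp] = cand
    return {grp: cand[2] for grp, cand in best.items()}

print(aggregate_project_code(rows))  # {(1001, 200): 55001}
```

Project 55001 totals 10000 + 500 = 10500, beating 55002's 1000, so it wins the group.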

Sorry for the late reply.
Here are the two solutions I can think of.
SOLUTION 1
Replace the earlier CASE statement with the modified query below. Provide the table name in schema.table_name format and change the column names accordingly.
SELECT DISTINCT
  ORDERS,
  MOD_NUMBER,
  CASE WHEN RANK () OVER (PARTITION BY ORDERS, MOD_NUMBER ORDER BY (UNIT_PRICE*QUANTITY) DESC) = 1
       THEN PROJECT_CODE
       ELSE
         ( SELECT * FROM
             ( SELECT DISTINCT CASE WHEN MAX(UNIT_PRICE*QUANTITY) OVER (PARTITION BY ORDERS, MOD_NUMBER) = UNIT_PRICE*QUANTITY
                                    THEN PROJECT_CODE END "PROJ_CODE"
               FROM ORD )
           WHERE PROJ_CODE IS NOT NULL )
  END
FROM ORD;
What this does is bring back the same max project code in the ELSE branch as well, so instead of a project code plus NULLs there will be only a single distinct project_code, which should give you the results you are looking for. I have tried it with the data you provided.
SOLUTION 2 - Using a Temporary Table
CASE WHEN RANK () OVER (PARTITION BY ORDERS, MOD_NUMBER ORDER BY (UNIT_PRICE*QUANTITY) DESC) = 1
     THEN NVL(PROJECT_CODE, 'NA')
END
Modify the CASE statement this way so that your intermediate results look something like this:
6007 6002 111110
6007 6005 NA
6007 6006 111130
6007 6006 null
Now, before loading into the target table, add a filter to drop the null rows (these are the non-rank-1 duplicates), and for the records with 'NA' use REPLACE to convert 'NA' back to null:
REPLACE( <COLUMN_NAME>, 'NA', NULL )
Hope this solves your requirement.
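Both solutions lean on RANK() OVER (PARTITION BY ... ORDER BY cost DESC). For readers without a database handy, here is a small Python sketch of those RANK semantics (ties share a rank, and ranks skip afterward); the rows mirror the example data from the question, and note this ranks individual line items by cost, as the posted queries do:

```python
from collections import defaultdict

# (orders, mod_number, unit_price, quantity, project_code)
rows = [
    (1001, 200, 100, 100, 55001),
    (1001, 200, 10, 100, 55002),
    (1001, 200, 5, 100, 55001),
]

def rank_within_partition(rows):
    """Mimic RANK() OVER (PARTITION BY orders, mod_number
    ORDER BY unit_price*quantity DESC): rows that tie on the ordering
    expression share a rank, and ranks skip after ties."""
    parts = defaultdict(list)
    for r in rows:
        parts[(r[0], r[1])].append(r)
    ranked = []
    for part in parts.values():
        ordered = sorted(part, key=lambda r: -(r[2] * r[3]))
        prev_cost, rank = None, 0
        for i, r in enumerate(ordered, start=1):
            cost = r[2] * r[3]
            if cost != prev_cost:
                rank = i  # RANK() semantics: gaps after ties
                prev_cost = cost
            ranked.append((r, rank))
    return ranked

for row, rank in rank_within_partition(rows):
    print(row, "-> rank", rank)
```

The rank-1 rows are the ones whose project_code the CASE expressions above keep.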

Similar Messages

  • Options Complex Aggregate in ODI interface

    I have one source and I need one target aggregate table.
    I have implemented this sample query:
    SELECT
    ATRIBUTE1,
    ATRIBUTE2,
    ATRIBUTE3,
    SUM(ATRIBUTE4),
    SUM(ATRIBUTE5),
    AVG(SUM(ATRIBUTE4+ATRIBUTE5)/ATRIBUTE6)
    FROM TABLE_AGG
    GROUP BY SUBSTR(ATRIBUTE1,0,6),ATRIBUTE2,ATRIBUTE3
    Are there options to implement this query, or something similar?
    Thanks

    Have you looked at this thread?
    how to use GROUP BY  in ODI
    Just add the SUM operator to the required target fields in your ODI interface and the GROUP BY clause will be added automatically by ODI. I note that you have a group by field that doesn't exist in your select list - was this deliberate?

  • Best way to aggregate large data

    Hi,
    We load actual numbers and run aggregation monthly.
    The data file grew from 400k lines to 1.4 million lines. The aggregation time grew proportionately and it takes now 9 hours. It will continue growing.
    We are looking for a better way to aggregate data.
    Can you please help in improving performance significantly?
    Any possible solution will help: ASO cube and partitions, different script of aggregation, be creative.
    Thank you and best regards,
    Some information on our environment and process:
    We aggregate using CALC DIM(dim1,dim2,...,dimN).
    Windows Server, 64-bit
    We are moving from 11.1.2.1 to 11.1.2.2
    Block size: 70,000 B
    Dimensions (Type, Members, Sparse Members); the aggregated dimensions were bold and underlined in the original post:

    Dimension      Type    Members  Sparse Members
    Account        Dense   2523     676
    Period         Dense   19       13
    View           Dense   3        1
    PnL view       Sparse  79       10
    Currency       Sparse  16       14
    Site           Sparse  31       31
    Company        Sparse  271      78
    ICP            Sparse  167      118
    Cost center    Sparse  161      161
    Product line   Sparse  250      250
    Sale channels  Sparse  284      259
    Scenario       Sparse  10       10
    Version        Sparse  32       30
    Year           Sparse  6        6

    Yes I have implemented ASO. Not in relation to Planning data though. It has always been in relation to larger actual reporting requirements. In the new releases of Planning they are moving towards having integrated ASO reporting cubes so that where the planning application has large volumes of data you can push data to an ASO cube to save on aggregation times. For me the problem with this is that in all my historical Planning applications there has always been a need to aggregate data as part of the calculation process, so the aggregations were always required within Planning so having an ASO cube would not have really taken any time away.
    So really the answer is yes, you can go down the ASO route. But having data aggregate in an ASO application would need to fit your functional requirements, the biggest question being: can you do without aggregated data within your Planning application? Also, it's worth pointing out that even though you don't have to aggregate in an ASO application, it is still recommended to run aggregations on the base-level data; otherwise your users will start complaining about poorly performing reports. They can be quite slow, and if you have many users this will only be worse. Aggregations in ASO are different, though: you can run them in a number of different ways, but the end goal is to have aggregations that cover the most commonly run reporting combinations. So you are not aggregating everything, and the aggregation is therefore quicker to run. But more data will still mean more time to run an aggregation.
    In your post you mentioned that your actuals have grown and the aggregations have grown with it, and will continue to grow. I don't know anything about your application, but is there a need to keep all of your actuals loading and aggregating each month? Why don't you just load the current years actuals (Or the periods of actuals that are changing) each month and only aggregate those? Are all of your actuals really changing all the time and therefore requiring you to aggregate all of the data each time? Normally I would only load the required actuals to support the planning and forecasting exercise. Any previous years data (Actuals, old fcsts, budgets etc) I would archive and keep an aggregated static copy of the application.
    Also, you mentioned that you had CALC PARALLEL set to 3 and then moved to 7. But did you set CALCTASKDIMS at all? I ask because if you didn't, your CALC PARALLEL would likely give you no improvement at all. If you don't set it to the optimal value, then by default Essbase will try to parallelize using only the last dimension (in your case Year), so the calc is not really broken up (this is a very common mistake made when CALC PARALLEL is used). Setting this value in older versions of Essbase is a bit of trial and error, but the rule of thumb is that it should cover at least the last sparse aggregating dimension to get any value. So in your case the minimum should be CALCTASKDIMS 4, but it's worth trying higher, up to 6: try 4, then 5, then 6. As I say, trial and error. But I will say one thing: by getting your CALC PARALLEL setup right you will save much more than 10% on aggregations. You say you are moving to 11.1.2.2, so I assume you haven't run this aggregation in that environment yet? If so, the CALCTASKDIMS setting is not required there; Essbase will calculate the best value for you, so you only need to set CALC PARALLEL.
    Is it possible for you to post your script? Also I noticed in your original email that for Company and ICP your member numbers on the right are significantly smaller than the left numbers, why is this? Do you have dynamic members in those dimensions?
    I will say 6 aggregating dimensions is always challenging, but 9 hours does sound a little long to simply aggregate actuals, even for the 1.4 million records.
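    As a sketch of the advice above, a calc-script fragment using these settings might look like the following. This is an illustration under assumptions, not the poster's actual script: the dimension names are placeholders, and the CALCTASKDIMS value needs the trial-and-error tuning described.

```
SET CALCPARALLEL 7;
SET CALCTASKDIMS 4;  /* try 4, then 5, then 6 as suggested above */
CALC DIM ("Dim1", "Dim2", "Dim3", "Dim4", "Dim5", "Dim6");
```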

  • How to create data stores in ODI ?

    Hi all,
    I am new to this ODI part. Can anyone please help me with how to create data stores in ODI?
    A prompt reply will be highly appreciated.
    Thanks
    Saurabh.

    What do you mean by "create datastores"?
    If you mean you want to reverse-engineer existing tables from a database, then the phrase used in the ODI docs is "reverse engineering". If you mean to create new tables in a database, then:
    1) ODI is not meant to be a database design tool.
    2) Using the "Diagrams" node under a data model, you can use the "Common Format Designer" (CFD) tool to design and create the structure. The CFD tool is a simple ER-diagram tool, but importantly, if you drag structures from one model into another, it remembers where they came from, allowing automatic generation of interfaces, and it automatically translates the data types.

  • Drill Down Report Performance issue?

    Hi,
    Why is a drill-down report slower than the Action Link/Navigation method? What is the back-end processing?
    Thanks
    MRD

    We would need to know/see your configuration to tell why it is slow; I would suggest following best practices.
    The drill-down back-end process is roughly this: the report fetches the next-level columns, and all the aggregated measures are grouped by the next level. This can take some time unless you use an aggregate table (which is part of best practices).
    Appreciate if you mark as correct/helpful

  • Inconsistent datatypes: expected NUMBER got CHAR error

    Hi,
    I have the following table
    create GLOBAL TEMPORARY TABLE br_total_rtn_data_tmp (
      code                    varchar(50) NOT NULL,
      name                    varchar(255),
      cum_ytd_rtn_amt         varchar(255),
      cum_one_mon_rtn_amt     varchar(255),
      cum_thr_mon_rtn_amt     varchar(255),
      cum_six_mon_rtn_amt     varchar(255),
      cum_nine_mon_rtn_amt    varchar(255),
      cum_one_yr_rtn_amt      varchar(255),
      cum_thr_yr_rtn_amt      varchar(255),
      cum_five_yr_rtn_amt     varchar(255),
      cum_ten_yr_rtn_amt      varchar(255),
      cum_lof_rtn_amt         varchar(255),
      avg_anl_one_yr_rtn_amt  varchar(255),
      avg_anl_thr_yr_rtn_amt  varchar(255),
      avg_anl_five_yr_rtn_amt varchar(255),
      avg_anl_ten_yr_rtn_amt  varchar(255),
      avg_anl_lof_rtn_amt     varchar(255),
      cum_prev_1m_month_end   varchar(255),
      cum_prev_2m_month_end   varchar(255)
    ) ON COMMIT PRESERVE ROWS;
    I have a case statement
    CASE
                 WHEN code = 'MDN' THEN
                           max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.mdn /100  else null end)
                 WHEN code = 'QRT' THEN
                      max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.quartile  else null end)
                 WHEN code = 'PCT' THEN
                      max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.pct_beaten / 100 else null end)
                 WHEN code = 'RNK' THEN
                           case when (p.m_date = v_prev2_yr_mon and p.period_type = '1M'  and p.rank is  null and p.cnt is null)
                        THEN
                                       P.RANK
                        else
                                        p.rank||'/'||p.cnt
                        end           
                 ELSE NULL
                 END CASE
    The output for code = 'RNK' should be something like 3/5, which is rank/count,
    but I get the error "Inconsistent datatypes: expected NUMBER got CHAR" when I use p.rank||'/'||p.cnt.
    How can that be solved?
    Oracle version is 10g.

    Taken from the documentation of the CASE expression:
    "For a simple CASE expression, the expr and all comparison_expr values must either have the same datatype (CHAR, VARCHAR2, NCHAR, or NVARCHAR2, NUMBER, BINARY_FLOAT, or BINARY_DOUBLE) or must all have a numeric datatype. If all expressions have a numeric datatype, then Oracle determines the argument with the highest numeric precedence, implicitly converts the remaining arguments to that datatype, and returns that datatype.
    For both simple and searched CASE expressions, all of the return_exprs must either have the same datatype (CHAR, VARCHAR2, NCHAR, or NVARCHAR2, NUMBER, BINARY_FLOAT, or BINARY_DOUBLE) or must all have a numeric datatype. If all return expressions have a numeric datatype, then Oracle determines the argument with the highest numeric precedence, implicitly converts the remaining arguments to that datatype, and returns that datatype."
    You need to use the same data type for all your expressions. If you want to return a string, then you need to convert the remaining numbers explicitly to strings. E.g. you could try something like this:
    CASE
                 WHEN code = 'MDN' THEN
                           to_char(max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.mdn /100  else null end), 'TM')
                 WHEN code = 'QRT' THEN
                      to_char(max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.quartile  else null end), 'TM')
                 WHEN code = 'PCT' THEN
                      to_char(max(case when p.m_date = v_prev2_yr_mon and p.period_type = '1M' then p.pct_beaten / 100 else null end), 'TM')
                 WHEN code = 'RNK' THEN
                           case when (p.m_date = v_prev2_yr_mon and p.period_type = '1M'  and p.rank is  null and p.cnt is null)
                        THEN
                                       to_char(P.RANK, 'TM')
                        else
                                        p.rank||'/'||p.cnt
                        end           
                 ELSE NULL
                 END CASE
    I see another potential issue: you're mixing aggregate functions with non-aggregate expressions. This can only work if the non-aggregate expressions are part of the GROUP BY clause, but you haven't posted the complete statement, so I can only guess.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Use Group By on oracle

    We have to use a GROUP BY in a query against an Oracle database:
    SELECT SUBSTR(ETETAFI,0,6)
    ||'0'
    ||' '
    ||' '
    ||MAX(SUBSTR(ETETAFI,13,35))
    ||RPAD(TO_CHAR(SUM(TO_NUMBER(SUBSTR(ETETAFI,48,16)))),16,' ')
    ||RPAD(TO_CHAR(SUM(TO_NUMBER(SUBSTR(ETETAFI,64,16)))),16,' ')
    ||RPAD(TO_CHAR(SUM(TO_NUMBER(SUBSTR(ETETAFI,80,16)))),16,' ')
    ||RPAD(TO_CHAR(SUM(TO_NUMBER(SUBSTR(ETETAFI,96,16)))),16,' ')
    FROM JDEDATA900.F7409FOW GROUP BY SUBSTR(ETETAFI,0,6)
    Do you know how to do that in ODI? The table JDEDATA900.F7409FOW is our source.
    Thanks.

    To use GROUP BY in ODI: when you map any column with an aggregate function, ODI will automatically generate a GROUP BY on all the other mapped columns that do not use an aggregate function.
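    As an illustration of that rule, this is roughly the shape of statement ODI generates when one mapping carries an aggregate. The SQLite stand-in below is runnable end to end; the demo table, its columns, and the data are invented for illustration (note SQLite's SUBSTR is 1-based, whereas Oracle treats position 0 as 1):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f7409fow_demo (grp TEXT, label TEXT, amt INTEGER)")
conn.executemany(
    "INSERT INTO f7409fow_demo VALUES (?, ?, ?)",
    [("ABC000", "x", 10), ("ABC000", "y", 20), ("DEF000", "z", 5)],
)
# Two mappings use aggregates (MAX, SUM); the non-aggregated expression
# SUBSTR(grp, 1, 6) therefore becomes the generated GROUP BY list.
sql = """
SELECT SUBSTR(grp, 1, 6) AS grp6, MAX(label) AS label, SUM(amt) AS total
FROM f7409fow_demo
GROUP BY SUBSTR(grp, 1, 6)
ORDER BY grp6
"""
print(conn.execute(sql).fetchall())  # [('ABC000', 'y', 30), ('DEF000', 'z', 5)]
```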

  • Calculating 30,60,90 Days Totals in sql

    Hi All,
    Here I have written a query to find the total for 30 days for a particular member after the discharge date.
    These 30-day amounts will be populated into another table based on joins.
    Now my question is: I am trying to get the totals for 60 days, 90 days and 180 days as well.
    I could write separate queries changing the datediff bound to 60, 90 and 180, but I want all those fields in a single query.
    Please suggest how to write this as a single query.
    Select ip.member,
    sum(isnull(NetAmt,0.00)) as [30DaysAmount]
    from claims c
    join claimsDetail cd on c.claimsnbr=cd.claimsnbr
    join inpatient ip on cd.membernbr=ip.membernbr
    where c.formnbr='SNF'
    and datediff(day,cd.specificdateofservice,dischargedate) between 0 and 30
    group by ip.member
    I really appreciate your help.
    Thanks,
    Kalyan.

    Hi Visakh16/Dan,
    I tried using a CASE statement in the SELECT itself:
    case when datediff(d,cd.specificdateofservice,dischargedate) between 0 and 30 then sum(isnull(NETAMT,0.00))
    It throws an error that I cannot use BETWEEN in the SELECT; when I removed that, it said all other columns not part of an aggregate must be part of the GROUP BY (something similar to that).
    Then I included all the other fields in the GROUP BY, but I am worried: will the value be correct?
    Thanks for your reply; I will try your query above using SUM(CASE ...).
    I don't have access to the work network, so I am writing the general wording of the error and query.
    Thanks,
    Kalyan.
    The CASE..WHEN should be inside the SUM
    you can also use this
    Select ip.member,
    sum(case when datediff(day,cd.specificdateofservice,dischargedate) >= 0 and datediff(day,cd.specificdateofservice,dischargedate) <= 30 then isnull(NetAmt,0.00) else 0.00 end) as [30DaysAmount],
    sum(case when datediff(day,cd.specificdateofservice,dischargedate) >= 0 and datediff(day,cd.specificdateofservice,dischargedate) <= 60 then isnull(NetAmt,0.00) else 0.00 end) as [60DaysAmount],
    sum(case when datediff(day,cd.specificdateofservice,dischargedate) >= 0 and datediff(day,cd.specificdateofservice,dischargedate) <= 90 then isnull(NetAmt,0.00) else 0.00 end) as [90DaysAmount],
    sum(isnull(NetAmt,0.00)) as [180DaysAmount]
    from claims c
    join claimsDetail cd on c.claimsnbr=cd.claimsnbr
    join inpatient ip on cd.membernbr=ip.membernbr
    where c.formnbr='SNF'
    and datediff(day,cd.specificdateofservice,dischargedate) between 0 and 180
    group by ip.member
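    If you want to sanity-check the SUM(CASE ...) bucketing pattern without access to the work network, here is a self-contained SQLite stand-in; the table name `svc`, its data, and the precomputed `days_after` offset are all invented for illustration (it replaces SQL Server's datediff):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE svc (member TEXT, days_after INTEGER, net REAL)")
conn.executemany(
    "INSERT INTO svc VALUES (?, ?, ?)",
    [("A", 10, 100.0), ("A", 45, 50.0), ("A", 120, 25.0), ("B", 5, 10.0)],
)
# One pass over the rows: each SUM(CASE ...) counts a row only if it
# falls inside that bucket's day window, so the buckets are cumulative.
sql = """
SELECT member,
       SUM(CASE WHEN days_after BETWEEN 0 AND 30 THEN net ELSE 0 END) AS d30,
       SUM(CASE WHEN days_after BETWEEN 0 AND 60 THEN net ELSE 0 END) AS d60,
       SUM(CASE WHEN days_after BETWEEN 0 AND 90 THEN net ELSE 0 END) AS d90,
       SUM(net) AS d180
FROM svc
WHERE days_after BETWEEN 0 AND 180
GROUP BY member
ORDER BY member
"""
print(conn.execute(sql).fetchall())
```

Member A's row at day 45 is counted in the 60/90/180 buckets but not the 30-day one, which is exactly the behavior the answer above describes.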

  • Anywhere i can get a download copy of VirtualPC?

    I've found a few online stores that still have copies, but it would be nicer to just download it rather than waiting or paying for shipping.
    I found this place but don't know if it's a legit site or not:
    http://alll-soft.com/download-virtual-pc-70-for-mac-now.php

    That's definitely not a legit site - I would not buy from there. My favorite part of the site, in the FAQ section, is this: "+We offer the software for downloading only, it means that you do not receive a fancy package, a printed manual and license that actually aggregate the largest part of the retail price.+" Just a hunch, but software sold without a valid license is probably pirated.
    I don't believe Virtual PC was ever offered for sale (legitimately) as a download. It was only available as a 3 CD package.

  • Attribute change run running for a long time after master data changes: can I kill the job and repeat it?

    Hi, I have 50 InfoObjects as part of my aggregates, and 10 of them have received changes in master data. In my process chain the attribute change run has been running for a long time. Can I kill the job and repeat it?

    Hi,
    I believe this is your production system, so don't just cancel it; look at the job log first. If the job is still processing, don't kill it; wait for the change run to complete. But if you can see that nothing is happening and it has been stuck for a long time, then you can go ahead and cancel it.
    But please be sure before doing so, as these kinds of jobs can create problems if you cancel them in the middle of a run.
    Regards,
    Arminder Singh

  • How to implement this aggregate logic on a target column in ODI interface mappings

    sum(NOTICES) over ( partition by property order by RELAVANTDATE range between interval '30' day preceding and current row)
    How can I implement this aggregate logic on a target column in ODI interface mappings?

    Hi
    if you don't want ODI to treat this as an aggregate (and generate a GROUP BY), try defining a user function
    analytic_sum($(value))
    implemented by
    sum($(value))
    After that, replace your
    sum(NOTICES) over ( partition by property order by RELAVANTDATE range between interval '30' day preceding and current row)
    with
    analytic_sum(NOTICES) over ( partition by property order by RELAVANTDATE range between interval '30' day preceding and current row)
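    For readers unsure what that windowed sum computes: per property, it sums NOTICES over the 30 days up to and including each row's RELAVANTDATE. A plain-Python sketch of that frame (the sample data is invented):

```python
from datetime import date

# (property, relevant_date, notices) -- invented sample rows
rows = [
    ("P1", date(2024, 1, 1), 2),
    ("P1", date(2024, 1, 20), 3),
    ("P1", date(2024, 2, 15), 1),
]

def rolling_30_day_sum(rows):
    """For each row: sum of notices for the same property whose date falls
    within the 30 days preceding (and including) the row's date, i.e. the
    RANGE BETWEEN INTERVAL '30' DAY PRECEDING AND CURRENT ROW frame.
    Like RANGE, rows sharing the current row's date are always included."""
    out = []
    for prop, d, _n in rows:
        total = sum(
            n for p, d2, n in rows
            if p == prop and 0 <= (d - d2).days <= 30
        )
        out.append((prop, d, total))
    return out

print(rolling_30_day_sum(rows))
```

The Jan 20 row picks up the Jan 1 row (19 days back), while the Feb 15 row sees only Jan 20 (26 days back), since Jan 1 is 45 days out of the window.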

  • Optimistic Locking fails when version field is part of a Aggregate

    I'm trying to persist a mapped object using TopLink 9.0.3.
    The object uses optimistic locking, and the timestamp versioning field is part of an Aggregate descriptor. This works well in the Workbench (it does not complain).
    Unfortunately, it does not work whenever I use the UnitOfWork to register and commit the changes.
    Sample code:
    Object original;
    UnitOfWork unitOfWork = ...
    Object clone = unitOfWork.registerExistingObject(original);
    clone.setBarcode("bliblalbu");
    unitOfWork.commit();
    This throws a nasty OptimisticLockException, complaining about a missing versioning field:
    LOCAL EXCEPTION STACK:
    EXCEPTION [TOPLINK-5004] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.OptimisticLockException
    EXCEPTION DESCRIPTION: An attempt was made to update the object [BusinessObject:{id:12382902,shorttext:null,barcode:bliblablu,ownerLocation:null,IdEntryName:0,idCs:20579121}], but it has no version number in the identity map.
    It may not have been read before the update was attempted.
    CLASS> de.grob.wps.domain.model.BusinessObjectBO PK> [12382902]
         at oracle.toplink.exceptions.OptimisticLockException.noVersionNumberWhenUpdating(Unknown Source)
         at oracle.toplink.descriptors.VersionLockingPolicy.addLockValuesToTranslationRow(Unknown Source)
         at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.updateObjectForWrite(Unknown Source)
         at oracle.toplink.queryframework.WriteObjectQuery.executeCommit(Unknown Source)
         at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWrite(Unknown Source)
         at oracle.toplink.queryframework.WriteObjectQuery.execute(Unknown Source)
         at oracle.toplink.queryframework.DatabaseQuery.execute(Unknown Source)
         at oracle.toplink.publicinterface.Session.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.internalExecuteQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.publicinterface.Session.executeQuery(Unknown Source)
         at oracle.toplink.internal.sessions.CommitManager.commitAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.Session.writeAllObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitAndResume(Unknown Source)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkTransaction.commit(ToplinkTransaction.java:60)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkPersistenceManager.commit(ToplinkPersistenceManager.java:396)
         at de.grob.wps.dwarf.domainstore.toplink.ToplinkPersistenceManagerTest.testPersistSerializableWithBusinessObjects(ToplinkPersistenceManagerTest.java:87)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at junit.framework.TestCase.runTest(TestCase.java:154)
         at junit.framework.TestCase.runBare(TestCase.java:127)
         at junit.framework.TestResult$1.protect(TestResult.java:106)
         at junit.framework.TestResult.runProtected(TestResult.java:124)
         at junit.framework.TestResult.run(TestResult.java:109)
         at junit.framework.TestCase.run(TestCase.java:118)
         at junit.framework.TestSuite.runTest(TestSuite.java:208)
         at junit.framework.TestSuite.run(TestSuite.java:203)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:392)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:276)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:167)
    So what can I do to fix this problem? BTW, the object I try to persist has been read from the database, and the IDE debugger shows that the aggregate object contains java.sql.Timestamp instances.

    Sorry guys, my debugger fooled me. The locking field wasn't initialized in the database; that caused the problem, which is fixed now.
    Thx anyway.
    Bye
    Toby

  • Use ODI + ESSBASE, which part of Essbase to install

    hi, I'm a beginner.
    Right now I use ODI to extract data from one Oracle DB to another Oracle DB, and next I want to use ODI with Essbase.
    I saw there are many parts of Essbase:
    OTN Download: http://www.oracle.com/technology/software/products/bi/performance-management/111120/hyperion_essbase_111120.html
    Oracle Essbase Client Release 11.1.1.2.0
    Oracle Essbase Server Release 11.1.1.2.0
    Oracle Hyperion Provider Services Release 11.1.1.2.0
    http://www.oracle.com/technology/products/bi/essbase/index.html
    Oracle Essbase Studio
    Oracle Essbase Administration Services
    Oracle Essbase Integration Services
    Oracle Essbase Provider Services
    Oracle Essbase Visual Explorer
    Hyperion Application Builder.NET
    I don't know which parts to download yet. Thanks.

    Hi,
    If you are only intending to use Essbase v11, then you will need to download:
    Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition Release 11.1.1.2.0
    Oracle Hyperion Enterprise Performance Management System Foundation Services Release 11.1.1.2.0
    Oracle Essbase Client Release 11.1.1.2.0
    Oracle Essbase Server Release 11.1.1.2.0
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Aggregates: what is split part?

    Hi, all!
    When I run RSRT, I see different aggregates being selected for different split parts. As I understand it, the number of split parts depends on the number of RKFs in the query. But I have 3 RKFs and 2 CKFs in the query, and only 2 split parts.
    Can you explain when, and for what, split parts are used?

    Generally a split part corresponds to an RKF, but it really depends on the nature of the restrictions. If BW can generate SQL such that two of the RKFs could be populated from the same set of restrictions, then you would have only one split part.
    Perhaps the easiest way to understand this is to examine the actual SQL that has been generated.
    In my experience, it can become a real challenge trying to figure out whether a BW query benefits from this capability of having different portions of the query generated against multiple aggregates (base cube and aggregates).
    Sometimes having too many aggregates can hurt performance: when split parts result in 4 different aggregates being used for the BW query, 4 SQL queries must run (possibly 8 if the aggregates are not fully compressed) and the results of the 4 (or 8) queries must be merged. Each of the 4 queries must also perform the required reads of dimension tables and master data tables. If some of these dimension or master data tables are large and require a full table scan, you are now doing that 4 times instead of just once, as you would if you only had the base cube or a single aggregate that could satisfy all the split parts.
    You can test different scenarios by turning off different aggregates to see what the total impact is.
    e.g. (won't worry about compressed cube in this example)
    Base InfoCube - 10 million rows
    Now you have 4 aggregates (we'll assume no parent/child aggregates)
    Aggr 100001 - 1.2 million rows
    Aggr 100002 - 800,000 rows 
    Aggr 100003 - 1 million rows
    Aggr 100004 - 500,000 rows
    A user query is split and 4 SQL queries are generated, one against each aggregate.  If all the split parts could be satisfied from Aggr 100001, you could be better off getting rid of the other aggregates completely.

  • How to refresh ODI variable as part of a Knowledge Module?

    Hi,
    I want to know how I can refresh an ODI variable in a knowledge module using the Oracle technology. For example, I have created a variable called VAR_TEST in ODI.
    In one of the knowledge module steps, I want to refresh it as shown below:
    begin
    select colname into #VAR_TEST from tablename;
    end;
    Many thanks ....

    Hi Martin,
    Put the query into the variable's refresh tab, generate a scenario from the variable, and call that scenario through "OdiStartScen" (Sunopsis API technology) from the KM.
    In this case the variable needs to be "Last Value".
    An alternative is to use a Java variable instead; that works better in KMs.
    Does it help you?
    Cezar Santos
    http://odiexperts.com
