Patch 2529822, bp 4 Tabular Cube Aggregation
Hi,
According to the bibeans installation doc,
I need these patches from metalink:
2632931 -- The Oracle9i 9.2.0.2 patch set. Choose the appropriate
platform, then download and install the patch set.
2529822 -- Best Practices for Tabular Cube Aggregation and Query
Operations. When you configure the database, you must follow these
configuration settings to ensure good performance of BI Beans.
But...
metalink tells me that 2529822 is not a valid patch.
And...
A general metalink search on 2529822 reveals this:
BI Beans Connection Problem:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=FOR&p_id=483349.995
BI Beans Connection Problem:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=FOR&p_id=483350.995
But...
both of the above links are dead ends.
They say...
This is a forum thread. The owner of this thread has chosen not to publish the contents.
Should I try to track down: "Best Practices for Tabular Cube Aggregation and Query Operations" ?
Or is it a waste of time?
-atm
First I would recommend upgrading to database version 9.2.0.4.1 and upgrading to the latest version of BI Beans. The database patch is a two-stage patch:
Patch Description
- 3095277 : 9.2.0.4 PATCH SET FOR ORACLE DATABASE SERVER
- 3084634 : 9.2.0.4 ONE-OFF PATCH AND 9.2.0.4.1 SERVER PATCH
If you go to Metalink.oracle.com and search for 'Best Practices for Tabular Cube Aggregation and Query Operations', you will see the document listed in the returned results. I would recommend right-clicking and choosing 'Save Target As' to download the PDF file, as clicking the link directly does not seem to work very well with my browser.
Alternatively, try following this link:
http://metalink.oracle.com/cgi-bin/cr/getfile.cgi?p_attid=155514.1:284497
Hope this helps
BI Beans Product Management Team
Oracle Corporation
Similar Messages
-
About patch 2529822: is it necessary for the BI sample?
Hi folks!
I want to use the OLAP sample, and I must install BI Beans from OTN. My database is Oracle 9.2.0.2, and I know I need the patch sets 2632931 and 2529822.
But I am a student and have no Support Identifier. How can I get the patch sets? What is patch 2529822? Is it necessary?
Best Regards.
Thanks
Jiaowei Zhao
Oracle9i OLAP
Best Practices for Tabular Cube Aggregation and Query Operations
Release 2 for Windows and UNIX
March 2003
Part No. A92122-04
This document provides techniques and practices designed to optimize the performance of typical OLAP operations and manageability of the environment. This information is divided into four distinct categories:
Schema Design Considerations
Oracle Configuration Parameters and Session Settings for Access Structure Builds
Access Structure Scripting Package
Query Operation Optimizations
Schema Design Considerations
Assign surrogate keys, which are unique across all hierarchical levels, to dimensional data. The surrogate keys are defined as type NUMBER with length defined by the magnitude of distinct dimension members. (If there is expected growth in this list, factor that in accordingly.) These surrogate keys are to be used as the primary keys of the dimension tables. A routine to supplement the character keys in the fact tables with the newly defined surrogates must be run prior to defining OLAP objects. An example script for this type of routine is provided below.
Create separate tablespaces for planned Materialized Views and View Indices. This will not significantly affect any aspect of performance, but it is good practice for file management.
Compute statistics on base tables prior to executing build scripts for access structures (materialized views and indices).
Surrogate Key Assignment (Example)
This example generates surrogate keys for the channel members and recreates the channel table with the surrogate keys. It then recreates the sales table with the surrogate keys included.
PROMPT Generate surrogate keys for CHANNELS
DROP TABLE chan_skey;
CREATE TABLE chan_skey AS
SELECT
lvlid AS lvlid,
dimval AS dimval,
dense_rank () OVER
(ORDER BY lvlid, dimval) AS dimkey
FROM
(SELECT DISTINCT TO_CHAR(channel_total) AS dimval, 0 lvlid FROM
channels UNION ALL
SELECT DISTINCT TO_CHAR(channel_class) AS dimval, 1 lvlid FROM
channels UNION ALL
SELECT DISTINCT TO_CHAR(channel_id) AS dimval, 2 lvlid FROM
channels);
COMMIT;
DROP INDEX chan_skey_index;
CREATE INDEX chan_skey_index ON chan_skey(lvlid);
COMMIT;
PROMPT Create CHANNELS_NEW table with surrogate keys
DROP TABLE channels_new;
CREATE TABLE channels_new
( channel_id CHAR(1),
channel_id_key NUMBER,
channel_desc VARCHAR2(20),
channel_class VARCHAR2(20),
channel_class_key NUMBER,
channel_total VARCHAR2(13),
channel_total_key NUMBER) ;
INSERT /*+ APPEND*/ INTO channels_new
SELECT c.channel_id,
(SELECT DISTINCT sk2.dimkey
FROM chan_skey sk2
WHERE sk2.dimval = TO_CHAR(c.channel_id)
AND sk2.lvlid = 2),
c.channel_desc,
c.channel_class,
(SELECT DISTINCT sk1.dimkey
FROM chan_skey sk1
WHERE sk1.dimval = TO_CHAR(c.channel_class)
AND sk1.lvlid = 1),
c.channel_total,
(SELECT DISTINCT sk0.dimkey
FROM chan_skey sk0
WHERE sk0.dimval = TO_CHAR(c.channel_total)
AND sk0.lvlid = 0)
FROM channels c;
COMMIT;
DROP table chan_skey;
PROMPT Create SALES with references to the surrogate keys
DROP TABLE sales_new;
CREATE TABLE sales_new
PARTITION BY RANGE (time_id_key)
(PARTITION sales_q1_1998 VALUES LESS THAN (91),
PARTITION sales_q2_1998 VALUES LESS THAN (182),
PARTITION sales_q3_1998 VALUES LESS THAN (274),
PARTITION sales_q4_1998 VALUES LESS THAN (366),
PARTITION sales_q1_1999 VALUES LESS THAN (456),
PARTITION sales_q2_1999 VALUES LESS THAN (547),
PARTITION sales_q3_1999 VALUES LESS THAN (639),
PARTITION sales_q4_1999 VALUES LESS THAN (731),
PARTITION sales_q1_2000 VALUES LESS THAN (822),
PARTITION sales_q2_2000 VALUES LESS THAN (913),
PARTITION sales_q3_2000 VALUES LESS THAN (1005),
PARTITION sales_q4_2000 VALUES LESS THAN (MAXVALUE))
AS
SELECT products_new.prod_id_key,
customers_new.cust_id_key,
times_new.time_id_key,
channels_new.channel_id_key,
promotions_new.promo_id_key,
sales.quantity_sold,
sales.amount_sold
FROM products_new,
customers_new,
times_new,
channels_new,
promotions_new,
sales
WHERE sales.prod_id = products_new.prod_id AND
sales.cust_id = customers_new.cust_id AND
sales.time_id = times_new.time_id AND
sales.channel_id = channels_new.channel_id AND
sales.promo_id = promotions_new.promo_id;
REM DROP TABLE sales;
REM RENAME sales_new TO sales;
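As a sanity check, the DENSE_RANK surrogate-key idea in the script above can be reproduced in miniature. The sketch below uses SQLite via Python with made-up channel values (C/I/P, Direct/Indirect, Total are assumptions for illustration, not the real SH sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE channels (channel_id TEXT, channel_class TEXT, channel_total TEXT)")
conn.executemany("INSERT INTO channels VALUES (?, ?, ?)",
                 [("C", "Direct", "Total"),
                  ("I", "Direct", "Total"),
                  ("P", "Indirect", "Total")])

# Same idea as chan_skey: one surrogate key per distinct (level, value) pair,
# numbered contiguously across all hierarchical levels by DENSE_RANK
keys = conn.execute("""
    SELECT lvlid, dimval,
           DENSE_RANK() OVER (ORDER BY lvlid, dimval) AS dimkey
    FROM (SELECT DISTINCT channel_total AS dimval, 0 AS lvlid FROM channels
          UNION ALL
          SELECT DISTINCT channel_class, 1 FROM channels
          UNION ALL
          SELECT DISTINCT channel_id, 2 FROM channels)
""").fetchall()
print(sorted(keys))
# [(0, 'Total', 1), (1, 'Direct', 2), (1, 'Indirect', 3), (2, 'C', 4), (2, 'I', 5), (2, 'P', 6)]
```

Each distinct member gets a NUMBER key that is unique across all levels, which is exactly the property the schema design section asks for.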
Oracle Configuration Parameters and Session Settings for Access Structure Builds
Cube Aggregation Optimizations
init.ora Parameters
Modify the standard init.ora file (named init<sid>.ora, located in oracle_home/admin/<sid>/pfile) by setting or adding the parameters described below. When starting the database for an initial build or major data update operation, use this specifically modified parameters file with a command such as this:
startup pfile = filepath
Make the following changes to the parameters file:
db_cache_size=n
Where: n is 25% of real memory
Example: 128M for 512M of RAM
db_file_multiblock_read_count=n
Where: n is a number based on hardware. Ideally, n multiplied by the 8K block size should equal the disk stripe width.
shared_pool_size = n
Where: n is 10% or less of real memory.
Example: Approximately 10M for 512M of RAM
pga_aggregate_target=n
Where: n is 50% of real memory
Example: 256M for 512M of RAM
parallel_automatic_tuning=true
#parallel_max_servers
Comment out this setting with the # symbol.
#parallel_min_servers
Comment out this setting with the # symbol.
workarea_size_policy=AUTO
disk_asynch_io=true
Session Parameters
In the SQL*Plus® session in which the access structure build scripts will be executed, set the following parameters:
alter session enable parallel query;
alter session enable parallel dml;
alter session set "query_rewrite_enabled"=FALSE;
Access Structure Scripting Package
Refer to the Oracle9i OLAP User's Guide, Part V, for creating and managing tabular summary data structures.
Query Operation Optimizations
init.ora Parameters
For the RDBMS 9.2.0.3 patch set, make the following change to the init.ora parameter file. For additional information on changing the database initialization settings, refer to "init.ora Parameters".
_multi_join_key_table_lookup=false
ALTER_SESSION Parameters
Ensure that the SYS.OLAP$ALTER_SESSION table is populated with the following settings:
pga_aggregate_target=n
Where: n is 25% of real memory. Increase as needed for larger concurrent user loads.
Example: 128M for 512M RAM
db_file_multiblock_read_count=2
star_transformation_enabled=true
query_rewrite_enabled=true
query_rewrite_integrity=stale_tolerated
optimizer_index_cost_adj=25
#optimizer_mode=all_rows
Uncomment this setting for the RDBMS 9.2.0.2.x patch set
_subquery_pruning_enabled=true
_no_or_expansion=true
_query_cost_rewrite=false
_pre_rewrite_push_pred=true
_generalized_pruning_enabled=true
_union_rewrite_for_gs=force
The SYS.OLAP$ALTER_SESSION table contents are invoked automatically when an OLAP API client connects to the database. Other clients connecting for querying purposes should set these parameters via the ALTER SESSION command. -
Data loaded to Power Pivot via Power Query is not yet supported in SSAS Tabular Cube
Hello, I'm trying to create an SSAS Tabular cube from data loaded into Power Pivot via Power Query (SAP BOBJ connector), but it looks like this is not yet supported.
Has anyone tried this before? Any workaround that makes sense?
The final goal is to pull data from SAP BW and BO Universes (using Power Query) and be able to create an SSAS Tabular cube.
Thanks in advance
Sebastian
Sebastian,
Depending on the size of the data from Analysis Services, one workaround could be to import the data into Excel, make an Excel table, and then use the Excel table as a data source.
Reeves
Denver, CO -
ROLAP cube aggregation deployment failure
Hi All
My ROLAP cube aggregation deployment is failing with following error:
dbms_odm.createcubeleveltuple ORA-06533: Subscript beyond count
Has anyone got any idea about this failure, please?
Hi Bonniejp,
According to your description, it seems to be a permission issue when deploying the project to Analysis Services. By default, the account for the AS service has only public permission on a few databases. If you haven't changed the AS service account, or haven't granted permissions to this account, then you cannot deploy the AS project to the server.
So in your scenario, to avoid this issue, you can grant the SELECT permission on the data source DB (Adventure Works DW2012) to the default account for the AS service. Or you can change the service account to a domain user who has the corresponding permissions on the data source DB.
If this is not what you want, please post the detailed information about the error, so that we can make further analysis.
Regards,
Charlie Liao
TechNet Community Support -
Large tabular cube w/ lots of measures -- keyboard stuck
hi folks,
I was wondering if anyone has seen this weird behavior: on fairly large tabular cubes (SSAS 2012 SP2) with lots of calculations, the keyboard seems to get stuck; one cannot type or make any modifications to any measures, even after closing the project.
The only workaround is to completely restart the machine. Using Visual Studio 2010 Shell.
Are there any possible corruptions or calc limitations that one should be aware of?
Notably, this never happens with any small cubes, but only with the largest cube that we have, which has probably tens of calcs.
any thoughts are much appreciated,
Cos
Hi Cos2008,
According to your description, your keyboard gets stuck when processing a huge tabular model, but it works properly when processing a small one. Right?
In this scenario, I think the keyboard freeze you mentioned is really the entire machine freezing. Since this only happens for a big tabular model, I guess it is caused by a lack of hardware resources. Please reserve enough disk space for the .cub file, and set the memory configuration for the SSAS engine and the SQL instance:
Server Properties (Memory Page)
Memory Properties
Also please refer to a white paper below to optimize your SSAS tabular.
Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services
Best Regards,
Simon Hou
TechNet Community Support -
Hi All,
We have an ASO cube with millions of records and 10 dimensions (1 Account dimension). A few fact values change after aggregation, whereas without aggregation we get correct values.
We are loading data from the source system (Teradata), where values are rounded off to 7 decimal places. But when we view the data at report level (WA), it displays non-zero values beyond 7 decimal places for a few facts.
For us it makes a huge difference, as we calculate a var% value on the report. The denominator for this var% is 0 for certain line items, but due to aggregation it shows some value (0.0000000198547587458), which results in very large values for var%.
Is it the tool behavior or there is some problem with aggregation?
Any input will be really appreciable.
Thanks,
What doesn't make sense to me is that you don't need aggregations on level-zero members, as that is where the data is stored. I'm guessing you mean level-zero members of one dimension and higher-level members of other dimensions. Are those other dimensions dynamic or stored? Do you have a lot of calculations going on that are being retrieved? Have you materialized aggregations on the cube?
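As an aside, the tiny residue quoted in the question (0.0000000198...) is characteristic of binary floating-point aggregation in general, not only of this tool. A minimal Python illustration of how a mathematically-zero total can survive as a non-zero residue:

```python
# 0.1 has no exact binary representation, so summing ten of them and
# subtracting 1.0 leaves a tiny non-zero residue instead of exactly 0
vals = [0.1] * 10 + [-1.0]
total = sum(vals)
print(total)            # a value on the order of 1e-16, not 0.0
print(total != 0.0)     # True: the "zero" line item is not exactly zero
```

Any var% computed with such a residue in the denominator blows up, which matches the symptom described above.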
-
I have one particular cube definition where, when I go into the aggregation tab, the properties will not come up, i.e. you just see the properties for the previous tab you were in.
Has anyone else had this problem?
Hi Mohann,
That is the Aggregation Levels tab in the MultiProvider. This means that if you want to work on planning, the aggregation levels created in the Planning Modeler can be used directly to create a MultiProvider.
Aggregation levels:
Check the site below to learn more about aggregation levels.
http://help.sap.com/saphelp_nw70/helpdata/EN/43/1c3c7031b70701e10000000a422035/frameset.htm
Hope your doubt is clarified.
Rgds,
Ravi -
First time to build OLAP Cube - aggregation problem
I am new to OLAP technology and I want to build my first OLAP cube. I use the SCOTT schema, doing:
1- Build a dimension on the DEPT table with one level, DEPTNO.
2- Build a cube with one measure, SAL, on the EMP table.
Table Emp
DEPTNO SAL
20 800
30 1600
30 1250
20 2975
30 1250
30 2850
10 2450
20 3000
10 5000
30 1500
20 1100
30 950
20 3000
10 1300
table DEPt
DEPTNO DNAME
10 ACCOUNTING
20 RESEARCH
30 SALES
40 OPERATIONS
When I use the maintain wizard and then view the data, the sum of salary is not accurate. It looks like:
sum
all depts 5550
accounting 1300
researches. 3000
sales 1250
operations 0
values should be
sum
all depts 29025
accounting 8750
researches. 10875
sales 9400
operations 0
Why are the aggregation values for the departments not accurate?
The problem is visible in your table below.
Table Emp
DEPTNO SAL
20 800
30 1600
30 1250
20 2975
30 1250
30 2850
10 2450
20 3000
10 5000
30 1500
20 1100
30 950
20 3000
10 1300
There are multiple rows per DEPTNO with different SAL values. In OLAP, when you load such records, the last record wins. This is why, if you look closely, the value you see is the last value loaded.
To resolve this, you should do a GROUP BY DEPTNO with SUM(SAL) in EMP. Load those records and you will see the correct result.
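The suggested fix can be sketched as follows, using the EMP rows from the question (SQLite via Python stands in for the Oracle load here, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (deptno INTEGER, sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [(20, 800), (30, 1600), (30, 1250), (20, 2975), (30, 1250),
                  (30, 2850), (10, 2450), (20, 3000), (10, 5000), (30, 1500),
                  (20, 1100), (30, 950), (20, 3000), (10, 1300)])

# Pre-aggregate to one row per DEPTNO before loading into the cube,
# so the "last record wins" behavior has nothing left to overwrite
agg = conn.execute(
    "SELECT deptno, SUM(sal) FROM emp GROUP BY deptno ORDER BY deptno"
).fetchall()
print(agg)                        # [(10, 8750), (20, 10875), (30, 9400)]
print(sum(s for _, s in agg))     # 29025, the expected "all depts" total
```

The per-department sums match the expected values listed in the question.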
Hope this helps.
Thanks,
Brijesh -
Error in Running a dimension ( BIB-9509 Oracle OLAP did not create cursor.)
oracle.dss.dataSource.common.QueryRuntimeException: BIB-9509 Oracle OLAP did not create cursor.
oracle.express.ExpressServerExceptionError class: OLAPI
Server error descriptions:
DPR: Unable to create server cursor, Generic at TxsOqDefinitionManagerSince9202::crtCurMgrs4
OES: ORA-00938: not enough arguments for function
, Generic at TxsRdbRandomAccessQuery::TxsRdbRandomAccessQuery
help is appreciated
Thanks, Prasad
This is the patch: 2529822
"Best Practices for Tabular Cube Aggregation & Query Operations". Thanks very much for your advice.
I followed the instructions in the document, but it did not seem to make a difference when
trying to follow the tutorial instructions and using the SALES measure.
However, I selected the stock price measures and it worked ok.
Is there a size limit here? I am looking at Oracle OLAP and BIBeans to replace our Cognos installation,
which is expensive and unreliable on unix. Of course our managers and customers want every dimension
across a huge fact table to be available for end users... -
Does BI Beans903 support OLAP 9.2.0.2?
Hi,
Does BI Beans903 support OLAP9.2.0.2? Is BI Beans903 certified with OLAP9.2.0.2?
Thank you.
Seiji
The answer to your question is yes. In fact, the latest release of BI Beans (9.0.3), which has just been made available for download from OTN, will only work with a 9iR2 database if it has been patched to the 9.2.0.2 level. As part of the OTN software download there is a link to the installation documentation, which provides information on the database requirements
(http://otn.oracle.com/products/bib/htdocs/installation/installing_bibeans.html)
Preparing the Oracle9i Release 2 Database to Use with BI Beans. If you want to run against Oracle9i Release 2, complete the following tasks:
If you have not already done so, install the Enterprise Edition of the Oracle9i database, Release 2. For instructions, download the Oracle9i installation guide for the appropriate platform from Oracle Technology Network (http://otn.oracle.com).
Important: When you install the database client, do not install it into the Oracle9i Application Server home.
On the same server machine, download and install the appropriate Oracle9i patch sets from Oracle MetaLink (http://metalink.oracle.com). To locate the patch sets to download, log onto MetaLink and click Patches, then search for the following patch set numbers:
2632931 -- The Oracle9i 9.2.0.2 patch set. Choose the appropriate platform, then download and install the patch set.
2529822 -- Best Practices for Tabular Cube Aggregation and Query Operations. When you configure the database, you must follow these configuration settings to ensure good performance of BI Beans.
Hope this helps
Keith Laker
BIBeans Product Manager
Oracle Corporation -
Hi All,
I have created a cube using the OWB wizard. The aggregation tab applies to all measures of the cube, whereas I want to apply aggregation to an individual measure. Any ideas?
Jak.
Message was edited by:
jakdwh
Hi,
I have tried to assign different aggregation rules to different measures within the OWB cube editor, without success.
I created a simple cube with two measures, one requiring a SUM aggregation and one requiring an AVG aggregation. I can set them and validate OK, but when I close the editor and re-open it, they have both been set to the overall cube aggregation policy. Even if this is set to NOAGG, they all become NOAGG. It seems there must be a flaw in the implementation of aggregation within OWB.
Has anyone found a resolution or workaround? I came across this when trying to utilise degenerate dimension keys in the cube definition...
Robbie -
Is it OK to call tabular model database as tabular cube?
~DBA and BI Developer~ MCSA 2012 Please vote for this post if found useful
Hi Psingla,
Generally, we do not call a tabular model database a tabular cube. As you can see, when you connect to a tabular database in SQL Server Management Studio and expand it, there is no cube inside the database. However, when you connect to a multidimensional database in SSMS and expand it, you can see the cubes inside the database.
Reference
http://msdn.microsoft.com/en-IN/library/hh230906(v=sql.110).aspx
Regards,
Charlie Liao
TechNet Community Support -
Adding up Downloads excessively due to multiple rows in Tabular
Hi All ,
We have a fact table which has data for products that are taken down from the App Store due to different reasons, like the app has risky keywords, the app has adult content, etc. We get the data from the staging table as below.
As the "reasonForTakeDown" column holds a comma-separated string, we need to split this reason into multiple rows, taking the comma as the delimiter. So the first, second and third ProductGUIDs have to be repeated multiple times, once for each "reasonForTakeDown", as shown below.
So in the above example, if we get the SUM(Downloads) for ProductKey 1, the value is 300 whereas the staging value is 100. The same goes for ProductKey 2: SUM(Downloads) from the fact table is 150 whereas the staging value is 50, and the same for ProductKey 3 as well. Downloads and revenue numbers are getting inflated due to the multiple rows.
I have a Tabular model based on the above fact table, with measures "Downloads" and "Revenue" whose aggregation type is set to "SUM". For products like 1, 2 and 3 we are getting inflated numbers from the cube, compared to the staging table, due to the ReasonForRemovalKey column.
The problem now is to avoid the over-counting of downloads and revenue while keeping the multiple rows in the fact table for the ReasonForRemovalKey scenario. Can you help me write a DAX measure so that I always get the correct number of downloads and revenue, matching the staging table?
Approaches tried:
1) Tried changing the aggregation type from SUM to AVG, but this aggregation type gives the average of downloads when the user selects multiple products. Suppose the user selects ProductKeys 1 and 2: the AVG downloads measure would be (100+100+100+50+50+50) / 6 = 75, whereas the original downloads sum is 100+50 = 150. So AVG aggregation won't work for me.
2) I cannot divide the Downloads/NetRevenue value for a product by the count of rows for that product in the fact table so that SUM(Downloads) and SUM(NetRevenue) give correct values. I cannot do this because, if I want the downloads for a TCR and a product, I would get that TCR's share of the download measure, not the total downloads.
Hope I explained my scenario well. Thanks in advance.
Splitting the message, as the forum supports only two images per message.
Contd:
Now the scenarios that need to be satisfied are as below.
If the user selects a product, then we should get only the latest download and revenue out of the multiple rows above for that product from the tabular cube, i.e. as shown in the snip below.
Suppose the user selects another column instead of Product from the tabular cube, for example a date as a dimension on the row labels with Downloads and Revenue as the measures; then it should show only the max downloads and revenue for a product, i.e.
Now the same applies to Requester as with date. If the user selects a requester, then we should count the downloads and revenue only once, i.e. the max downloads and revenue for a product and a requestor, as shown in the table below.
RequestorKey Downloads Revenue
1 100 50
2 150 40
3 105 55
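Outside of DAX, the "count each key only once" intent above can be sketched as a pre-aggregation step. This is a hypothetical Python illustration (the keys and numbers follow the table above), not the DAX solution being asked for:

```python
# Hypothetical rows after the reason split: one (key, downloads, revenue)
# tuple per reason, so the same figures repeat for each key
rows = [(1, 100, 50), (1, 100, 50), (1, 100, 50),
        (2, 150, 40), (2, 150, 40),
        (3, 105, 55)]

# Keep only the max-downloads row per key so repeats don't inflate totals
best = {}
for key, downloads, revenue in rows:
    if key not in best or downloads > best[key][0]:
        best[key] = (downloads, revenue)

result = [(k, d, r) for k, (d, r) in sorted(best.items())]
print(result)                         # [(1, 100, 50), (2, 150, 40), (3, 105, 55)]
print(sum(d for _, d, _ in result))   # 355, not the inflated 705
```

A DAX equivalent would take MAX per key and then sum over the keys; the sketch only shows the dedupe logic itself.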
Hope I have now explained the scenarios clearly. Thank you once again for your time. -
Unable to deploy BPEL Process -- Says If you have installed a patch to the
I have a simple BPEL process that calls an ESB flow deployed on the same server.
I am facing the following issue when trying to deploy the BPEL process.
The ESB flow is deployed and is working perfectly.
Configuration:
JDeveloper 10.1.3.5
SOA Suite 10g Release 3
Downloaded from:
http://www.oracle.com/technetwork/middleware/ias/downloads/index.html
Here I downloaded this file:
http://download.oracle.com/otn/nt/ias/101310/soa_windows_x86_101310_disk1.zip
When I compile the BPEL project, it compiles fine:
Compiling...
Compiling C:\JDeveloper\10gMywork\OrderProcessingApp\OrderDistributionProcess\bpel\OrderDistributionProcess.bpel
*[BPEL Compiler] Initializing compiler for first time use...*
BPEL suitcase generated in: C:\JDeveloper\10gMywork\OrderProcessingApp\OrderDistributionProcess\output\bpel_OrderDistributionProcess_v2011_09_01__39889.jar
copying bpel/OrderDistributionProcess.wsdl to output directory
copying bpel/OrderSystem_OrderSearchService_RSRef.wsdl to output directory
converting, through native2ascii, build.properties to output directory
*[11:05:04 AM] Successful compilation: 0 errors, 0 warnings.*
Now when I choose (Right Click on Project) -> Deploy -> BPEL Process Deployer,
I get this error:
Buildfile: C:\JDeveloper\10gMywork\OrderProcessingApp\OrderDistributionProcess\build.xml
[java] Java Result: 1
validateTask:
[echo]
| Validating workflow
[validateTask] Validation of workflow task definitions is completed without errors
deployProcess:
[echo]
| Deploying bpel process OrderDistributionProcess on localhost, port 8888
[deployProcess] Deploying process C:\JDeveloper\10gMywork\OrderProcessingApp\OrderDistributionProcess\output\bpel_OrderDistributionProcess_1.0.jar
BUILD FAILED
C:\JDeveloper\10gMywork\OrderProcessingApp\OrderDistributionProcess\build.xml:78: A problem occured while connecting to server "localhost" using port "8888": bpel_OrderDistributionProcess_1.0.jar failed to deploy. Exception message is: ORABPEL-05215
Error while loading process.
The process domain encountered the following errors while loading the process "OrderDistributionProcess" (revision "1.0"): null.
If you have installed a patch to the server, please check that the bpelcClasspath domain property includes the patch classes.at com.collaxa.cube.engine.deployment.CubeProcessHolder.bind(CubeProcessHolder.java:285)
at com.collaxa.cube.engine.deployment.DeploymentManager.deployProcess(DeploymentManager.java:804)
at com.collaxa.cube.engine.deployment.DeploymentManager.deploySuitcase(DeploymentManager.java:670)
at com.collaxa.cube.ejb.impl.BPELDomainManagerBean.deploySuitcase(BPELDomainManagerBean.java:445)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.evermind.server.ejb.interceptor.joinpoint.EJBJoinPointImpl.invoke(EJBJoinPointImpl.java:35)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor$1.run(JAASInterceptor.java:31)
at com.evermind.server.ThreadState.runAs(ThreadState.java:620)
at com.evermind.server.ejb.interceptor.system.JAASInterceptor.invoke(JAASInterceptor.java:34)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.TxRequiredInterceptor.invoke(TxRequiredInterceptor.java:50)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
at DomainManagerBean_RemoteProxy_4bin6i8.deploySuitcase(Unknown Source)
at com.oracle.bpel.client.BPELDomainHandle.deploySuitcase(BPELDomainHandle.java:317)
at com.oracle.bpel.client.BPELDomainHandle.deployProcess(BPELDomainHandle.java:339)
at deployHttpClientProcess.jspService(_deployHttpClientProcess.java:376)
at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:453)
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:591)
at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:515)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
at oracle.security.jazn.oc4j.JAZNFilter$1.run(JAZNFilter.java:396)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAsPrivileged(Subject.java:517)
at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:410)
at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:621)
at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:216)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
at java.lang.Thread.run(Thread.java:595)
Total time: 6 seconds
I checked the host port, i.e., 8888 here, from JDeveloper; and it says that the tests passed.
Help me please.
Balu,
It seems you are trying to deploy an example for B2B. Please check that your B2B is up and that the AQs are up and accessible; you can also check your SOA dehydration DB.
You can find some setup details and an example at: http://www.oracle.com/technology/products/integration/b2b/pdf/B2B_TU_001_B2B_BPEL.pdf
-JMehta -
Are Cube organized materialized view with Year to Date calculated measure eligible for Query Rewrite
Hi,
I would appreciate it if someone could help me with a question regarding cube-organized MVs (OLAP).
Is a cube-organized materialized view with calculated measures based on time series (year to date, inception to date), e.g.
SUM(FCT_POSITION.BASE_REALIZED_PNL) OVER (HIERARCHY DIM_CALENDAR.CALENDAR BETWEEN UNBOUNDED PRECEDING AND CURRENT MEMBER WITHIN ANCESTOR AT DIMENSION LEVEL DIM_CALENDAR."YEAR")
eligible for query rewrite, or are these considered too advanced for query rewrite purposes?
I was hoping to find an example where a YTD window function on the physical fact/dimension tables is rewritten by the optimizer to the cube-organized MV, but without much success.
Thanks in advance
I don't think this is possible.
(My own reasoning)
Part of the reason query rewrite works for base measures only (not calculated measures in OLAP, such as YTD) is that, although the data is staged in OLAP, its lineage is understandable via the OLAP cube mappings. That dependency/source identification is lost when we build calculated measures in OLAP, and I think it is almost impossible for the optimizer to understand the finer points of an OLAP calculation (defined via OLAP DML or an OLAP expression) and also match it with the equivalent calculation using a relational SQL expression. The difficulty may be because both the OLAP YTD and the relational YTD defined via SUM() OVER (PARTITION BY ... ORDER BY ...) have many non-standard variations of the same calculation/definition. E.g. you can choose to use, or not to use, the option relating to IGNORE NULLS within the SQL analytic function, while the OLAP definition may use NASKIP or NASKIP2.
I tried to search for query rewrite solutions for inventory stock calculations (aggregation along time = last value along time) to see if an OLAP cube with its aggregation option set to "Last non-NA hierarchical value" works as an alternative to the relational calculation. My experience has been that it is not possible. You can do it relationally or you can do it via OLAP, but your application needs to be aware of each and make the appropriate backend SQL/call. In such cases, you cannot make OLAP (AWs/cubes/dimensions) appear magically behind the scenes to fulfil the query execution while appearing to work relationally.
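For reference, the relational YTD flavour mentioned above, SUM() OVER (PARTITION BY ... ORDER BY ...), can be sketched on made-up data; SQLite via Python is used here purely as a stand-in for the relational side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fct (yr INTEGER, mon INTEGER, pnl INTEGER)")
conn.executemany("INSERT INTO fct VALUES (?, ?, ?)",
                 [(2003, 1, 10), (2003, 2, 5), (2003, 3, 7), (2004, 1, 3)])

# Year-to-date running total: the accumulation restarts at each year boundary
ytd = conn.execute("""
    SELECT yr, mon,
           SUM(pnl) OVER (PARTITION BY yr ORDER BY mon) AS ytd_pnl
    FROM fct ORDER BY yr, mon
""").fetchall()
print(ytd)   # [(2003, 1, 10), (2003, 2, 15), (2003, 3, 22), (2004, 1, 3)]
```

The OLAP expression in the original question computes the same kind of running total, but along a hierarchy rather than a plain ORDER BY, which is part of why matching the two automatically is hard.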
HTH
Shankar