OLAP_TABLE performance issues

Please advise. OLAP_TABLE performance is slow and the SELECT did not return data after running for > 10 hours.
ISSUES:
(1) I started with only one measure and one dimension in the OLAP_TABLE statement and it returned data. However, when I added the remaining measures and dimensions, the statement never returned data. The end user requires all 20 measures and all 7 dimensions. How can I overcome the performance issue and get the data from the cube? Please HELP!
(2) The cube is compressed. I have read articles saying that OLAP_TABLE cannot use the LOOP keyword when the cube is compressed. I am already using the MODEL keyword and the statement did not come back. How can I improve the performance and return data?
(3) Can I create a materialized view (MV) using OLAP_TABLE in 10g? If not, is there any way to get around it without an MV? The VIEW is killing performance badly (it simply did not return and had to be killed manually).
(4) I also used the “AWM plug-in” to create the relational view over the cube. However, all 20 measures and 7 dimensions must be included. The limitmap exceeded 4000 characters in a single PL/SQL function parameter and caused a limitmap error, so the “AWM plug-in” did not work for me.
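A common workaround sketch for the 4000-character limit in point (4): store the limitmap in a text variable inside the AW and have OLAP_TABLE dereference it with &, so the SQL-side literal stays short no matter how long the limitmap is. The variable name my_cube_limitmap below is hypothetical; the AWM plug-in log further down uses the same technique with its own variable:

```sql
-- In OLAP DML (e.g. via DBMS_AW.EXECUTE), define a text variable
-- in the AW and assign the full limitmap text to it (hypothetical names):
--   DEFINE my_cube_limitmap VARIABLE TEXT
--   my_cube_limitmap = '<full limitmap text>'
-- OLAP_TABLE then dereferences the AW variable with &, avoiding
-- the long string literal in the CREATE VIEW statement:
CREATE OR REPLACE VIEW vw_cube_bi_nrdb_vw_fl AS
SELECT *
FROM TABLE(OLAP_TABLE(
  'bi_nrdb DURATION SESSION',
  null, null,
  '&(my_cube_limitmap)'));
```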
Appreciate all of your help!
(*) DATA
1 Fact; 7 Dimensions; 1 cube; 20 measures
POSITION_FACT - 9,387,384 rows
DIM_BUSINESS_DAY - 2 rows - 1 hierarchy - 2 levels - 1 attribute
DIM_INSTRUMENT_TYPE - 16 rows - 1 hierarchy - 2 levels - 1 attribute
DIM_RISK_TYPE - 21 rows - 1 hierarchy - 2 levels - 1 attribute
DIM_BOOK - 673 rows - 1 hierarchy - 10 levels - 2 attributes
DIM_CURVE - 4,869 rows - 1 hierarchy - 6 levels - 1 attribute
DIM_REFERENCE_ENTITY - 3,756 rows - 1 hierarchy - 7 levels - 3 attributes
DIM_POSITION - 745,957 rows - 1 hierarchy - 2 levels - 9 attributes
(*) CUBE CREATED IN AWM:
fully pre-aggregated;
used global composites;
used compression by integer;
partitioned by business date;
took a minimum of ~30 minutes to build the cube.
ENVIRONMENT:
(*) Oracle Database 10g Release 2 Patch Set 2 (10.2.0.3.0) 64-bit
(*) 3 products installed in Oracle Home: Interim patches: 5746153 (OLAP 'A' patch), 5556081, 5557962
(*) AWM 10.2.0.3.0A
(*) SQL
CREATE OR REPLACE VIEW vw_cube_bi_nrdb_vw_fl AS
SELECT * FROM TABLE(OLAP_TABLE(
'bi_nrdb DURATION SESSION', null, null,
'MEASURE am_value_total AS NUMBER FROM bi_nrdb_vw_total
MEASURE am_value_03m AS NUMBER FROM bi_nrdb_vw_am_03m
MEASURE am_value_06m AS NUMBER FROM bi_nrdb_vw_am_06m
MEASURE am_value_09m AS NUMBER FROM bi_nrdb_vw_am_09m
MEASURE am_value_01y AS NUMBER FROM bi_nrdb_vw_am_01y
MEASURE am_value_18m AS NUMBER FROM bi_nrdb_vw_am_18m
MEASURE am_value_02y AS NUMBER FROM bi_nrdb_vw_am_02y
MEASURE am_value_03y AS NUMBER FROM bi_nrdb_vw_am_03y
MEASURE am_value_04y AS NUMBER FROM bi_nrdb_vw_am_04y
MEASURE am_value_05y AS NUMBER FROM bi_nrdb_vw_am_05y
MEASURE am_value_06y AS NUMBER FROM bi_nrdb_vw_am_06y
MEASURE am_value_07y AS NUMBER FROM bi_nrdb_vw_am_07y
MEASURE am_value_08y AS NUMBER FROM bi_nrdb_vw_am_08y
MEASURE am_value_09y AS NUMBER FROM bi_nrdb_vw_am_09y
MEASURE am_value_10y AS NUMBER FROM bi_nrdb_vw_am_10y
MEASURE am_value_12y AS NUMBER FROM bi_nrdb_vw_am_12y
MEASURE am_value_15y AS NUMBER FROM bi_nrdb_vw_am_15y
MEASURE am_value_20y AS NUMBER FROM bi_nrdb_vw_am_20y
MEASURE am_value_30y AS NUMBER FROM bi_nrdb_vw_am_30y
MEASURE am_value_40y AS NUMBER FROM bi_nrdb_vw_am_40y
DIMENSION dim_business FROM dim_business_day WITH
ATTRIBUTE dt_business FROM dim_business_day_long_description
DIMENSION dim_risk_type FROM dim_risk_type WITH
ATTRIBUTE id_risk_type FROM dim_risk_type_long_description
DIMENSION dim_instrument_type FROM dim_instrument_type WITH
ATTRIBUTE id_instrument_type FROM dim_instrument_type_long_description
DIMENSION dim_book FROM dim_book WITH
ATTRIBUTE nm_dim_book FROM dim_book_long_description
ATTRIBUTE trader FROM dim_book_trader
DIMENSION dim_reference_entity FROM dim_reference_entity WITH
ATTRIBUTE nm_dim_reference_entity FROM dim_reference_entity_long_description
ATTRIBUTE nm_spn_moody_rating FROM dim_reference_entity_spn_moody_rating
ATTRIBUTE nm_spn_sp_rating FROM dim_reference_entity_spn_sp_rating
DIMENSION dim_position FROM dim_position WITH
ATTRIBUTE id_buysell FROM dim_position_buysell
ATTRIBUTE id_coupon FROM dim_position_coupon
ATTRIBUTE id_cusip FROM dim_position_cusip
ATTRIBUTE id_isin FROM dim_position_isin
ATTRIBUTE id_instrument_name FROM dim_position_instrument_name
ATTRIBUTE id_maturity FROM dim_position_maturity
ATTRIBUTE id_mtm FROM dim_position_mtm
ATTRIBUTE id_notional FROM dim_position_notional'))
MODEL
DIMENSION BY (dim_business, dt_business, dim_risk_type, id_risk_type, dim_instrument_type, id_instrument_type,
dim_book, nm_dim_book, trader, dim_reference_entity, nm_dim_reference_entity, nm_spn_moody_rating,
nm_spn_sp_rating, dim_position, id_buysell, id_coupon, id_cusip, id_isin, id_instrument_name,
id_maturity, id_mtm, id_notional)
MEASURES (am_value_total, am_value_03m, am_value_06m, am_value_09m, am_value_01y,
am_value_18m,am_value_02y, am_value_03y, am_value_04y, am_value_05y,
am_value_06y, am_value_07y,am_value_08y, am_value_09y, am_value_10y,
am_value_12y, am_value_15y, am_value_20y, am_value_30y, am_value_40y)
RULES UPDATE SEQUENTIAL ORDER ();

(1a) Thank you so much! The “CREATE OR REPLACE VIEW vw_cube_bi_nrdb_vw_fl…” SQL I provided in my previous post is the OLAP_TABLE statement I ran for > 10 hrs and killed manually. Please advise.
I have a business requirement to display all 7 dimensions and 20 measures for reporting, so I can't really filter my dimensions much.
(1b) Separately, I also followed your advice to add a WHERE clause after I created the VIEW vw_cube_bi_nrdb_vw_fl; see the statement below and the error I received.
SQL> select * from vw_cube_bi_nrdb_vw_fl
where dt_business = '06/23/2008'
and ID_RISK_TYPE ='CR01'
and ID_INSTRUMENT_TYPE='VANILLA CDS'
and dim_book='BOOK_63272279'
AND dim_reference_entity='GFN_0113182'
and dim_position='3645636' ;
select * from vw_cube_bi_nrdb_vw_fl
ERROR at line 1:
ORA-32638: Non unique addressing in MODEL dimensions
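For reference, ORA-32638 generally means the MODEL clause cannot treat the DIMENSION BY columns as uniquely addressing each cell; attribute columns such as dt_business or id_risk_type are not unique keys. A hedged sketch of the usual repair, following the "unique single reference" pattern suggested later in this thread: dimension only by the dimension columns, move the attributes into MEASURES, and declare UNIQUE SINGLE REFERENCE so the addressing check is skipped:

```sql
-- Sketch only: keep just the dimension key columns in DIMENSION BY,
-- move attribute columns into MEASURES, and add UNIQUE SINGLE REFERENCE.
SELECT *
FROM TABLE(OLAP_TABLE('bi_nrdb DURATION SESSION', null, null,
  '... same limitmap as in the view ...'))
MODEL
  DIMENSION BY (dim_business, dim_risk_type, dim_instrument_type,
                dim_book, dim_reference_entity, dim_position)
  MEASURES (dt_business, id_risk_type, id_instrument_type, nm_dim_book,
            trader, am_value_total
            /* ...remaining attributes and measures... */)
  UNIQUE SINGLE REFERENCE
  RULES UPDATE SEQUENTIAL ORDER ();
```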
(4) I received the error below when I ran the “AWM plug-in” to create a VIEW from AWM. I have already unchecked the fields I do not need, keeping only the measures and dimensions I need. Sorry for the long error below:
AUG-21-2008 12:14:47: . Creating view for cube BI_NRDB_AGGR
AUG-21-2008 12:14:48: ..... generating limitmap for cube
AUG-21-2008 12:14:48: ... generating limitmap for cube
AUG-21-2008 12:14:48: ..... mapping table out of date for cube BI_NRDB_AGGR. Updating mapping table.
AUG-21-2008 12:14:48: ... populating mapping for cube BI_NRDB_AGGR
AUG-21-2008 12:14:48: ..... clearing mappings for the cube
AUG-21-2008 12:14:48: ..... collecting metadata for measures in cube
AUG-21-2008 12:14:48: ..... retrieving dimensions for the cube. Need limitmap for each dimension.
AUG-21-2008 12:14:48: ... generating limitmap for DIM_BUSINESS_DAY
AUG-21-2008 12:14:48: ..... mapping table out of date for dimension DIM_BUSINESS_DAY. Updating mapping table.
AUG-21-2008 12:14:48: ... populating dimension map for DIM_BUSINESS_DAY
AUG-21-2008 12:14:48: ..... clearing mappings for the dimension
AUG-21-2008 12:14:48: ..... retrieving physical objects
AUG-21-2008 12:14:48: ..... checking for value hierarchies
AUG-21-2008 12:14:48: ..... retrieving label for dimension levels
AUG-21-2008 12:14:48: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:48: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:48: ..... retrieving hierarchy information
AUG-21-2008 12:14:48: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:48: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:48: ... completed populating mapping for DIM_BUSINESS_DAY
AUG-21-2008 12:14:48: ... generating limitmap for DIM_INSTRUMENT_TYPE
AUG-21-2008 12:14:48: ..... mapping table out of date for dimension DIM_INSTRUMENT_TYPE. Updating mapping table.
AUG-21-2008 12:14:48: ... populating dimension map for DIM_INSTRUMENT_TYPE
AUG-21-2008 12:14:48: ..... clearing mappings for the dimension
AUG-21-2008 12:14:48: ..... retrieving physical objects
AUG-21-2008 12:14:48: ..... checking for value hierarchies
AUG-21-2008 12:14:48: ..... retrieving label for dimension levels
AUG-21-2008 12:14:48: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:48: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:48: ..... retrieving hierarchy information
AUG-21-2008 12:14:49: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:49: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:49: ... completed populating mapping for DIM_INSTRUMENT_TYPE
AUG-21-2008 12:14:49: ... generating limitmap for DIM_RISK_TYPE
AUG-21-2008 12:14:49: ..... mapping table out of date for dimension DIM_RISK_TYPE. Updating mapping table.
AUG-21-2008 12:14:49: ... populating dimension map for DIM_RISK_TYPE
AUG-21-2008 12:14:49: ..... clearing mappings for the dimension
AUG-21-2008 12:14:49: ..... retrieving physical objects
AUG-21-2008 12:14:49: ..... checking for value hierarchies
AUG-21-2008 12:14:49: ..... retrieving label for dimension levels
AUG-21-2008 12:14:49: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:49: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:49: ..... retrieving hierarchy information
AUG-21-2008 12:14:49: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:49: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:49: ... completed populating mapping for DIM_RISK_TYPE
AUG-21-2008 12:14:49: ... generating limitmap for DIM_BOOK
AUG-21-2008 12:14:49: ..... mapping table out of date for dimension DIM_BOOK. Updating mapping table.
AUG-21-2008 12:14:49: ... populating dimension map for DIM_BOOK
AUG-21-2008 12:14:49: ..... clearing mappings for the dimension
AUG-21-2008 12:14:49: ..... retrieving physical objects
AUG-21-2008 12:14:49: ..... checking for value hierarchies
AUG-21-2008 12:14:49: ..... retrieving label for dimension levels
AUG-21-2008 12:14:49: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:49: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:49: ..... retrieving hierarchy information
AUG-21-2008 12:14:49: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:49: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:50: ... completed populating mapping for DIM_BOOK
AUG-21-2008 12:14:50: ... generating limitmap for DIM_CURVE
AUG-21-2008 12:14:50: ..... mapping table out of date for dimension DIM_CURVE. Updating mapping table.
AUG-21-2008 12:14:50: ... populating dimension map for DIM_CURVE
AUG-21-2008 12:14:50: ..... clearing mappings for the dimension
AUG-21-2008 12:14:50: ..... retrieving physical objects
AUG-21-2008 12:14:50: ..... checking for value hierarchies
AUG-21-2008 12:14:50: ..... retrieving label for dimension levels
AUG-21-2008 12:14:50: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:50: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:50: ..... retrieving hierarchy information
AUG-21-2008 12:14:50: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:50: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:50: ... completed populating mapping for DIM_CURVE
AUG-21-2008 12:14:50: ... generating limitmap for DIM_REFERENCE_ENTITY
AUG-21-2008 12:14:50: ..... mapping table out of date for dimension DIM_REFERENCE_ENTITY. Updating mapping table.
AUG-21-2008 12:14:50: ... populating dimension map for DIM_REFERENCE_ENTITY
AUG-21-2008 12:14:50: ..... clearing mappings for the dimension
AUG-21-2008 12:14:50: ..... retrieving physical objects
AUG-21-2008 12:14:50: ..... checking for value hierarchies
AUG-21-2008 12:14:50: ..... retrieving label for dimension levels
AUG-21-2008 12:14:50: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:50: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:50: ..... retrieving hierarchy information
AUG-21-2008 12:14:50: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:50: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:51: ... completed populating mapping for DIM_REFERENCE_ENTITY
AUG-21-2008 12:14:51: ... generating limitmap for DIM_POSITION
AUG-21-2008 12:14:51: ..... mapping table out of date for dimension DIM_POSITION. Updating mapping table.
AUG-21-2008 12:14:51: ... populating dimension map for DIM_POSITION
AUG-21-2008 12:14:51: ..... clearing mappings for the dimension
AUG-21-2008 12:14:51: ..... retrieving physical objects
AUG-21-2008 12:14:51: ..... checking for value hierarchies
AUG-21-2008 12:14:51: ..... retrieving label for dimension levels
AUG-21-2008 12:14:51: ..... populating mapping info for the DIMENSION clause
AUG-21-2008 12:14:51: ..... populating mapping info for the INHIERARCHY clause
AUG-21-2008 12:14:51: ..... retrieving hierarchy information
AUG-21-2008 12:14:51: ..... populating mapping info for the HIERARCHY and FAMILYREL clauses for hierarchy PRIM
AUG-21-2008 12:14:51: ..... populating mapping info for the ATTRIBUTE clause
AUG-21-2008 12:14:51: ... completed populating mapping for DIM_POSITION
AUG-21-2008 12:14:51: ... completed generating limitmap for cube BI_NRDB_AGGR
AUG-21-2008 12:14:51: ..... assigning limitmap to variable in the AW
AUG-21-2008 12:14:51: ..... BI_NRDB_AGGR_CUBE_LIMITMAP found. Will update the variable
AUG-21-2008 12:14:51: ..... defining view BI_DEMO.BI_NRDB_AGGR_CUBEVIEW over the cube
AUG-21-2008 12:14:51: **
AUG-21-2008 12:14:51: ** ERROR: View not created.
AUG-21-2008 12:14:51: ** CAUSE: CREATE VIEW statement failed
AUG-21-2008 12:14:51: ORA-36804: The OLAP_TABLE function encountered an error while parsing the LIMITMAP.
AUG-21-2008 12:14:51: *** DEBUG INFORMATION ***
AUG-21-2008 12:14:51: VIEW CREATION DDL (truncated after 3900 characters):
AUG-21-2008 12:14:51: CREATE OR REPLACE VIEW BI_DEMO.BI_NRDB_AGGR_CUBEVIEW AS
SELECT *
FROM table(OLAP_TABLE ('BI_DEMO.BI_NRDB duration session',
'&(BI_NRDB_AGGR_CUBE_LIMITMAP)'))
MODEL
DIMENSION BY (
DIM_BUSINESS_DAY,
DIM_INSTRUMENT_TYPE,
DIM_RISK_TYPE,
DIM_BOOK,
DIM_CURVE,
DIM_REFERENCE_ENTITY,
DIM_POSITION)
MEASURES (
DIM_BUSINE_PRIM_PRNT,
DIM_BUSINE_ALL_BUSINE_LVLDSC,
DIM_BUSINE_BUSINESS_D_LVLDSC,
DIM_BUSINE_LDSC,
DIM_BUSINE_LEVEL,
DIM_INSTRU_PRIM_PRNT,
DIM_INSTRU_ALL_INSTRU_LVLDSC,
DIM_INSTRU_INSTRUMENT_LVLDSC,
DIM_INSTRU_LDSC,
DIM_INSTRU_LEVEL,
DIM_RISK_T_PRIM_PRNT,
DIM_RISK_T_ALL_RISK_T_LVLDSC,
DIM_RISK_T_RISK_TYPE_LVLDSC,
DIM_RISK_T_LDSC,
DIM_RISK_T_LEVEL,
DIM_BOOK_PRIM_PRNT,
DIM_BOOK_ALL_BOOK_LVLDSC,
DIM_BOOK_SYSTEM_LVLDSC,
DIM_BOOK_REGION_LVLDSC,
DIM_BOOK_BUSINESS2_LVLDSC,
DIM_BOOK_BUSINESS1_LVLDSC,
DIM_BOOK_BUSINESS_LVLDSC,
DIM_BOOK_DESK_LVLDSC,
DIM_BOOK_RISKSTRIPE_LVLDSC,
DIM_BOOK_SUBRISKSTR_LVLDSC,
DIM_BOOK_BOOK_LVLDSC,
DIM_BOOK_TRADER,
DIM_BOOK_LDSC,
DIM_BOOK_LEVEL,
DIM_CURVE_PRIM_PRNT,
DIM_CURVE_ALL_CURVE_LVLDSC,
DIM_CURVE_CURRENCY_LVLDSC,
DIM_CURVE_SENIORITY_LVLDSC,
DIM_CURVE_CURVE_OWNE_LVLDSC,
DIM_CURVE_CURVE_NAME_LVLDSC,
DIM_CURVE_CURVE_LVLDSC,
DIM_CURVE_LDSC,
DIM_CURVE_LEVEL,
DIM_REFERE_ALL_GFN_LVLDSC,
DIM_REFERE_GFN_INDUST_LVLDSC,
DIM_REFERE_GFN_COUNTR_LVLDSC,
DIM_REFERE_GFN_LVLDSC,
DIM_REFERE_SPN_INDUST_LVLDSC,
DIM_REFERE_SPN_COUNTR_LVLDSC,
DIM_REFERE_SPN_LVLDSC,
DIM_REFERE_LDSC,
DIM_REFERE_SPN_SP_RATING,
DIM_REFERE_SPN_MOODY_RATING,
DIM_REFERE_LEVEL,
DIM_REFERE_PRIM_PRNT,
DIM_POSITI_PRIM_PRNT,
DIM_POSITI_ALL_POSITI_LVLDSC,
DIM_POSITI_TRADE_ID_LVLDSC,
DIM_POSITI_COUPON,
DIM_POSITI_MATURITY,
DIM_POSITI_INSTRUMENT_NAME,
DIM_POSITI_NOTIONAL,
DIM_POSITI_BUYSELL,
DIM_POSITI_CUSIP,
DIM_POSITI_ISIN,
DIM_POSITI_LDSC,
DIM_POSITI_MTM,
DIM_POSITI_LEVEL,
TOTAL,
AM_40Y,
AM_30Y,
AM_20Y,
AM_15Y,
AM_12Y,
AM_10Y,
AM_09Y,
AM_08Y,
AM_07Y,
AM_06Y,
AM_05Y,
AM_04Y,
AM_03Y,
AM_02Y,
AM_18M,
AM_01Y,
AM_09M,
AM_06M,
AM_03M,
OLAP_CALC
) RULES UPDATE SEQUENTIAL ORDER()
AUG-21-2008 12:14:51: LIMITMAP (truncated after 3900 characters):
AUG-21-2008 12:14:51: DIMENSION DIM_BUSINESS_DAY FROM DIM_BUSINESS_DAY WITH -
HIERARCHY DIM_BUSINE_PRIM_PRNT FROM DIM_BUSINESS_DAY_PARENTREL(DIM_BUSINESS_DAY_HIERLIST \'PRIM\') -
INHIERARCHY DIM_BUSINESS_DAY_INHIER -
FAMILYREL DIM_BUSINE_ALL_BUSINE_LVLDSC, -
DIM_BUSINE_BUSINESS_D_LVLDSC -
FROM DIM_BUSINESS_DAY_FAMILYREL(DIM_BUSINESS_DAY_LEVELLIST \'ALL_BUSINESS_DAY\'), -
DIM_BUSINESS_DAY_FAMILYREL(DIM_BUSINESS_DAY_LEVELLIST \'BUSINESS_DAY\') -
LABEL DIM_BUSINESS_DAY_LONG_DESCRIPTION -
ATTRIBUTE DIM_BUSINE_LDSC FROM DIM_BUSINESS_DAY_LONG_DESCRIPTION -
ATTRIBUTE DIM_BUSINE_LEVEL FROM DIM_BUSINESS_DAY_LEVELREL-
DIMENSION DIM_INSTRUMENT_TYPE FROM DIM_INSTRUMENT_TYPE WITH -
HIERARCHY DIM_INSTRU_PRIM_PRNT FROM DIM_INSTRUMENT_TYPE_PARENTREL(DIM_INSTRUMENT_TYPE_HIERLIST \'PRIM\') -
INHIERARCHY DIM_INSTRUMENT_TYPE_INHIER -
FAMILYREL DIM_INSTRU_ALL_INSTRU_LVLDSC, -
DIM_INSTRU_INSTRUMENT_LVLDSC -
FROM DIM_INSTRUMENT_TYPE_FAMILYREL(DIM_INSTRUMENT_TYPE_LEVELLIST \'ALL_INSTRUMENT_TYPE\'), -
DIM_INSTRUMENT_TYPE_FAMILYREL(DIM_INSTRUMENT_TYPE_LEVELLIST \'INSTRUMENT_TYPE\') -
LABEL DIM_INSTRUMENT_TYPE_LONG_DESCRIPTION -
ATTRIBUTE DIM_INSTRU_LDSC FROM DIM_INSTRUMENT_TYPE_LONG_DESCRIPTION -
ATTRIBUTE DIM_INSTRU_LEVEL FROM DIM_INSTRUMENT_TYPE_LEVELREL-
DIMENSION DIM_RISK_TYPE FROM DIM_RISK_TYPE WITH -
HIERARCHY DIM_RISK_T_PRIM_PRNT FROM DIM_RISK_TYPE_PARENTREL(DIM_RISK_TYPE_HIERLIST \'PRIM\') -
INHIERARCHY DIM_RISK_TYPE_INHIER -
FAMILYREL DIM_RISK_T_ALL_RISK_T_LVLDSC, -
DIM_RISK_T_RISK_TYPE_LVLDSC -
FROM DIM_RISK_TYPE_FAMILYREL(DIM_RISK_TYPE_LEVELLIST \'ALL_RISK_TYPE\'), -
DIM_RISK_TYPE_FAMILYREL(DIM_RISK_TYPE_LEVELLIST \'RISK_TYPE\') -
LABEL DIM_RISK_TYPE_LONG_DESCRIPTION -
ATTRIBUTE DIM_RISK_T_LDSC FROM DIM_RISK_TYPE_LONG_DESCRIPTION -
ATTRIBUTE DIM_RISK_T_LEVEL FROM DIM_RISK_TYPE_LEVELREL-
DIMENSION DIM_BOOK FROM DIM_BOOK WITH -
HIERARCHY DIM_BOOK_PRIM_PRNT FROM DIM_BOOK_PARENTREL(DIM_BOOK_HIERLIST \'PRIM\') -
INHIERARCHY DIM_BOOK_INHIER -
FAMILYREL DIM_BOOK_ALL_BOOK_LVLDSC, -
DIM_BOOK_SYSTEM_LVLDSC, -
DIM_BOOK_REGION_LVLDSC, -
DIM_BOOK_BUSINESS2_LVLDSC, -
DIM_BOOK_BUSINESS1_LVLDSC, -
DIM_BOOK_BUSINESS_LVLDSC, -
DIM_BOOK_DESK_LVLDSC, -
DIM_BOOK_RISKSTRIPE_LVLDSC, -
DIM_BOOK_SUBRISKSTR_LVLDSC, -
DIM_BOOK_BOOK_LVLDSC -
FROM DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'ALL_BOOK\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'SYSTEM\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'REGION\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'BUSINESS2\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'BUSINESS1\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'BUSINESS\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'DESK\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'RISKSTRIPE\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'SUBRISKSTRIPE\'), -
DIM_BOOK_FAMILYREL(DIM_BOOK_LEVELLIST \'BOOK\') -
LABEL DIM_BOOK_LONG_DESCRIPTION -
ATTRIBUTE DIM_BOOK_TRADER FROM DIM_BOOK_TRADER -
ATTRIBUTE DIM_BOOK_LDSC FROM DIM_BOOK_LONG_DESCRIPTION -
ATTRIBUTE DIM_BOOK_LEVEL FROM DIM_BOOK_LEVELREL-
DIMENSION DIM_CURVE FROM DIM_CURVE WITH -
HIERARCHY DIM_CURVE_PRIM_PRNT FROM DIM_CURVE_PARENTREL(DIM_CURVE_HIERLIST \'PRIM\') -
INHIERARCHY DIM_CURVE_INHIER -
FAMILYREL DIM_CURVE_ALL_CURVE_LVLDSC, -
DIM_CURVE_CURRENCY_LVLDSC, -
DIM_CURVE_SENIORITY_LVLDSC, -
DIM_CURVE_CURVE_OWNE_LVLDSC, -
DIM_CURVE_CURVE_NAME_LVLDSC, -
DIM_CURVE_CURVE_LVLDSC -
FROM DIM_CURVE_FAMILYREL(DIM_CURVE_LEVELLIST \'ALL_CURVE\'), -
DIM_CURVE_FAMILYREL(DIM_CURVE_LEVELLIST \'CURRENCY\'), -
DIM_CURVE_FAMILYREL(DIM_CURVE_LEVELLIST \'SENIORITY\'), -
DIM_CURVE_FAMILYREL(DIM_CURVE_LEVELLIST \'CURVE_OWNER\'), -
DIM_CURVE_FAMILYREL(DIM_CURVE_LE
AUG-21-2008 12:14:51: **
AUG-21-2008 12:14:51: ** ERROR: Unable to create view over cube BI_NRDB_AGGR.
AUG-21-2008 12:14:51: ORA-36804: The OLAP_TABLE function encountered an error while parsing the LIMITMAP.

Similar Messages

  • Performance issues of SQL access to AW

    Hi Experts:
    I wonder whether there are performance issues when using SQL to access an AW. When using SQL to access cubes in an AW, the SQL queries the relational views for AW objects, and those views are based on the OLAP_TABLE function. We know that views based on any table function are not able to make use of indexes. That is, to query a subset of the data of such a view, we would have to full-scan the view and then apply the filter. Such a query plan always leads to bad performance.
    I want to know: when I use SQL to retrieve a small part of the data in an AW cube, will the Oracle OLAP engine retrieve all data in the cube and then apply the filter? If the Oracle OLAP engine only retrieves the data needed from the AW, how does it do it?
    Thanks.

    For most requests the OLAP_TABLE function can reduce the amount of data it produces by examining the rowsource tree, i.e. the WHERE clause. The data in Oracle OLAP is highly indexed, and there are steps a user can take to optimize the index use. Specifically, pin down the dimension(s) defined in the OLAP_TABLE function LIMITMAP via (NOT) IN lists on the dimension, parent, level or GID columns, and use valuesets for the INHIER object instead of a boolean object.
    In 10g, WHERE clauses like SALES > 50 are also processed prior to sending data out.
    For large requests (thousands of rows) performance can be a problem because the data is being sent through the object layer. In 10g this can be ameliorated by wrapping the OLAP_TABLE function call with a SQL MODEL clause. The SQL MODEL clause knows a bit more about the OLAP option and does not require us to pipe the data through the object layer.
    SQL MODEL example (note there is no ADT definition; the automatic ADT is used). This can be wrapped in a CREATE VIEW statement:
    select *
    from table(olap_table('myaw duration session', null, null,
      'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions'))
    model dimension by (d1, d2, d3, d4)
      measures (sales, any attributes, parent columns etc...)
      unique single reference rules update sequential order ();
    Example of a WHERE clause with the above select:
    SELECT *
    FROM (select *
          from table(olap_table('myaw duration session', null, null,
            'measure sales as number from aw_sales_obj dimension d1 as varchar2(10) from geog ...rest of dimensions'))
          model dimension by (d1, d2, d3, d4)
          measures (sales, any attributes, parent columns etc...)
          unique single reference rules update sequential order ())
    WHERE GEOG NOT IN ('USA', 'CANADA')
      and GEOG_GID = 1
      and TIME_PARENT IN ('2004')
      and CHANNEL = 'CATALOG'
      and SALES > 50000;

  • Report Performance Issue - Activity

    Hi gurus,
    I'm developing an Activity report using the transactional database (online, real-time objects).
    The purpose of the report is to list all contact-related activities, plus activities NOT related to a contact, by activity owner (user id).
    In order to fulfill that requirement I've created 2 reports:
    1) All activities related to a contact -- Report A
    pulls in Activity ID, Activity Type, Status, Contact ID
    2) All activities not related to a contact UNION all activities related to a contact (base report) -- Report B
    To get the list of activities not related to a contact I'm using an advanced filter based on the results of another request, which I think is the part that slows down the query:
    <Activity ID not equal to any Activity ID in Report B>
    Has anyone encountered a performance issue due to an advanced filter in Analytics before?
    Any input is really appreciated.
    Thanks in advance,
    Fina

    Fina,
    A union is always the last option. If you can get all records in one report, do not use a union.
    Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
    if contact id is null (or = 'Unspecified') then owner name else contact name
    Hopefully this helps.
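    The suggested column logic could be sketched as an Answers column formula roughly like this (the table and column names are placeholders, not the actual subject-area fields):

    ```sql
    -- Hypothetical column names; substitute your subject area's fields.
    CASE
      WHEN Contact."Contact ID" IS NULL
        OR Contact."Contact ID" = 'Unspecified'
      THEN Employee."Owner Name"
      ELSE Contact."Contact Name"
    END
    ```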

  • Report performance Issue in BI Answers

    Hi All,
    We have a performance issue with reports: a report runs for more than 10 mins. We took the query from the session log and ran it in the database, where it took no more than 2 mins. We have verified that proper indexes exist on the WHERE clause columns.
    Could anyone suggest how to improve the performance in BI Answers?
    Thanks in advance,

    I hope you don't have many CASE statements and complex calculations in Answers.
    The next thing you need to monitor is how many rows of data you are trying to retrieve from the query. If the volume is huge then it takes time to do the formatting in Answers, since you are dumping huge volumes of data. A database (like Teradata) initially returns something like 1-2000 records; if you hit "show all records" then even the db will take a fair amount of time if you are returning many records.
    Hope it helps.
    Thanks,
    Prash

  • BW BCS cube(0bcs_vc10 ) Report huge performance issue

    Hi Masters,
    I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
    Some of the queries are taking 15 to 20 minutes to execute the report.
    This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. Anyone who has faced a similar problem, please advise how you tackled it, and please describe in detail the analysis approach you used to resolve it.
    Current service pack we are using is
    SAP_BW 350 0016 SAPKW35016
    FINBASIS 300 0012 SAPK-30012INFINBASIS
    BI_CONT 353 0008 SAPKIBIFP8
    SEM-BW 400 0012 SAPKGS4012
    Best of Luck
    Chris

    Ravi,
    I already did that; it is not helping performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
    Regards,
    Chris

  • Interested in performance issues? Read this! If you can explain it, you're a master Jedi!

    This is the question we will try to answer...
    What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
    I used PPBM5 as a benchmark testing template.
    All the data and logs have been collected using performance counters.
    First of all, let me describe my computer...
    Operating System
    Microsoft Windows 8 Pro 64-bit
    CPU
    Intel Xeon E5 2687W @ 3.10GHz
    Sandy Bridge-EP/EX 32nm Technology
    RAM
    Corsair Dominator Platinum 64.0 GB DDR3
    Motherboard
    EVGA Corporation Classified SR-X
    Graphics
    PNY Nvidia Quadro 6000
    EVGA Nvidia GTX 680   // Yes, I created bench stats for both card
    Hard Drives
    16.0GB Romex RAMDISK (RAID)
    556GB LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks with FastPath chip installed (RAID 0)
    I have other RAIDs installed, but they are not relevant for the present post...
    PSU
    Corsair 1000 Watts
    After many days of testing, I want to share my results with the community and comment on them.
    CPU Introduction
    I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all results precisely in a graph (see picture 1).
    Intro: I tested my E5 Xeon 2687W (8 cores with Hyper-Threading - 16 threads) to find out whether programs can make full use of it. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
    The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
    Comment: I put 3 IO measures (CPU, disk, RAM) on the graph of my computer during the test...
    (picture 1)
    Disk Introduction
    I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all results precisely in a graph (see picture 2).
    Intro: I tested my 556GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6Gb/s, 5 disks with FastPath chip installed) to find out whether I can reach maximum disk usage (0% idle time).
    The result: As you can see in picture 2, yes, I can push my drive to its maximum at ~1.2 GB/sec read/write, steady!
    Comment: I put 3 IO measures (CPU, disk, RAM) on the graph of my computer during the test to see the impact of transferring many GB of data over ~10 sec...
    (picture 2)
    Now I know my limits! It's time to dig deeper into the subject!
    PPBM5 (H.264) Result
    I rendered the sequence (H.264) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%; it hovers around 50%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate seems to be a wave (up and down), probably caused by (encode time... write... encode time... write...) // It's ok, ~5 MB/sec transfer rate!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 39 GB of RAM free after the test! // Excellent
    ~65 threads opened by Adobe Media Encoder (good; threads are a sign that the program is trying to use many cores!)
    GPU load on the card also seems to be a wave (up and down), ~40% GPU usage during the encoding process.
    GPU RAM usage is 1.2 GB (but with the GTX 680, no problem, and with the Quadro 6000's 6 GB of RAM, no problem!)
    Comment/Question: CPU is free (50%), disks are free (99%), GPU is free (60%), RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    Other: The Quadro 6000 & GTX 680 give the same result!
    (picture 3)
    PPBM5 (Disk Test) Result (RAID LSI)
    I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
    The result :
    My CPU is not used at 100%.
    My disk waves up and down, but stays far, far from its limit!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (buffering time... write... buffering time... write...) // It's ok, ~375 MB/sec peak transfer rate! Easy!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
    ~48 threads opened by Adobe Media Encoder (good; threads are a sign that the program is trying to use many cores!)
    GPU load on the card = 0 (this kind of encoding does not use the GPU)
    GPU RAM usage is 400 MB (not used for encoding)
    Comment/Question: CPU is free (65%), disks are free (60%), GPU is free (100%), RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why???? Is there some time delay in the encoding process?
    (picture 4)
    PPBM5 (Disk Test) Result (Direct in RAMDrive)
    I rendered the same sequence (Disk Test) using Adobe Media Encoder directly to my RAM drive.
    Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! In the same picture, look at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), yet I don't even reach 30% disk usage. CPU is idle (70%), disk is idle (100%), GPU is idle (100%) and RAM is free (63%). // This kind of result leaves me REALLY confused. It smells like a bug and a big problem with hardware and IO usage in CS6!
    (picture 5)
    PPBM5 (MPEG-DVD) Result
    I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
    The result :
    My CPU is not used at 100%.
    My disk is totally idle!
    All process usage is idle except the Adobe Media Encoder process.
    The transfer rate waves up and down again, probably caused by (encoding time... write... encoding time... write...) // It's ok, ~2 MB/sec transfer rate! A real joke!
    CPU power management gives 100% of the clock to the CPU during the encoding process (it's ok, the clock is stable during the process).
    RAM: more than enough! 40 GB of RAM free after the test! // Excellent
    ~80 threads opened by Adobe Media Encoder (a lot of threads, but that's ok in a multi-threaded app!)
    GPU load on the card = 100 (this uses the maximum of my GPU)
    GPU RAM usage is 1 GB
    Comment/Question: CPU is free (70%), disks are free (98%), GPU is loaded (MAX), RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest IO (the video card's GPU).
    Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
    (picture 6)
    Encoding a single FULL HD AVCHD clip to H.264 Result (Premiere Pro CS6)
    You can see the result in the picture.
    Comment/Question: CPU is free (55%), disks are free (99%), GPU is free (90%), RAM is free (65%); my computer is not pushed to its limit during the encoding process.  Why????   Adobe Premiere seems to have some bug with thread management.  My hardware is idle!  I understand AVCHD can be very difficult to decode, but where is the waste?  My computer is willing, but the software is not!
    (picture 7)
    Render composition using the 3D ray tracer in After Effects CS6
    You can see the result in the picture.
    Comment: the GPU seems to be the bottleneck when using After Effects.  CPU is free (99%), disks are free (98%), memory is free (60%), and it depends on the settings and the type of project.
    Other: the Quadro 6000 & GTX 680 give the same rendering time for the composition.
    (picture 8)
    Conclusion
    There is nothing you can do (I think) with CS6 to get better performance right now.  The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card.  Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance from it).  I haven't used a Tesla card with my Quadro, but currently both Premiere Pro & After Effects don't use multiple GPUs.  I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
    Premiere Pro: I'm speechless!  Premiere Pro is not able to get the maximum performance out of my computer.  Not just 10% or 20%, but 60% on average.  I'm a programmer; multi-threaded apps are difficult to manage, and I can understand Adobe's programmers.  But if anybody has a comment about this post, tricks, or any kind of solution, please comment.  It seems to be a bug...
    Thank you.

    Patrick,
    I can't explain everything, but let me give you some background as I understand it.
    The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
    The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
    As to your test results, there are some other noteworthy things to mention. 3D Ray tracing in AE is not very good in using all CUDA cores. In fact it is lousy, it only uses very few cores and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
    The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
    You see very low GPU usage during the H.264 test, since there are only a few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where rescaling is going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above, and that is the reason my own Disk I/O result is only 33 seconds with the current test; but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold. An effective write speed of 1,686 MB/s.
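For what it's worth, the effective write speed quoted above checks out arithmetically (class and method names here are mine, just for illustration):

```java
// Sanity-check the effective write speed: data written divided by elapsed time.
public class WriteSpeed {
    public static double mbPerSecond(double megabytes, double seconds) {
        return megabytes / seconds;
    }

    public static void main(String[] args) {
        // 37,092 MB written in 22 s by the direct export method
        System.out.printf("%.0f MB/s%n", mbPerSecond(37092, 22)); // prints "1686 MB/s"
    }
}
```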
    There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
    Just my $ 0.02

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It's a pre-production system and its performance is degrading badly every day. While checking the system I came to know that the program buffer's swaps are increasing every day, hurting the hit ratio. So we were asked to change the parameter abap/buffersize from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB of RAM available; the server is HP-UX, with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes way too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and it's normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                HitRatio %   Alloc. KB  Freesp. KB  % Free Sp.  Dir. Size  FreeDirEnt  % Free Dir     Swaps     DB Accs
    Nametab (NTAB)
      Table definition        99,60       6.798                             20.000                              29.532     153.221
      Field definition        99,82      31.562         784        2,61     20.000       6.222       31,11      17.246      41.248
      Short NTAB              99,94       3.625       2.446       81,53      5.000       2.801       56,02           0       2.254
      Initial records         73,95       6.625         998       16,63      5.000         690       13,80      40.069      49.528
    Program                   97,66     300.000       1.074        0,38     75.000      67.177       89,57     219.665     725.703
    CUA                       99,75       3.000         875       36,29      1.500       1.401       93,40      55.277       2.497
    Screen                    99,80       4.297       1.365       33,35      2.000       1.811       90,55         119       3.214
    Calendar                 100,00         488         361       75,52        200          42       21,00           0         158
    OTR                      100,00       4.096       3.313      100,00      2.000       2.000      100,00           0
    Tables
      Generic Key             99,17      29.297       1.450        5,23      5.000         350        7,00       2.219   3.085.633
      Single record           99,43      10.000       1.907       19,41        500         344       68,80          39     467.978
    Export/import             82,75       4.096          43        1,30      2.000         662       33,10     137.208
    Exp./ Imp. SHM            89,83       4.096         438       13,22      2.000       1.482       74,10           0

    SAP Memory        Curr.Use %  CurUse[KB]  MaxUse[KB]  In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio %
    Roll area               2,22       5.832      22.856     131.072     131.072  IDs              96,61
    Page area               1,08       2.832      24.144      65.536     196.608  Statement        79,00
    Extended memory        22,90     958.464   1.929.216   4.186.112           0                    0,00
    Heap memory                            0           0   1.473.767           0                    0,00

    Call Stati       HitRatio %   ABAP/4 Req  ABAP Fails  DBTotCalls  AvTime[ms]  DBRowsAff.
      Select single       88,59   63.073.369   5.817.659   4.322.263           0  57.255.710
      Select              72,68  284.080.387           0  13.718.442           0  32.199.124
      Insert               0,00      151.955       5.458     166.159           0     323.725
      Update               0,00      378.161      97.884     395.814           0     486.880
      Delete               0,00      389.398     332.619     415.562           0     244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • RE: Case 59063: performance issues w/ C TLIB and Forte3M

    Hi James,
    Could you give me a call, I am at my desk.
    I had meetings all day and couldn't respond to your calls earlier.
    -----Original Message-----
    From: James Min [mailto:jminbrio.forte.com]
    Sent: Thursday, March 30, 2000 2:50 PM
    To: Sharma, Sandeep; Pyatetskiy, Alexander
    Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
    Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
    Hello,
    I just want to reiterate that we are very committed to working on
    this issue, and that our goal is to find out the root of the problem. But
    first I'd like to narrow down the avenues by process of elimination.
    Open Cursor is something that is commonly used in today's RDBMS. I
    know that you must test your query in ISQL using some kind of execute
    immediate, but Sybase should be able to handle an open cursor. I was
    wondering if your Sybase expert commented on the fact that the server is
    not responding to a commonly used command like 'open cursor'. According to
    our developer, we are merely following the API from Sybase, and open cursor
    is not something that particularly slows down a query for several minutes
    (except maybe the very first time). The logs show that Forte is waiting for
    a status from the DB server. Actually, using prepared statements and open
    cursor ends up being more efficient in the long run.
    Some questions:
    1) Have you tried to do a prepared statement with open cursor in your ISQL
    session? If so, did it have the same slowness?
    2) How big is the table you are querying? How many rows are there? How many
    are returned?
    3) When there is a hang in Forte, is there disk-spinning or CPU usage in
    the database server side? On the Forte side? Absolutely no activity at all?
    We actually have a Sybase set-up here, and if you wish, we could test out
    your database and Forte PEX here. Since your queries seem to be running
    off of only one table, this might be the best option, as we could look at
    everything here, in house. To do this:
    a) BCP out the data into a flat file. (character format to make it portable)
    b) we need a script to create the table and indexes.
    c) the Forte PEX file of the app to test this out.
    d) the SQL statement that you issue in ISQL for comparison.
    If the situation warrants, we can give a concrete example of
    possible errors/bugs to a developer. Dial-in is still an option, but to be
    able to look at the TOOL code, database setup, etc. without the limitations
    of dial-up may be faster and more efficient. Please let me know if you can
    provide this, as well as the answers to the above questions, or if you have
    any questions.
    Regards,
    At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
    James, Ken:
    FYI, see attached response from our Sybase expert, Dani Sasmita. She has
    already tried what you suggested and results are enclosed.
    ++
    Sandeep
    -----Original Message-----
    From: SASMITA, DANIAR
    Sent: Wednesday, March 29, 2000 6:43 PM
    To: Pyatetskiy, Alexander
    Cc: Sharma, Sandeep; Tenerelli, Mike
    Subject: Re: FW: Case 59063: Select using LIKE has performance
    issues
    w/ CTLIB and Forte 3M
    We did that trick already.
    When it is hanging, I can see what is doing.
    It is doing OPEN CURSOR. But not clear the exact statement of the cursor
    it is trying to open.
    When we run the query directly to Sybase, not using Forte, it is clearly
    not opening any cursor.
    And running it directly to Sybase many times, the response is always
    consistently fast.
    It is just when the query runs from Forte to Sybase, it opens a cursor.
    But again, in the Forte code, Alex is not using any cursor.
    In trying to capture the query, we even tried to audit any statement coming
    to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
    ==============================================
    James Min
    Technical Support Engineer - Forte Tools
    Sun Microsystems, Inc.
    1800 Harrison St., 17th Fl.
    Oakland, CA 94612
    james.min@sun.com
    510.869.2056
    ==============================================
    Support Hotline: 510-451-5400
    CUSTOMERS open a NEW CASE with Technical Support:
    http://www.forte.com/support/case_entry.html
    CUSTOMERS view your cases and enter follow-up transactions:
    http://www.forte.com/support/view_calls.html

    Earthlink wrote:
    Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
    Well, we're missing a lot here.
    Like:
    - a database version
    - how did you test
    - what data do you have, how is it distributed, indexed
    and so on.
    If you want to find out what's going on then use a TRACE with wait events.
    All necessary steps are explained in these threads:
    HOW TO: Post a SQL statement tuning request - template posting
    http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
    Another nice one is RUNSTATS:
    http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701

  • Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?

    Thanks Kelly,
    The answers would be the following:
    1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
    Assuming all will be populated, and this would apply to all final material specs in the system, which could be ~25% of all material specs.
    The cells will be numeric, free text, drop-downs, and some calculated numeric.
    Are we reaching the limits for UI performance?
    Thanks

  • IOS 8.1+ Performance Issue

    Hello,
    I encountered a serious performance bug in an Adobe AIR iOS application on devices running iOS 8.1 or later. Within approximately 1-2 minutes the fps drops to 7 or lower without any interaction with the app. This is very noticeable: the app looks frozen for about 0.5 seconds. The bug doesn't appear in every session.
    Devices tested: iPad Mini iOS 8.1.1, iPhone 6 iOS 8.2. An iPod Touch 4 on iOS 6 works correctly.
    Air SDK versions: 15 and 17 tested.
    I can track the bug using Adobe Scout. There is a noticeable spike at frame time 1.16. The framerate drops to 7.0. The app spends much time in the function Runtime overhead. Sometimes the top activity is Running AS3 attached to frame or Waiting For Next Frame instead of Runtime overhead.
    The bug can be reproduced with an empty application with a single bitmap on the stage. Open the app and wait for two minutes and the bug should appear. If not, just close and relaunch the app.
    Bugbase link: Bug#3965160 - iOS 8.1+ Performance Issue
    Miska Savela

    Hi
    I'd already activated Messages and entered the 6-digit code I was presented with into my iPhone. I can receive text messages from non-iOS users on my iMac and can reply to those messages.
    I just can't send a new message from scratch to a non-iOS user :-s
    Thanks
    Baz

  • Returning multiple values from a called tabular form(performance issue)

    I hope someone can help with this.
    I have a form that calls another form to display a multiple-column tabular list of values (it needs to allow user sorting, so I could not use an LOV).
    The user selects one or more records from the list using check boxes. To detect the selected records I loop through the block looking for checked boxes and return those records to the calling form via a PL/SQL table.
    The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the list and return to the calling form, it takes a while (about 3-4 minutes) to get back with the selected values.
    I guess it is going through the block (all 5000 records) looking for checked boxes, and that is what is causing the noticeable pause.
    Is this normal given the data volumes I have, or are there perhaps better techniques or tricks I could use to improve performance? I am using Forms 6i.
    Sorry for being so long-winded and thanks in advance for any help.

    Try writing to your PL/SQL table when the user selects (or removing the record when they deselect) by using a WHEN-CHECKBOX-CHANGED trigger. This will eliminate the need for you to loop through a block with 5000 records and should improve your performance.
    I am not aware of any performance issues with PL/SQL tables in forms, but if you still have slow performance try using a shared record-group instead. I have used these in the past for exactly the same thing and had no performance problems.
    Hope this helps,
    Candace Stover
    Forms Product Management

  • Performance issues with class loader on Windows server

    We are observing some performance issues in our application. We are using WebLogic 11g with Java 6 on a Windows 2003 server.
    The thread dumps indicate many threads are waiting in queue for the native file methods:
    "[ACTIVE] ExecuteThread: '106' for queue: 'weblogic.kernel.Default (self-tuning)'" RUNNABLE
         java.io.WinNTFileSystem.getBooleanAttributes(Native Method)
         java.io.File.exists(Unknown Source)
         weblogic.utils.classloaders.ClasspathClassFinder.getFileSource(ClasspathClassFinder.java:398)
         weblogic.utils.classloaders.ClasspathClassFinder.getSourcesInternal(ClasspathClassFinder.java:347)
         weblogic.utils.classloaders.ClasspathClassFinder.getSource(ClasspathClassFinder.java:316)
         weblogic.application.io.ManifestFinder.getSource(ManifestFinder.java:75)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.application.utils.CompositeWebAppFinder.getSource(CompositeWebAppFinder.java:71)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.MultiClassFinder.getSource(MultiClassFinder.java:67)
         weblogic.utils.classloaders.CodeGenClassFinder.getSource(CodeGenClassFinder.java:33)
         weblogic.utils.classloaders.GenericClassLoader.findResource(GenericClassLoader.java:210)
         weblogic.utils.classloaders.GenericClassLoader.getResourceInternal(GenericClassLoader.java:160)
         weblogic.utils.classloaders.GenericClassLoader.getResource(GenericClassLoader.java:182)
         java.lang.ClassLoader.getResourceAsStream(Unknown Source)
         javax.xml.parsers.SecuritySupport$4.run(Unknown Source)
         java.security.AccessController.doPrivileged(Native Method)
         javax.xml.parsers.SecuritySupport.getResourceAsStream(Unknown Source)
         javax.xml.parsers.FactoryFinder.findJarServiceProvider(Unknown Source)
         javax.xml.parsers.FactoryFinder.find(Unknown Source)
         javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
         org.ajax4jsf.context.ResponseWriterContentHandler.<init>(ResponseWriterContentHandler.java:48)
         org.ajax4jsf.context.ViewResources$HeadResponseWriter.<init>(ViewResources.java:259)
         org.ajax4jsf.context.ViewResources.processHeadResources(ViewResources.java:445)
         org.ajax4jsf.application.AjaxViewHandler.renderView(AjaxViewHandler.java:193)
         org.apache.myfaces.lifecycle.RenderResponseExecutor.execute(RenderResponseExecutor.java:41)
         org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:140)
    From googling, this seems to be an issue with Java file handling on Windows servers, and I couldn't find a solution yet. Any recommendation or pointer is appreciated.

    Hi shubhu,
    I just analyzed your partial thread dump data. The problem is that the ajax4jsf framework's ResponseWriterContentHandler internally creates a new instance of DocumentBuilderFactory every time, triggering heavy IO contention because of class loader / JAR file search operations.
    Too many of these IO operations under heavy load will create excessive contention and severe performance degradation, regardless of the OS you are running your JVM on.
    Please review the link below and see if this is related to your problem. This is a known issue in the JBoss JIRA when using RichFaces / Ajax4jsf.
    https://issues.jboss.org/browse/JBPAPP-6166
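The usual mitigation, sketched below with class and method names of my own choosing (the real fix belongs inside the framework, not application code), is to call DocumentBuilderFactory.newInstance() once and reuse the factory, so the expensive JAXP FactoryFinder classloader/JAR scan happens only at startup instead of on every rendered response:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public final class CachedDocumentBuilderFactory {
    // Created once: the costly FactoryFinder / classloader resource lookup
    // (the getBooleanAttributes calls in the thread dump) runs only here.
    private static final DocumentBuilderFactory FACTORY =
            DocumentBuilderFactory.newInstance();

    private CachedDocumentBuilderFactory() {}

    // DocumentBuilderFactory is not guaranteed to be thread-safe, so guard
    // the (cheap) builder creation; each caller still gets its own builder.
    public static synchronized DocumentBuilder newBuilder()
            throws ParserConfigurationException {
        return FACTORY.newDocumentBuilder();
    }
}
```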
    Regards,
    P-H
    http://javaeesupportpatterns.blogspot.com/

  • Performance issues with flashed 7800GT (G5)

    Hey,
    I recently flashed a PC 7800GT with the 128K OEM NVIDIA ROM. It's one of the cards with a physical 128K ROM chip, so no worries there. However, it doesn't deliver the performance I expected. I have a Late 2005 2.0 GHz DC, 10 GB DDR2, and use Leopard 10.5.8.
    This is what OS X reports (translated from German):
    NVIDIA GeForce 7800 GT:
      Chipset Model:    GeForce 7800GT
      Type:    Display
      Bus:    PCIe
      Slot:    SLOT-1
      PCIe Lane Width:    x16
      VRAM (Total):    256 MB
      Vendor:    NVIDIA (0x10de)
      Device ID:    0x0092
      Revision ID:    0x00a1
      ROM Revision:    2152.2
    Performance issues are: I can't seem to reach frame rates that are anywhere near the results provided by barefeats. Quake 3 runs at approx. 200 fps at 1600x1200x32 (I expected 300+ fps); Quake 4 and UT2004 run okay, but not on high settings at high resolutions (so, not equivalent to what you would expect from the system's specs). The same goes for Colin McRae Rallye. Now, I remember reading about performance issues in 10.5.8 with the flashed ROM, since it doesn't seem to be a 1:1 copy of the original one. I didn't try it in Tiger since I don't exactly want to go back from Leopard. Am I right that this is probably an issue with the "OEM ROM" (from the MacElite)? Does anyone have the real deal in terms of 7800GT ROMs and could provide me with a link?
    Br

    Hi-
    If you send me an email via my website, I can send you a couple of ROMs that might work better.
    http://www.jcsenterprises.com/Japamacs_Page/All_Things_PPC.html
    Problem with the flashed 256 MB GT, though, is that Leopard runs slow.
    Bad driver interaction.....
    The 512 MB GTX is the way to go........

  • Performance Issues with 10.6.7 and External USB Drives

    I've had a few performance issues come up with the latest 10.6.7 update that seem to be related to external USB drives. I have a 2 TB USB drive that holds my iMovie content, and after the 10.6.7 update iMovie is almost unusable. Finder even seems slow when browsing files on this drive. It seems like any access to the drive is delayed in all applications. Before the update the performance was acceptable, but now it's almost unusable. Most of the files on this drive are large DV files.
    Anyone else experience this?

    Matt,
    If you want help, please start your own thread here:
    http://discussions.apple.com/forum.jspa?forumID=1339&start=0
    And if you aren't getting sufficient help for your iPhone in your previous thread, post a new topic here:
    http://discussions.apple.com/forum.jspa?forumID=1139
    You'll get a wider audience, and won't confuse the original poster. Performance issues can be caused by numerous issues as outlined in my FAQ*
    http://www.macmaps.com/Macosxspeed.html
    If every person who had a performance issue posted to this thread, we'd never find a solution for the initial poster. Let's isolate each case one by one. It is NOT necessarily the same issue, even if the symptoms are the same. There are numerous contributing factors at work with computers, and if we don't isolate them, we'll never get to the root cause.

  • Performance issues with LOV bindings in 3-tier BC4J architecture

    We are running BC4J and JClient (JDeveloper 9.0.3.4 / 9iAS 9.0.2) in a 3-tier architecture, and are having problems with performance.
    One of our problems is comboboxes with LOV bindings. The view objects that provide data for the LOV bindings contain simple queries on tables with only 4-10 rows, and there are no view links or entity objects on these views.
    Creating the LOV binding and setting the model for the combobox takes about 1 second per combobox.
    We have tried most of tips in http://otn.oracle.com/products/jdev/tips/muench/jclientperf/index.html, but they do not seem to help on our problem.
    The performance is OK (if not great) when the same code is running as 2-tier.
    Does anyone have any good suggestions?

    I can recommend that you look at the following two bugs in Metalink: Bug 2640945 and Bug 3621502
    They are related to the disabling of the TCP socket-level acknowledgement which slows down remote communications for EJB components using ORMI (the protocol used by Oracle OC4J) to communicate between remote EJB client and server.
    A BC4J Application Module deployed as an EJB suffers this same network latency penalty due to the TCP acknowledgement.
    A customer sent me information (that you'll see there as a part of Bug# 3621502) like this on a related issue:
    We found our application runs very slow in 3-tier mode (JClient, BC4J deployed
    as an EJB Session Bean on a 9iAS 9.0.2 enterprise edition server). We spent a lot
    of time tuning our code but that helped very little. Eventually, we found
    the problem seemed to happen at the TCP level. There is a 200 ms delay at the TCP
    level. After we read some documents about the Nagle algorithm, we disabled a
    registry key (TcpDelAckTicks) in Windows 2000 on both client and server. This
    makes our program a lot faster.
    Anyway, we think we should provide our clients a better solution other than
    changing the Windows registry for them; for example, there may be a way to disable
    Nagle's algorithm through java.net.Socket.setTcpNoDelay(true), in BC4J,
    or anywhere in our code. We have not figured that out yet.
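For reference, the socket-level switch the customer mentions looks like this on a plain java.net.Socket (class name is mine; where to hook this into the BC4J/ORMI stack is exactly the unresolved part):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class NoDelayClient {
    public static Socket connect(String host, int port) throws IOException {
        Socket s = new Socket();
        // Disable Nagle's algorithm so small request packets are sent
        // immediately instead of being held back waiting for the peer's
        // (possibly delayed) ACK; the option can be set before connecting.
        s.setTcpNoDelay(true);
        s.connect(new InetSocketAddress(host, port), 5000);
        return s;
    }
}
```

Note this is per-socket and client-side only; it avoids touching the Windows registry but only helps if the middleware exposes a way to configure its sockets.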
    Bug 2640945 was fixed in Oracle Application Server 10g (v9.0.4) and it now disables this TCP Acknowledgement on the server side in that release. In the BugDB, I see backport patches available for earlier 9.0.3 and 9.0.2 releases of IAS as well.
    Bug 3621502 requests that the same disabling also be performed on the client side by the ORMI code. I have received a test patch from development to try out, but haven't had the chance yet.
    The customer's workaround in the interim was to disable this TCP Acknowledgement at the OS level by modifying a Windows registry setting as noted above.
    See Also http://support.microsoft.com/default.aspx?kbid=328890
    "New registry entry for controlling the TCP Acknowledgment (ACK) behavior in Windows XP and in Windows Server 2003" which documents that the registry entry to change disable this acknowledgement has a different name in Windows XP and Windows 2003.
    Hope this info helps. It would be useful to hear back from you on whether this helps your performance issue.
