Rejected records in OLAP?
Hi, how can we see rejected records and the reason for rejection in Oracle OLAP, the way we could see the log file in Express?
I know the number of rejected records can be seen in the XML_LOAD_LOG table, but I need the full details, e.g. "this dimension value does not exist in the cube."
thanks
You can anticipate rejected records by querying the fact table (or view) that is mapped to the cube, and check the validity of its data before it even gets loaded into the cube:
select count(*) from [FACT_TABLE]
where [FACT_01_COLUMN_ID] not in (select [DIM_01_COLUMN_ID] from [DIM_01_TABLE])
   or [FACT_02_COLUMN_ID] not in (select [DIM_02_COLUMN_ID] from [DIM_02_TABLE]);
-- [FACT_TABLE]        is the table or view that is mapped to the cube
-- [FACT_nn_COLUMN_ID] is the fact-table column related to a given dimension table
-- [DIM_nn_COLUMN_ID]  is the column that is mapped to a dimension
-- [DIM_nn_TABLE]      is the table that is mapped to a dimension
(Note the OR: a record is rejected if any one of its keys is missing, so combining the conditions with AND would undercount.)
In my experience:
- If an ID from the fact table does not exist as a member in a dimension, the record is rejected.
- If an ID that is mapped to the cube is NULL, the record is ignored (not processed, not rejected).
- If the fact table contains duplicate records whose measure columns have different values, only one of those measure values is loaded into the cube (I'm not sure which one; probably the last instance).
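To get the kind of detail the original question asks for (which fact rows would be rejected, and for which key), an anti-join per dimension lists the offending key values rather than just counting them. A sketch using the same placeholder names as above:

```sql
-- List the fact keys that have no matching dimension member.
-- [FACT_TABLE], [FACT_01_COLUMN_ID], [DIM_01_TABLE], [DIM_01_COLUMN_ID]
-- are placeholders, as in the count query above; repeat per dimension.
SELECT f.[FACT_01_COLUMN_ID] AS missing_key,
       COUNT(*)              AS affected_rows
FROM   [FACT_TABLE] f
WHERE  NOT EXISTS (SELECT 1
                   FROM   [DIM_01_TABLE] d
                   WHERE  d.[DIM_01_COLUMN_ID] = f.[FACT_01_COLUMN_ID])
GROUP  BY f.[FACT_01_COLUMN_ID]
ORDER  BY affected_rows DESC;
```

NOT EXISTS also behaves sensibly when keys are NULL, whereas NOT IN returns no rows at all if the dimension column contains any NULLs.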
Similar Messages
-
11g Cube not showing any data with no Rejected records
Hi David ,
Strangely, one of my 11g cubes stopped showing data today: the build log shows all records rejected, yet when I look in the rejected-records table I find no rows. Not sure what is happening. When I pull the queries AWM generated out of CUBE_BUILD_LOG and run them against the database in the AWM schema, the records come back perfectly fine. I wonder whether the same query fires during the cube load but returns no data? My cube build script has only LOAD and AGGREGATE.
After maintaining my dimensions the dimension data looks fine, but no data is populated after cube maintenance. MVs are switched off across all dimensions and cubes.
I navigated to CUBE_OPERATIONS_LOG but cannot make sense of its content.
Any advice?
Thanks and Regards,
DxP

Hi David,
To be very frank, today is a very bad day... Please see my observations below:
I executed the queries below to make sure that no key value present in the fact is missing from its dimension. All of the queries below return no rows.
select distinct owner_postn_wid from w_synm_rx_t_f
minus
select distinct row_wid from postn_dh;

select distinct payer_type_wid from w_synm_rx_t_f
minus
select distinct row_wid from wc_ins_plan_dh;

select distinct market_wid from w_synm_rx_t_f
minus
select distinct row_wid from w_product_dh;

select distinct period_day_wid from w_synm_rx_t_f
minus
select distinct row_wid from w_daytime_d;

select distinct contact_wid from w_synm_rx_t_f
intersect
select distinct row_wid from w_person_d;

select distinct x_terr_type_wid from w_synm_rx_t_f
minus
select distinct row_wid from w_lov_d;
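The six checks above can be run as one statement. One caveat worth flagging: the CONTACT_WID check above uses INTERSECT rather than MINUS, which tests the opposite condition (keys present in both sets), so it would return no rows even if every contact key were missing from W_PERSON_D. A NOT EXISTS form avoids that slip; table and column names below are taken from the thread, so adjust to your schema:

```sql
-- One row per foreign key, counting fact rows with no matching dimension member.
SELECT 'owner_postn_wid' AS fk, COUNT(*) AS missing FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM postn_dh d WHERE d.row_wid = f.owner_postn_wid)
UNION ALL
SELECT 'payer_type_wid', COUNT(*) FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM wc_ins_plan_dh d WHERE d.row_wid = f.payer_type_wid)
UNION ALL
SELECT 'market_wid', COUNT(*) FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM w_product_dh d WHERE d.row_wid = f.market_wid)
UNION ALL
SELECT 'period_day_wid', COUNT(*) FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM w_daytime_d d WHERE d.row_wid = f.period_day_wid)
UNION ALL
SELECT 'contact_wid', COUNT(*) FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM w_person_d d WHERE d.row_wid = f.contact_wid)
UNION ALL
SELECT 'x_terr_type_wid', COUNT(*) FROM w_synm_rx_t_f f
  WHERE NOT EXISTS (SELECT 1 FROM w_lov_d d WHERE d.row_wid = f.x_terr_type_wid);
```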
============================
The queries below all return a count of 0, confirming no NULLs are present:
select count(1) from w_synm_rx_t_f where contact_wid is null;
select count(1) from w_synm_rx_t_f where owner_postn_wid is null;
select count(1) from w_synm_rx_t_f where payer_type_Wid is null;
select count(1) from w_synm_rx_t_f where period_day_wid is null;
select count(1) from w_synm_rx_t_f where X_TERR_TYPE_WID is null;
select count(1) from w_synm_rx_t_f where market_wid is null;
+++++++++++++++++++++++++++++++++
The cube build log has the entries below:
796 0 STARTED CLEAR VALUES MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.267000000 PM +05:30 JAVA 1 C 47141 67 0 1
796 0 COMPLETED CLEAR VALUES MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.267000000 PM +05:30 JAVA 1 C 47141 67 0 2
796 0 STARTED LOAD MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.283000000 PM +05:30 JAVA 1 C 47142 68 0 1
796 0 SQL LOAD MKT_SLS_CUBE CUBE "<SQL>
<![CDATA[
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T16_ROW_WID ALIAS_127,
T13_ROW_WID ALIAS_128,
T10_ROW_WID ALIAS_129,
T7_ROW_WID ALIAS_130,
T4_ROW_WID ALIAS_131,
T1_ROW_WID ALIAS_132,
SUM(T20_MKT_TRX) ALIAS_133,
SUM(T20_MKT_NRX) ALIAS_134
FROM
( SELECT /*+ no_rewrite */
T1."CONTACT_WID" T20_CONTACT_WID,
T1."MARKET_WID" T20_MARKET_WID,
T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
T1."MKT_NRX" T20_MKT_NRX,
T1."MKT_TRX" T20_MKT_TRX,
T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
FROM
NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
T20,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T16_ROW_WID
FROM
NN_OLAP_POC."W_DAYTIME_D" T1 )
T16,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T13_ROW_WID
FROM
NN_OLAP_POC."W_PERSON_D" T1 )
T13,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T10_ROW_WID
FROM
NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
T10,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T7_ROW_WID
FROM
NN_OLAP_POC."W_LOV_D" T1 )
T7,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T4_ROW_WID
FROM
NN_OLAP_POC."POSTN_DH" T1 )
T4,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T1_ROW_WID
FROM
NN_OLAP_POC."W_PRODUCT_DH" T1 )
T1
WHERE
((T20_PERIOD_DAY_WID = T16_ROW_WID)
AND (T16_ROW_WID = 20100101)
AND (T20_CONTACT_WID = T13_ROW_WID)
AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
AND (T20_MARKET_WID = T1_ROW_WID)
AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
(20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
GROUP BY
(T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
ORDER BY
T1_ROW_WID ASC NULLS LAST ,
T4_ROW_WID ASC NULLS LAST ,
T7_ROW_WID ASC NULLS LAST ,
T10_ROW_WID ASC NULLS LAST ,
T13_ROW_WID ASC NULLS LAST ,
T16_ROW_WID ASC NULLS LAST ]]>
</SQL>" NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.627000000 PM +05:30 JAVA 1 MAP1 C 47142 68 0 2
796 0 COMPLETED LOAD MKT_SLS_CUBE CUBE "<CubeLoad
LOADED="0"
REJECTED="4148617"/>" NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.486000000 PM +05:30 JAVA 1 C 47142 68 0 3
796 0 STARTED UPDATE MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.501000000 PM +05:30 JAVA 1 C 47143 69 0 1
796 0 COMPLETED UPDATE MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.548000000 PM +05:30 JAVA 1 C 47143 69 0 2
+++++++++++++++++
You can observe the clean rejection of 4 million rows, even though running the query above returns my data successfully.
I looked at the CUBE_REJECTED records, took a sample record, and plugged its values into the query above as filters; it returns the data fine, with my measures and dimension WIDs (PLEASE SEE THE FILTERS ON ROW_WID BELOW):
=========================
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T16_ROW_WID ALIAS_127,
T13_ROW_WID ALIAS_128,
T10_ROW_WID ALIAS_129,
T7_ROW_WID ALIAS_130,
T4_ROW_WID ALIAS_131,
T1_ROW_WID ALIAS_132,
SUM(T20_MKT_TRX) ALIAS_133,
SUM(T20_MKT_NRX) ALIAS_134
FROM
( SELECT /*+ no_rewrite */
T1."CONTACT_WID" T20_CONTACT_WID,
T1."MARKET_WID" T20_MARKET_WID,
T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
T1."MKT_NRX" T20_MKT_NRX,
T1."MKT_TRX" T20_MKT_TRX,
T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
FROM
NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
T20,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T16_ROW_WID
FROM
NN_OLAP_POC."W_DAYTIME_D" T1 )
T16,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T13_ROW_WID
FROM
NN_OLAP_POC."W_PERSON_D" T1 )
T13,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T10_ROW_WID
FROM
NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
T10,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T7_ROW_WID
FROM
NN_OLAP_POC."W_LOV_D" T1 )
T7,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T4_ROW_WID
FROM
NN_OLAP_POC."POSTN_DH" T1 )
T4,
( SELECT /*+ no_rewrite */
T1."ROW_WID" T1_ROW_WID
FROM
NN_OLAP_POC."W_PRODUCT_DH" T1 )
T1
WHERE
((T20_PERIOD_DAY_WID = T16_ROW_WID)
AND (T16_ROW_WID = 20100101)
AND (T20_CONTACT_WID = T13_ROW_WID)
AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
AND (T20_MARKET_WID = T1_ROW_WID)
AND T13_ROW_WID = 255811
AND T7_ROW_WID = 122
AND T4_ROW_WID =3
AND T1_ROW_WID=230
AND T10_ROW_WID = 26
AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
(20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
GROUP BY
(T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
ORDER BY
T1_ROW_WID ASC NULLS LAST ,
T4_ROW_WID ASC NULLS LAST ,
T7_ROW_WID ASC NULLS LAST ,
T10_ROW_WID ASC NULLS LAST ,
T13_ROW_WID ASC NULLS LAST ,
T16_ROW_WID ASC NULLS LAST
=================================
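When comparing a sample rejection against the rejected-records table, it helps to restrict to the failing build. A sketch, assuming the default rejected-records table created via DBMS_CUBE_LOG and assuming it carries a BUILD_ID column (verify the table name and columns with DESC in your environment):

```sql
-- 796 is the build id shown in the CUBE_BUILD_LOG excerpt above.
-- CUBE_REJECTED_RECORDS is the default name of the rejected-records
-- log table in 11.2; adjust if yours was created under another name.
SELECT *
FROM   cube_rejected_records
WHERE  build_id = 796;
```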
The XML export of the cube is below:
<!DOCTYPE Metadata [
<!ENTITY % BIND_VALUES PUBLIC "OLAP BIND VALUES" "OLAP METADATA">
%BIND_VALUES;
]>
<Metadata
Version="1.2"
MinimumDatabaseVersion="11.2.0.1">
<Cube
ETViewName="MKT_SLS_CUBE_VIEW"
Name="MKT_SLS_CUBE">
<Measure>
<BaseMeasure
SQLDataType="NUMBER"
ETMeasureColumnName="TRX"
Name="TRX">
<Description
Type="LongDescription"
Language="AMERICAN"
Value="TRX">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="TRX">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="TRX">
</Description>
</BaseMeasure>
</Measure>
<Measure>
<BaseMeasure
SQLDataType="NUMBER"
ETMeasureColumnName="NRX"
Name="NRX">
<Description
Type="LongDescription"
Language="AMERICAN"
Value="NRX">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="NRX">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="NRX">
</Description>
</BaseMeasure>
</Measure>
<CubeMap
Name="MAP1"
IsSolved="False"
Query="W_SYNM_RX_T_F"
WhereClause="W_DAYTIME_D.ROW_WID = 20100101">
<MeasureMap
Name="TRX"
Measure="TRX"
Expression="W_SYNM_RX_T_F.MKT_TRX">
</MeasureMap>
<MeasureMap
Name="NRX"
Measure="NRX"
Expression="W_SYNM_RX_T_F.MKT_NRX">
</MeasureMap>
<CubeDimensionalityMap
Name="TIME"
Dimensionality="TIME"
MappedDimension="TIME.CALENDER.MONTHLY"
JoinCondition="W_SYNM_RX_T_F.PERIOD_DAY_WID = W_DAYTIME_D.ROW_WID"
Expression="W_DAYTIME_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="CUSTOMER"
Dimensionality="CUSTOMER"
MappedDimension="CUSTOMER.CUSTOMER_HIERARCHY.DETAIL_LEVEL"
JoinCondition="W_SYNM_RX_T_F.CONTACT_WID = W_PERSON_D.ROW_WID"
Expression="W_PERSON_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="INS_PLAN_DH"
Dimensionality="INS_PLAN_DH"
MappedDimension="INS_PLAN_DH.INS_PLAN.DETAIL"
JoinCondition="W_SYNM_RX_T_F.PAYER_TYPE_WID = WC_INS_PLAN_DH.ROW_WID"
Expression="WC_INS_PLAN_DH.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="LIST_OF_VALUES"
Dimensionality="LIST_OF_VALUES"
MappedDimension="LIST_OF_VALUES.LOV_HIERARCHY.DETAIL_LEVEL"
JoinCondition="W_LOV_D.ROW_WID = W_SYNM_RX_T_F.X_TERR_TYPE_WID"
Expression="W_LOV_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="POSITIONDH"
Dimensionality="POSITIONDH"
MappedDimension="POSITIONDH.POST_HIER.DETAIL"
JoinCondition="W_SYNM_RX_T_F.OWNER_POSTN_WID = POSTN_DH.ROW_WID"
Expression="POSTN_DH.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="PRODH"
Dimensionality="PRODH"
MappedDimension="PRODH.PRODHIER.DETAILLVL"
JoinCondition="W_SYNM_RX_T_F.MARKET_WID = W_PRODUCT_DH.ROW_WID"
Expression="W_PRODUCT_DH.ROW_WID">
</CubeDimensionalityMap>
</CubeMap>
<Organization>
<AWCubeOrganization
MVOption="NONE"
SparseType="COMPRESSED"
MeasureStorage="SHARED"
NullStorage="MV_READY"
CubeStorageType="NUMBER"
PrecomputePercent="35"
PrecomputePercentTop="0"
PartitionLevel="TIME.CALENDER.MONTHLY"
AW="&AW_NAME;">
<SparseDimension
Name="TIME"/>
<SparseDimension
Name="CUSTOMER"/>
<SparseDimension
Name="INS_PLAN_DH"/>
<SparseDimension
Name="LIST_OF_VALUES"/>
<SparseDimension
Name="POSITIONDH"/>
<SparseDimension
Name="PRODH"/>
<DefaultBuild>
<![CDATA[BUILD SPEC LOAD_AND_AGGREGATE (
LOAD NO SYNCH,
SOLVE
)]]>
</DefaultBuild>
</AWCubeOrganization>
</Organization>
<Dimensionality
Name="TIME"
ETKeyColumnName="TIME"
Dimension="TIME">
</Dimensionality>
<Dimensionality
Name="CUSTOMER"
ETKeyColumnName="CUSTOMER"
Dimension="CUSTOMER">
</Dimensionality>
<Dimensionality
Name="INS_PLAN_DH"
ETKeyColumnName="INS_PLAN_DH"
Dimension="INS_PLAN_DH">
</Dimensionality>
<Dimensionality
Name="LIST_OF_VALUES"
ETKeyColumnName="LIST_OF_VALUES"
Dimension="LIST_OF_VALUES">
</Dimensionality>
<Dimensionality
Name="POSITIONDH"
ETKeyColumnName="POSITIONDH"
Dimension="POSITIONDH">
</Dimensionality>
<Dimensionality
Name="PRODH"
ETKeyColumnName="PRODH"
Dimension="PRODH">
</Dimensionality>
<Description
Type="LongDescription"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<ConsistentSolve>
<![CDATA[SOLVE (
SUM
MAINTAIN COUNT
OVER ALL
)]]>
</ConsistentSolve>
</Cube>
</Metadata>
+++++++++++++++++++++++
I dropped the AW, created a new one from the exported XML, maintained all dimensions, and then rebuilt. Still have the issue :(
Anything you can highlight from the above?
Thanks,
DxP
Also, I suspect it may be an issue related to the error below, raised when I click on one of my Position_Hier views from AWM: even if I just select from that view in SQL Developer, it throws the error after displaying the first couple of rows (while paging down).
java.sql.SQLException: ORA-33674: Data block size 63 exceeds the maximum size of 60 bytes.
at oracle.olap.awm.util.jdbc.SQLWrapper.execute(Unknown Source)
at oracle.olap.awm.querydialog.PagedQueryDialog$1.construct(Unknown Source)
at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
at java.lang.Thread.run(Thread.java:595)
Edited by: e_**** on Aug 17, 2011 8:41 PM -
Rejected Records when building my cube
The SQL statement of the allocation works fine and gives the right result:
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T1_PUR_DATEORDERED ALIAS_1,
T1_PUR_CTRORIGIN_UID ALIAS_2,
SUM(T1_PUR_AMTFREIGHTBC) ALIAS_3
FROM
( SELECT /*+ no_rewrite */
T1."PUR_DATEORDERED" T1_PUR_DATEORDERED,
T1."PUR_CTRORIGIN_UID" T1_PUR_CTRORIGIN_UID,
T1."PUR_AMTFREIGHTBC" T1_PUR_AMTFREIGHTBC
FROM
CHDDL."ITS$QSI$BIVIEWPURCHASE" T1 )
T1
GROUP BY
(T1_PUR_CTRORIGIN_UID, T1_PUR_DATEORDERED)
ORDER BY
T1_PUR_CTRORIGIN_UID ASC NULLS LAST ,
T1_PUR_DATEORDERED ASC NULLS LAST
However, when I build it I always get 35570 records rejected as "invalid member of the target dimension".
In AWM 11.2 I see very strange characters listed; this is not the case with SQL*Plus. When I select from the rejected records I also get strange characters.
A DESC of my cube shows VARCHAR2(19), VARCHAR2(16), and NUMBER.
The direct query under SQL*Plus shows:
ALIAS_1 ALIAS_2 ALIAS_3
2008-12-03 00:00:00 zzz0000000000n4 0
2009-05-09 00:00:00 zzz0000000000n4 0
2009-05-11 00:00:00 zzz0000000000n4 0
2009-05-12 00:00:00 zzz0000000000n4 0
2011-05-25 00:00:00 zzz0000000000n4 0
2011-09-05 00:00:00 zzz0000000000n4 0
6 rows selected.
The rejected records show the following (the member values come back as garbage characters):
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
1
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
2
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
3
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
4
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
5
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
6
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
Dimension building works fine.
Any help would be appreciated.
Christof
Edited by: 883727 on Oct 3, 2011 9:14 PM

Hi there,
The only situation where records get rejected is when the relevant dimension values are not present and the fact record cannot be joined on one (or more) of its foreign keys.
Have you maintained all of your dimensions first?
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
RE: REJECTED RECORDS IN XI OR BW
I found a bug in the process between XI and BW.
Some rejected records occur when a transaction has a Voided line item.
I think I've found what may be a large part of what's contributing to these rejected records. It looks like if a transaction has a voided line item (but the transaction itself isn't voided) the whole transaction could be rejected by XI/BW because they are looking at the other codes (such as discount reason code, tax codes, etc) but they are ignoring the fact that VOID_CD = 1 for those line items.
For example, there may be a Return transaction with 3 line items, two of them have return reason codes, but one does not because it has been voided.
How should this be handled in SAP? Does SAP care about the voided items in general? Should XI skip validation on those ones and "push them through"? Or is there a special area that voided items can be sent to?
Where should I make the changes? In the mapping process in XI, or in the BAPI ZPOSDW_BAPIINPUT_PRE in BW?
We don't want the transaction to be rejected, only the voided item not to be processed (ours is an IDoc-to-file scenario).
**Please help out in this issue**
Thanks in advance,
Amarnath Reddy

>>I found a bug in the process between XI and BW.
You can report this to SAP by opening an OSS message on the Service Marketplace. :)
>>We don't want the transaction to be rejected..only the void item to not be processed
XI treats the whole payload as one single message; if any condition fails, it drops the whole payload.
You need to split your payload into several payloads, each containing only one line item. In that case only the items that are wrong will be dropped; the rest will go through properly.
This can be done via modules on the sender side, but since you are using an IDoc sender, you can't configure modules.
Try asking the same question in the BW/BI forum; it's possible they have some way to handle this in BW.
Thanks
Aamir -
Unable to see all rejected records in SAP BPC 10
Folks,
I am trying to validate and process a transformation file in BPC 10 NW.
The status shows 1877 rejected records.
I am able to see only 200 rejected records.
Even when I save it as a txt file or in Excel, it displays only 200 records.
I tried MAXREJECTCOUNT = -1 and then 2000.
Neither worked.
Regards,
Shrini

Hi Vadim,
Maybe I'm missing something:
If I run the import process, doesn't the data get submitted to the BPC cube?
If I only want to get the list of missing master data for manual input in BPC (so I won't get any rejected records), is there a way to do that?
Regards
Björn -
Post mapping for Reject records!!
hi
I would like to capture all the rejected records when I am loading the fact table. I know I can get this information from WB_RT_ERRORS and WB_RT_ERROR_SOURCES, but what I would like is a post-mapping process, with its action set to "on error", that captures all the rejected records into a flat file. Is there a way to identify rejects in the post-mapping process other than referring to WB_RT_ERRORS and WB_RT_ERROR_SOURCES at runtime? This is what the client is asking for; any help would be greatly appreciated.
Please reply to me directly as well, since I sometimes do not get emails through the distribution list.
Thanks in Anticipation,
Balaji

Reposting the response from M. Van Der Wiel:
2 comments:
- Ideally, you would explicitly capture the errors, and insert those into a separate table. This would enable you to run the mapping in set-based mode (traditionally this means: no error logging) for optimal performance, and you still get the errors. This does mean you would have
to explicitly design what may go wrong, so you should know what your data looks like. Your flat file could then be created out of the explicit error records, which is probably a bit easier (and faster) than to go from the WB_RT_ERRORS.
- The mapping errors out once the maximum number of errors is reached (as passed at runtime; is defaulted by the configuration setting). Anything in between 0 and the maximum number of errors will result in a status warning.
To do what you want to do, you could indeed use the post mapping process, but perhaps you want to design a separate mapping to write the errors to a file, and use a process flow (with conditional sequencing) to run the second mapping (only if the first one fails or results in a warning). This may be a nicer solution than to write the code manually and implement it as a standalone post-mapping process.
Finally, notice that WB_RT_ERRORS and the like are not tables you should directly query (they will change in the future). Rather you should use the ALL_RT_<something> views to access the same information. Going forward, as the customer desires to migrate to a later release, it is
more likely that their customizations still work.
Thanks,
Mark.
PS.
Another possiblity - if the errors violate a key constraint - would be to configure the mapping target with the constraints parameter set to false and redirect the error records to a error table (this can be done in the mapping configuration - sources and targets section). This configuration will disable the constraints during the load and re-enable them after the load, putting the offending records in the error table. You can then download the records from the error table into a flat file in a separate mapping or in a post-mapping process.
Regards:
Igor -
Rejected records into different Table.
Hi ,
I have a .ctl file and a .txt file. In the normal scenario, I use SQL*Loader to load data into my table using the ctl file, and all rejected records are sent to the bad file. But I want all the unloaded/bad records put into a different table.
Do we have any attribute in sqlldr to accomplish this?
Regards
Bala.

Bala,
You can load the bad file (the rejected records) into a different table with all columns defined as VARCHAR2, using sqlldr or an external table. Define another table with all columns as VARCHAR2 and load it using a separate sqlldr control file; you can also consider using an external table to load the bad file.
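One way to implement the external-table suggestion above is to lay the table directly over the .bad file, with every column declared VARCHAR2 so nothing is rejected a second time. A sketch with hypothetical names (MY_DIR, the column list, and the file name are assumptions, not from the thread):

```sql
-- Hypothetical external table over the SQL*Loader bad file.
-- MY_DIR must be an Oracle directory object pointing at the folder
-- containing the .bad file; adjust columns and delimiter to your data.
CREATE TABLE bad_records_ext (
  col1 VARCHAR2(4000),
  col2 VARCHAR2(4000),
  col3 VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('mydata.bad')
)
REJECT LIMIT UNLIMITED;

-- Then copy the rows into a permanent table for analysis:
-- INSERT INTO bad_records SELECT * FROM bad_records_ext;
```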
Regards -
Max # of rejected records in the UJFS file?
I am loading transactional data via the LoadInfoProvider standard DM package.
According to the package log, it tries to load 4,910,004 records. It rejects 780,639 of them.
When I download the rejected records file from UJFS, it only has 100,001 records. I've tried this a couple times and same result. Always a cap of 100,001 rejected records in the file.
Does anyone know if there's a parameter I can modify to get all rejected records in the log, or am I hitting a system limit?
Thanks.
-Amy

Hi All,
I am trying to find a solution, referring to some old threads. While doing transactional data loads, is there any way to make BPC accept all the records without validating for duplicates?
We have tried Validate Records set to 'No' in the transformation file and Max Reject Count set to 0. The duplicate records still get rejected.
Since we load from a transactional-system flat file, there are many records where all the fields, including the amount, are identical: for example, a sale of the same quantity and product occurred twice in the same period. How do we address this?
Shouldn't the cube be aggregating similar records? Your thoughts please.
Thanks,
Santosh -
Rejected record message from Essbase Studio load (3355)
I receive the following error when deploying an outline via Essbase Studio to an ASO cube (ver 11.1.2):
\\Record #36798 - Member XXX does not exist. Reference member column types require the field to be an existing member (3355)
As I read the error (by the way, can someone help me find the table of rejected-record codes?), it is rejecting the assignment of an attribute to a member because the member does not exist.
What could be the reason that I did not receive an "Error adding member XXX" rejection beforehand, or something else indicating there was an error adding the member originally? I am puzzled because I cannot find anything special about the source data compared to the many rows that loaded successfully.

Hi All,
Regarding this error, if we are talking about Hyperion Essbase version 11.1.2.1.000 and later, the issue has mostly been verified as unpublished bug 12967639.
As documented in KM:
Unable to Save ASO Outline After Renaming Members, "Error(1007072) Member [xxx] tagged as <REFER does not have a member to refer to" [ID 1465850.1]
There is also another document that covers all of this for ASO:
Oracle Hyperion Essbase and Aggregate Storage (ASO) [ID 1291202.1]
Thanks,
Shaker -
Resize the number of rejected record in sqlldr
Hi,
I am using sqlldr to load my data. However, the data is known to contain lots of errors, so when I load it, sqlldr stops after 50 rejected records and won't load any more.
Question: is there a way to specify the number of rejected records allowed in sqlldr?
The file I need to load has at most 200 bad records.
Thank you very much.

Set the ERRORS parameter to a high value.
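Concretely, that suggestion looks like this on the command line (the userid, control, and log file names are illustrative; since the poster says the file has at most 200 bad records, ERRORS=200 or any larger value prevents the default 50-error abort):

```shell
# ERRORS raises SQL*Loader's abort threshold (the default is 50).
sqlldr userid=scott/tiger control=load_data.ctl log=load_data.log errors=200
```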
-
Getting calculated records in OLAP
Hi all,
I am new to OLAP and have a task to achieve: I want to know how to get calculated records in OLAP.
The calculations depend on the records. For example, if the records are ASSETS, multiply a and b; if LIABILITY, sum all c.
I await your help.
Thanks

I assume that you have downloaded AWM (Analytic Workspace Manager).
Once you have created/loaded dimensions/hierarchies and stored measures (in cubes) using AWM, you then create calculated measures (also called Derived Measures or formulas) using the stored measures.
Every OLAP engine is a calculation and aggregation engine, so you don't "load" calculated data. It's calculated on the fly at reporting time.
Read this section in OLAP documentation:
http://docs.oracle.com/cd/E11882_01/olap.112/e17123/calculations.htm
especially this part:
http://docs.oracle.com/cd/E11882_01/olap.112/e17123/calculations.htm#OLAUG9195
The question that you are asking is very simple to handle in OLAP, using attribute and CASE statement.
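As a sketch of the CASE-based approach just described: a calculated measure in AWM can branch on a dimension attribute. All names below are hypothetical (the thread gives no schema), and the exact expression syntax should be checked against the OLAP calculation documentation linked above:

```sql
-- Hypothetical calculated-measure expression for a FINANCE cube:
-- branch on an account-type attribute of the ACCOUNT dimension.
-- ACCOUNT_TYPE_ATTR, and the measures A, B, C, are illustrative names.
CASE ACCOUNT.ACCOUNT_TYPE_ATTR
  WHEN 'ASSETS'    THEN FINANCE.A * FINANCE.B
  WHEN 'LIABILITY' THEN FINANCE.C
  ELSE NULL
END
```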
Feel free to ask more questions.
- Nasar -
Dimension Build Load Rule Reject Record
Hi,
I'm trying to create a load rule that builds the Accounts dimension from a flat file. I would like to exclude any account that starts with "S" from being added to the dimension. I am using the parent/child build method for this load rule, and the Reject Record setting with the condition that any string containing S is rejected, but for some reason when I load, it doesn't do anything and makes EAS freeze up. My flat file looks like this:
New Accounts,S50000,Accounts - S50000
New Accounts,60000,Accounts - 60000
New Accounts,S45000,Accounts - S45000

Hi,
Your posting does not tell me what you entered in the reject record setting to try to achieve the desired result. If you entered "S" and used the "Contains" setting it should have worked providing you know your data and don't have occurrences of this character possible other than in the first ordinal position. If you tried to use a wildcard asterisk callout, Essbase load rule select/reject logic does not take wildcard inputs (like "S*").
Instead, a tried and true trick of Essbase load rules is to make a copy of the field in question, split it up and take background select/reject action on just the first character of a reference copy of the original field. This is done by using the field "create on join" method and simply joining the field to itself (an unintuitive but effective way to make a copy of a field for use in the load rule settings). Having done this, now split the copy of the field after one character. The remaining field section can be set to ignore for dim builds (or data loads as you might have to do later to match your dim build results in data loads). Now you have a single character field for which you can set the reject setting of contains or equal to "S" and then... please get this part... ignore the field for dim builds in field properties. Essbase will not try to use this partial field to build anything since it is ignored, but the assigned reject logic on occurrences of the isolated single character field will still be interpreted.
This is old school Essbase but it still works. I am not here to say it is intuitive or that it represents the ultimate functionality in an ETL interface. Many with SQL skills would take care of this pre-load-rule in SQL string management. But this method is not that hard to apply and is effective if all you want to do is use load rule settings. Let me know if it helps.
Darrell -
hi!
i'm using awm to build cube, when loading the measures from sales_fact view in the global demo of oracle, log hints
"13:05:12 Finished Load of Measures: UNITS from Cube UNITS_CUBE.CUBE. Processed 0 Records. Rejected 222589 Records."
i'm assure awm user has privilege to select sales_fact view. but why rejected the records, needs help.thanks!
Best regards

Hi there,
The only situation where records get rejected is when the relevant dimension values are not present and the fact record cannot be joined on one (or more) of its foreign keys.
Have you maintained all of your dimensions first?
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Import Standard Purchase Orders Programs Rejects Records
Hi Guys
I have a simple PL/SQL Block to Insert Standard PO Orders into PO_HEADERS_INTERFACE Table.
The following is the code :
DECLARE
V_REQUEST_ID NUMBER:=0;
-- Start PL/SQL Block --
BEGIN
-- Insert Validated Records into PO Interface Tables --
INSERT INTO PO_HEADERS_INTERFACE
(INTERFACE_HEADER_ID,
BATCH_ID,
ACTION,
ORG_ID,
DOCUMENT_TYPE_CODE,
CURRENCY_CODE,
AGENT_ID,
VENDOR_NAME,
VENDOR_SITE_CODE,
VENDOR_CONTACT_ID,
SHIP_TO_LOCATION,
BILL_TO_LOCATION,
PAYMENT_TERMS,
FREIGHT_CARRIER,
FOB,
FREIGHT_TERMS,
APPROVAL_STATUS,
CREATION_DATE,
CREATED_BY,
LAST_UPDATE_DATE,
LAST_UPDATED_BY,
LAST_UPDATE_LOGIN)
VALUES
(1,
100,
'ORIGINAL',
204,
'STANDARD',
'USD',
13587,
'Advanced Network Devices',
'FRESNO',
1506,
'M1- Seattle',
'V1- New York City',
'Immediate',
'UPS',
'Origin',
'Due',
'INCOMPLETE',
'14-NOV-2008',
1008152,
'14-NOV-2008',
1008152,
1008152);
DBMS_OUTPUT.PUT_LINE('Header Records Inserted');
COMMIT;
END;
The records are successfully inserted into the PO_HEADERS_INTERFACE table, but when I run the Import Standard Purchase Orders program, passing my BATCH_ID = 100 as the parameter, the system rejects the records and does not update the PO_INTERFACE_ERRORS table.
Could anybody please guide me? This is quite urgent.
Regards
Nakul.V

I am not sure which release you are working with. Check whether you have the Diagnostic Tools responsibility; it has a diagnostic for the PO Documents Open Interface that reports warnings for errors in the staging-table data.
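For completeness, when the import program does write its errors, they land in the PO_INTERFACE_ERRORS table mentioned above. A minimal sketch for inspecting the latest rows; CREATION_DATE is the standard Oracle Applications audit column, but verify the full column list against your release before relying on specific columns:

```sql
-- Inspect the most recent open-interface error messages, newest first.
SELECT *
FROM   po_interface_errors
ORDER  BY creation_date DESC;
```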
-
OWB 10g R2 - Dates getting truncated in rejected records
Hi,
We have been using Oracle Warehouse Builder 10g R2 in a data warehouse built on a Oracle Database Server of the same version. We utilise OWB run-time repository to extract data rejects that might occur in OWB mappings and re-process them, if necessary. In particular, we used to query to RAB_RT_EXEC_ERROR_COLUMNS view in order to retrieve data rejects.
As I have noticed, OWB stores rejected data in its run-time repository having all values transformed to VARCHAR2(2000) data type. The problem I am trying to resolve is the fact that when OWB transfers rejected DATE values to the run-time repo, it truncates all dates so the time component is lost.
For instance, a source table contains a record with a DATE column value "26-AUG-2006 14:30:27". The source column is mapped in an OWB mapping to a target column of the same data type. When the mapping successfully processes the records, DATE values appears in the target table unchanged, with date and time components. When the mapping rejects the record, it appears in OWB run-time repository in VARCHAR2 data type, but without time component. When I query RAB_RT_EXEC_ERROR_COLUMNS, I get "26-AUG-2006" value.
This is a real problem for us, as rejected data being re-processed can cause unique constraint violations and does not match exactly the source data.
Any ideas on how to resolve the situation are highly appreciated.

Yes, we capture errors in row-based mode. We considered using DML error tables, but decided against them so that we could capture and re-process data rejects occurring at any stage in a mapping, not only when the data is being loaded into a destination table.
BTW, today I tried to set the NLS date format manually:
EXECUTE IMMEDIATE 'ALTER session SET nls_date_format = ''DD/MM/YYYY HH24:MI:SS''';
immediately before running a mapping through the WB_RT_API_EXEC package (the OWB API), but the result was the same: rejected dates got truncated.