Rejected records when loading
Hi!
I'm using AWM to build a cube. When loading the measures from the SALES_FACT view in Oracle's Global demo, the log shows:
"13:05:12 Finished Load of Measures: UNITS from Cube UNITS_CUBE.CUBE. Processed 0 Records. Rejected 222589 Records."
I'm sure the AWM user has the privilege to select from the SALES_FACT view. So why were the records rejected? I need help. Thanks!
Best regards
Hi there,
The only situation where records get rejected is when the relevant dimension values are not present and the fact record cannot be joined on one (or more) of its foreign keys.
Have you maintained all of your dimensions first?
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html
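Stuart's explanation can be checked mechanically: rejected fact rows are exactly those whose foreign keys find no matching dimension member. A toy anti-join sketch (table and column names are invented for illustration, not the Global schema):

```python
import sqlite3

# Miniature fact/dimension pair; names are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE time_dim (month_id TEXT PRIMARY KEY);
    CREATE TABLE sales_fact (month_id TEXT, units INTEGER);
    INSERT INTO time_dim VALUES ('2024-01'), ('2024-02');
    INSERT INTO sales_fact VALUES ('2024-01', 10), ('2024-03', 5);
""")
# Fact keys with no matching dimension member: these rows would be rejected.
orphans = con.execute("""
    SELECT DISTINCT month_id FROM sales_fact
    EXCEPT
    SELECT month_id FROM time_dim
""").fetchall()
print(orphans)  # [('2024-03',)]
```

If a query of this shape returns rows for any dimension of the cube, maintain that dimension first and then reload the measures.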
Similar Messages
-
How to only update existing records when loading master data ?
Hello experts, I need your lights one more time.
Here is my need :
I have created an infoobject (IO) which is a very simple version of 0material, let's call it Znewmat --> Znewmat has material type and trademark as attributes, those two fields are available in 2 different datasources :
- 0MATERIAL_ATTR for material type (field MTART)
- 0MAT_SALES_ATTR for trademark (field MVGR2)
When loading my new IO from 0MATERIAL_ATTR I use a filter (at DTP level) to get only a few material types (I get something like 1,000 records).
Here is my issue: when I load from 0MAT_SALES_ATTR, the field "material type" is not available to apply the same filter as for 0MATERIAL_ATTR. Existing records are updated with the trademark, but I also get 5,000 records I don't need, and my master data is "polluted" with useless lines.
And my question: is there a way, while performing the second load, to ONLY UPDATE EXISTING RECORDS AND NOT ADD ANY NEW RECORDS? (I didn't find anything in the main options of my DTP.)
(I'd like to avoid the solution to update the 0MAT_SALES_ATTR datasource to add the missing field)
Thanks in advance for any help, points will be distributed.
Guillaume P.
Still no idea?
In the start routine of the transformation from 0MAT_SALES_ATTR to ZNEWMAT, do the following:
select material from /bic/pznewmat into table i_mat
  for all entries in source_package
  where material eq source_package-material.
loop at source_package.
  read table i_mat with key material = source_package-material
    transporting no fields.
  if sy-subrc ne 0.
    " no match in ZNEWMAT: drop the record so it is not newly created
    delete source_package.
  endif.
endloop.
This way you'll only update records that were previously loaded from the 0MATERIAL_ATTR DataSource.
Loading sequence: first load ZNEWMAT from 0MATERIAL_ATTR, then activate ZNEWMAT, then load 0MAT_SALES_ATTR to ZNEWMAT.
M. -
Rejected Records when building my cube
The SQL statement of the allocation works fine and gives the right result:
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T1_PUR_DATEORDERED ALIAS_1,
T1_PUR_CTRORIGIN_UID ALIAS_2,
SUM(T1_PUR_AMTFREIGHTBC) ALIAS_3
FROM
(SELECT /*+ no_rewrite */
T1."PUR_DATEORDERED" T1_PUR_DATEORDERED,
T1."PUR_CTRORIGIN_UID" T1_PUR_CTRORIGIN_UID,
T1."PUR_AMTFREIGHTBC" T1_PUR_AMTFREIGHTBC
FROM
CHDDL."ITS$QSI$BIVIEWPURCHASE" T1 )
T1
GROUP BY
(T1_PUR_CTRORIGIN_UID, T1_PUR_DATEORDERED)
ORDER BY
T1_PUR_CTRORIGIN_UID ASC NULLS LAST ,
T1_PUR_DATEORDERED ASC NULLS LAST
However, when I build it I always get 35570 records rejected as "invalid member of the target dimension".
In AWM 11.2 I see very strange characters listed; this is not the case with SQL*Plus. When I select from the rejected records I also get strange characters.
A DESCRIBE of my cube shows VARCHAR2(19), VARCHAR2(16) and NUMBER.
The direct query under SQL*Plus shows:
ALIAS_1 ALIAS_2 ALIAS_3
2008-12-03 00:00:00 zzz0000000000n4 0
2009-05-09 00:00:00 zzz0000000000n4 0
2009-05-11 00:00:00 zzz0000000000n4 0
2009-05-12 00:00:00 zzz0000000000n4 0
2011-05-25 00:00:00 zzz0000000000n4 0
2011-09-05 00:00:00 zzz0000000000n4 0
6 rows selected.
In the rejected records it shows:
46736 9584 35570
(XSRWLD06) A column value is not a valid member of its target dimension.
1
┐┐┐┐┐ë┐┐┐┐┐┐┐┐┐┐┐┐┐
(the same rejection message repeats for records 2 through 6)
Dimension building works fine.
Any help would be appreciated.
Christof
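One plausible reading of the strange characters above (an assumption, not something the post confirms): the data is in a West European (Latin-1 style) character set, while the client console renders bytes under the CP850 DOS code page common on German Windows consoles. A small sketch reproduces the exact garbling seen in the log:

```python
# "gültiges"/"ausgewählt" encoded as Latin-1 but displayed as CP850 give the
# exact strings appearing in the log above.
print("gültiges".encode("latin-1").decode("cp850"))    # g³ltiges
print("ausgewählt".encode("latin-1").decode("cp850"))  # ausgewõhlt

# Byte 0xBF is '¿' in Latin-1 (a typical replacement character for values the
# database character set cannot represent) but renders as '┐' under CP850,
# matching the runs of ┐ in the rejected-records dump.
print(b"\xbf".decode("cp850"))  # ┐
```

Comparing the client's NLS_LANG setting against the database character set would be a reasonable first diagnostic step.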
Edited by: 883727 on Oct 3, 2011 9:14 PM
Hi there,
The only situation where records get rejected is when the relevant dimension values are not present and the fact record cannot be joined on one (or more) of its foreign keys.
Have you maintained all of your dimensions first?
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Incorrect records when loading transaction data
Hi experts:
I just loaded transactional data. The data first came from a CSV file to the PSA, then from the PSA to a DSO, and last from the DSO to the cube.
The data in the PSA looks correct, but after it got loaded to the DSO I found many Item Numbers that are not in the PSA. I deleted all the old requests in the PSA and only have one request now. I don't know where those old Item Numbers come from, but they get loaded every time. Does anyone know?
Thanks!
In that case you must have some update routine that is increasing the number of records in the DSO. Can you please paste here the number of records transferred and added (from DSO Manage) for the data load request you are talking about? Also let me know what you mean by "I found many Item Numbers that are not in the PSA".
Regards
Pradip -
Post mapping for Reject records!!
hi
I would like to capture all the rejected records when I am loading the fact table. I know I can get this information from WB_RT_ERRORS and WB_RT_ERROR_SOURCES. But what I would like to do is have a post-mapping process, with the action set to "on error", that captures all the rejected records into a flat file. Is there a way to identify rejects in the post-mapping process other than referring to WB_RT_ERRORS and WB_RT_ERROR_SOURCES at runtime? This is what the client is requesting, so any help would be greatly appreciated.
Please mark me directly also since I do not get emails sometimes through the distribution list.
Thanks in Anticipation,
Balaji
Reposting the response from M. Van Der Wiel:
2 comments:
- Ideally, you would explicitly capture the errors, and insert those into a separate table. This would enable you to run the mapping in set-based mode (traditionally this means: no error logging) for optimal performance, and you still get the errors. This does mean you would have
to explicitly design what may go wrong, so you should know what your data looks like. Your flat file could then be created out of the explicit error records, which is probably a bit easier (and faster) than to go from the WB_RT_ERRORS.
- The mapping errors out once the maximum number of errors is reached (as passed at runtime; is defaulted by the configuration setting). Anything in between 0 and the maximum number of errors will result in a status warning.
To do what you want to do, you could indeed use the post mapping process, but perhaps you want to design a separate mapping to write the errors to a file, and use a process flow (with conditional sequencing) to run the second mapping (only if the first one fails or results in a warning). This may be a nicer solution than to write the code manually and implement it as a standalone post-mapping process.
Finally, notice that WB_RT_ERRORS and the like are not tables you should query directly (they will change in the future). Rather, you should use the ALL_RT_<something> views to access the same information. Going forward, if the customer migrates to a later release, it is then more likely that their customizations will still work.
Thanks,
Mark.
PS.
Another possibility - if the errors violate a key constraint - would be to configure the mapping target with the constraints parameter set to false and redirect the error records to an error table (this can be done in the mapping configuration - sources and targets section). This configuration disables the constraints during the load and re-enables them after the load, putting the offending records in the error table. You can then download the records from the error table into a flat file in a separate mapping or in a post-mapping process.
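The "explicitly capture the errors" approach Mark describes - validate up front, load only the clean set, write rejects to a flat file - can be sketched outside OWB roughly like this (field names and the validation rule are made up for illustration):

```python
import csv
import io

def split_rows(rows, reject_file):
    """Return the loadable rows; write rejects (with a reason) as CSV."""
    clean = []
    writer = csv.writer(reject_file)
    for row in rows:
        try:
            amount = float(row["amount"])  # the validation we expect to fail
            clean.append((row["id"], amount))
        except (ValueError, KeyError) as exc:
            writer.writerow([row.get("id", ""), row.get("amount", ""), str(exc)])
    return clean

rows = [{"id": "1", "amount": "10.5"}, {"id": "2", "amount": "oops"}]
rejects = io.StringIO()
print(split_rows(rows, rejects))  # [('1', 10.5)]
```

The point of the design is that the good rows stay eligible for a fast set-based insert, while the reject file is produced from records you classified yourself rather than from the runtime audit tables.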
Regards:
Igor -
Rejected record message from Essbase Studio load (3355)
I receive the following error when deploying an outline via Essbase Studio to an ASO cube (ver 11.1.2):
Record #36798 - Member XXX does not exist. Reference member column types require the field to be an existing member (3355)
As I read the error (BTW, can someone help me find the table of rejected record codes?), it is rejecting the assignment of an attribute to a member because the member does not exist.
What could be the reason that I did not receive an "Error adding member XXX" rejection beforehand, or something else indicating that there was an error adding the member originally? I am puzzled because I cannot find anything special about the source data compared to the many others that loaded successfully.
Hi All,
Regarding this error: if we are talking about Hyperion Essbase version 11.1.2.1.000 and later, this issue has been verified as unpublished Bug 12967639.
As documented in the KM note:
Unable to Save ASO Outline After Renaming Members, "Error(1007072) Member [xxx] tagged as <REFER does not have a member to refer to" [ID 1465850.1]
There is also another document covering ASO in general:
Oracle Hyperion Essbase and Aggregate Storage (ASO) [ID 1291202.1]
Thanks,
Shaker -
Dimension Build Load Rule Reject Record
Hi,
I'm trying to create a load rule that builds the Accounts dimension from a flat file. I would like to exclude any accounts that start with "S" from being added to the Accounts dimension. I use the parent/child build method for this load rule. I'm using the Reject Record setting with the condition that any string containing "S" will be rejected, but for some reason when I load, it doesn't do anything; it just makes EAS freeze up. My flat file looks like this:
New Accounts,S50000,Accounts - S50000
New Accounts,60000,Accounts - 60000
New Accounts,S45000,Accounts - S45000
Hi,
Your posting does not tell me what you entered in the reject record setting to try to achieve the desired result. If you entered "S" and used the "Contains" setting it should have worked providing you know your data and don't have occurrences of this character possible other than in the first ordinal position. If you tried to use a wildcard asterisk callout, Essbase load rule select/reject logic does not take wildcard inputs (like "S*").
Instead, a tried and true trick of Essbase load rules is to make a copy of the field in question, split it up and take background select/reject action on just the first character of a reference copy of the original field. This is done by using the field "create on join" method and simply joining the field to itself (an unintuitive but effective way to make a copy of a field for use in the load rule settings). Having done this, now split the copy of the field after one character. The remaining field section can be set to ignore for dim builds (or data loads as you might have to do later to match your dim build results in data loads). Now you have a single character field for which you can set the reject setting of contains or equal to "S" and then... please get this part... ignore the field for dim builds in field properties. Essbase will not try to use this partial field to build anything since it is ignored, but the assigned reject logic on occurrences of the isolated single character field will still be interpreted.
This is old school Essbase but it still works. I am not here to say it is intuitive or that it represents the ultimate functionality in an ETL interface. Many with SQL skills would take care of this pre-load-rule in SQL string management. But this method is not that hard to apply and is effective if all you want to do is use load rule settings. Let me know if it helps.
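For comparison, the intent of the reject rule is simply "drop build records whose child member starts with S". A sketch of that test over the sample flat file above:

```python
# Records mirror the flat file: (parent, child, alias).
records = [
    ("New Accounts", "S50000", "Accounts - S50000"),
    ("New Accounts", "60000", "Accounts - 60000"),
    ("New Accounts", "S45000", "Accounts - S45000"),
]
# Keep only children whose first character is not "S" -- the same test the
# load rule applies to the one-character reference copy of the field.
kept = [r for r in records if not r[1].startswith("S")]
print(kept)  # [('New Accounts', '60000', 'Accounts - 60000')]
```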
Darrell -
How to avoid 'duplicate data record' error message when loading master data
Dear Experts
We have a custom extractor on table CSKS called ZCOSTCENTER_ATTR. The settings of this datasource are the same as the settings of 0COSTCENTER_ATTR. The problem is that when loading to BW it seems that validity (DATEFROM and DATETO) is not taken into account. If there is a cost center with several entries having different validity, I get this duplicate data record error. There is no error when loading 0COSTCENTER_ATTR.
Enhancing 0COSTCENTER_ATTR to have one datasource instead of two is not an option.
I know that you can set ignore duplicates in the infopackage, but that is not a nice solution. 0COSTCENTER_ATTR can run without this!
Is there a trick you know to tell the system that the date fields are also part of the key?
Thank you for your help
Peter
Alessandro - ZCOSTCENTER_ATTR is loading 0COSTCENTER, just like 0COSTCENTER_ATTR.
Siggi - I don't have the error message described in the note.
"There are duplicates of the data record 2 & with the key 'NO010000122077 &' for characteristic 0COSTCENTER &."
In PSA the records are marked red with the same message (MSG no 191).
As you see the key does not contain the date when the record is valid. How do I add it? How is it working for 0COSTCENTER_ATTR with the same records? Is it done on the R/3 or on the BW side?
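The likely difference (an assumption based on the error text) is whether the validity interval is part of the record key. A sketch with invented values shows why the same two rows are "duplicates" under one key and distinct under the other:

```python
# Two validity periods for the same cost center (attribute values are made up).
rows = [
    ("NO010000122077", "2020-01-01", "2020-12-31", "attr_v1"),
    ("NO010000122077", "2021-01-01", "9999-12-31", "attr_v2"),
]
keys_without_dates = {r[0] for r in rows}             # rows collapse: duplicate
keys_with_dates = {(r[0], r[1], r[2]) for r in rows}  # rows stay distinct
print(len(keys_without_dates), len(keys_with_dates))  # 1 2
```

If the extractor or DataSource does not declare DATEFROM/DATETO as key-relevant, BW sees the first case and raises the duplicate-record error.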
Thanks
Peter -
Is it possible to tag a PDF with 'reject', so that when it loads into Bridge it will be pre-rejected and not shown (so long as 'show reject files' is unchecked in Bridge)? - Please see screen shot.
We want to have all PDFs initially set this way, so they don't show in Bridge until the 'reject' tag is removed.
Any help would be great! Thanks.
Well, not possible directly, because the Acrobat interface doesn't allow changing Bridge's ratings and labels. But as Omke Oudeman showed over in the Bridge Scripting forum, they are stored in the XMP metadata, where "Reject" seems to be an xmp:Rating of -1. Thus you may be able to add this bit of metadata within Acrobat Pro by using the Append... [XMP template] command -- assuming you can come up with an XMP template, and can find the command buried in the interface (in Acrobat 9 it is behind the Additional Metadata button on the Properties/Description tab, and from there under Advanced). Another route might be via JavaScript. Either way, before investing much time you should consider that most software won't see labels and ratings you add to the XMP metadata, and, as Omke mentions in that thread, their rendering seems somewhat fragile even in Bridge.
Another route might be adapting a metadata category already available in Acrobat's interface: Keywords, say, "Approved" and "NotApproved". (If you can't touch the Keywords, maybe you could re-purpose another of Acrobat's categories; and of course my suggestions are not related to Acrobat's stock stamps "Approved" and "Not Approved," a different kettle of fish.) Keywords might suffice for you to filter views in Bridge, but if you really want Bridge to show files with "Reject" then they must carry that tag. If neither an XMP template nor an Acrobat JavaScript serves, I imagine a Bridge script could apply "Reject" to files with the keyword NotApproved. Of course, for any PDFs whose status changes you'd need to adjust both the Reject (xmp:Rating -1) and the keyword; perhaps another script could un-Reject and change the keyword to Approved in one step. -
11g Cube not showing any data with no Rejected records
Hi David ,
Strangely, one of my 11g cubes is not showing data as of today; the build shows all records rejected. However, when I look in the rejected records table I don't find any records, so I am not sure what is happening. When I take the cube's queries from CUBE_BUILD_LOG and run them against the database in the AWM schema, the records are returned perfectly fine. I wonder whether the same query fires during the cube load but returns no data? My cube build script has only LOAD and AGGREGATE.
After maintaining my dimensions the data looks fine, but no data is populated after cube maintenance. MVs are switched off across all dimensions and cubes.
I navigated to CUBE_OPERATION_LOG but am not able to make sense of its content.
Any advice ?
Thanks and Regards,
DxP
Hi David,
To be frank, today is a very bad day ... please see my observations below.
I executed the queries below to make sure that no key value present in the fact is missing from a dimension. All of the queries below return no rows.
select distinct owner_postn_wid from w_synm_rx_t_f
minus
select distinct row_wid from postn_dh
select distinct payer_type_Wid from w_synm_rx_t_f
minus
select distinct row_wid from wc_ins_plan_dh
select distinct market_wid from w_synm_rx_t_f
minus
select distinct row_wid from w_product_dh
select distinct period_day_wid from w_synm_rx_t_f
minus
select distinct row_wid from w_daytime_D
select distinct contact_wid from w_synm_rx_t_f
intersect
select distinct row_wid from w_person_d
select distinct X_TERR_TYPE_WID from w_synm_rx_t_f
minus
select distinct row_wid from W_LOV_D
============================
Each query below returns a count of 0 rows, confirming no NULLs are present:
select count(1) from w_synm_rx_t_f where contact_wid is null;
select count(1) from w_synm_rx_t_f where owner_postn_wid is null;
select count(1) from w_synm_rx_t_f where payer_type_Wid is null;
select count(1) from w_synm_rx_t_f where period_day_wid is null;
select count(1) from w_synm_rx_t_f where X_TERR_TYPE_WID is null;
select count(1) from w_synm_rx_t_f where market_wid is null;
+++++++++++++++++++++++++++++++++
Cube Build Log has below entry:
796 0 STARTED CLEAR VALUES MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.267000000 PM +05:30 JAVA 1 C 47141 67 0 1
796 0 COMPLETED CLEAR VALUES MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.267000000 PM +05:30 JAVA 1 C 47141 67 0 2
796 0 STARTED LOAD MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.283000000 PM +05:30 JAVA 1 C 47142 68 0 1
796 0 SQL LOAD MKT_SLS_CUBE CUBE "<SQL>
<![CDATA[
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T16_ROW_WID ALIAS_127,
T13_ROW_WID ALIAS_128,
T10_ROW_WID ALIAS_129,
T7_ROW_WID ALIAS_130,
T4_ROW_WID ALIAS_131,
T1_ROW_WID ALIAS_132,
SUM(T20_MKT_TRX) ALIAS_133,
SUM(T20_MKT_NRX) ALIAS_134
FROM
(SELECT /*+ no_rewrite */
T1."CONTACT_WID" T20_CONTACT_WID,
T1."MARKET_WID" T20_MARKET_WID,
T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
T1."MKT_NRX" T20_MKT_NRX,
T1."MKT_TRX" T20_MKT_TRX,
T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
FROM
NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
T20,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T16_ROW_WID
FROM
NN_OLAP_POC."W_DAYTIME_D" T1 )
T16,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T13_ROW_WID
FROM
NN_OLAP_POC."W_PERSON_D" T1 )
T13,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T10_ROW_WID
FROM
NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
T10,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T7_ROW_WID
FROM
NN_OLAP_POC."W_LOV_D" T1 )
T7,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T4_ROW_WID
FROM
NN_OLAP_POC."POSTN_DH" T1 )
T4,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T1_ROW_WID
FROM
NN_OLAP_POC."W_PRODUCT_DH" T1 )
T1
WHERE
((T20_PERIOD_DAY_WID = T16_ROW_WID)
AND (T16_ROW_WID = 20100101)
AND (T20_CONTACT_WID = T13_ROW_WID)
AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
AND (T20_MARKET_WID = T1_ROW_WID)
AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
(20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
GROUP BY
(T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
ORDER BY
T1_ROW_WID ASC NULLS LAST ,
T4_ROW_WID ASC NULLS LAST ,
T7_ROW_WID ASC NULLS LAST ,
T10_ROW_WID ASC NULLS LAST ,
T13_ROW_WID ASC NULLS LAST ,
T16_ROW_WID ASC NULLS LAST ]]>
</SQL>" NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.08.627000000 PM +05:30 JAVA 1 MAP1 C 47142 68 0 2
796 0 COMPLETED LOAD MKT_SLS_CUBE CUBE "<CubeLoad
LOADED="0"
REJECTED="4148617"/>" NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.486000000 PM +05:30 JAVA 1 C 47142 68 0 3
796 0 STARTED UPDATE MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.501000000 PM +05:30 JAVA 1 C 47143 69 0 1
796 0 COMPLETED UPDATE MKT_SLS_CUBE CUBE NNOLAP NN_OLAP_POC P14:JAN2010 17-AUG-11 07.12.40.548000000 PM +05:30 JAVA 1 C 47143 69 0 2
+++++++++++++++++
You can observe the clear rejection of 4 million rows ... I ran the above query and it returns my data successfully.
Looking at the rejected records, I took a sample record and put its values into the above query as filters; it returns the data fine, with my measures and dimension WIDs (PLEASE SEE BELOW THE FILTERS ON ROW_WID):
=========================
SELECT /*+ bypass_recursive_check cursor_sharing_exact no_expand no_rewrite */
T16_ROW_WID ALIAS_127,
T13_ROW_WID ALIAS_128,
T10_ROW_WID ALIAS_129,
T7_ROW_WID ALIAS_130,
T4_ROW_WID ALIAS_131,
T1_ROW_WID ALIAS_132,
SUM(T20_MKT_TRX) ALIAS_133,
SUM(T20_MKT_NRX) ALIAS_134
FROM
(SELECT /*+ no_rewrite */
T1."CONTACT_WID" T20_CONTACT_WID,
T1."MARKET_WID" T20_MARKET_WID,
T1."OWNER_POSTN_WID" T20_OWNER_POSTN_WID,
T1."PAYER_TYPE_WID" T20_PAYER_TYPE_WID,
T1."PERIOD_DAY_WID" T20_PERIOD_DAY_WID,
T1."MKT_NRX" T20_MKT_NRX,
T1."MKT_TRX" T20_MKT_TRX,
T1."X_TERR_TYPE_WID" T20_X_TERR_TYPE_WID
FROM
NN_OLAP_POC."W_SYNM_RX_T_F" T1 )
T20,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T16_ROW_WID
FROM
NN_OLAP_POC."W_DAYTIME_D" T1 )
T16,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T13_ROW_WID
FROM
NN_OLAP_POC."W_PERSON_D" T1 )
T13,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T10_ROW_WID
FROM
NN_OLAP_POC."WC_INS_PLAN_DH" T1 )
T10,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T7_ROW_WID
FROM
NN_OLAP_POC."W_LOV_D" T1 )
T7,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T4_ROW_WID
FROM
NN_OLAP_POC."POSTN_DH" T1 )
T4,
(SELECT /*+ no_rewrite */
T1."ROW_WID" T1_ROW_WID
FROM
NN_OLAP_POC."W_PRODUCT_DH" T1 )
T1
WHERE
((T20_PERIOD_DAY_WID = T16_ROW_WID)
AND (T16_ROW_WID = 20100101)
AND (T20_CONTACT_WID = T13_ROW_WID)
AND (T20_PAYER_TYPE_WID = T10_ROW_WID)
AND (T7_ROW_WID = T20_X_TERR_TYPE_WID)
AND (T20_OWNER_POSTN_WID = T4_ROW_WID)
AND (T20_MARKET_WID = T1_ROW_WID)
AND T13_ROW_WID = 255811
AND T7_ROW_WID = 122
AND T4_ROW_WID =3
AND T1_ROW_WID=230
AND T10_ROW_WID = 26
AND ((T20_PERIOD_DAY_WID) IN ((20100107.000000) , (20100106.000000) , (20100128.000000) , (20100124.000000) , (20100121.000000) , (20100118.000000) , (20100115.000000) , (20100109.000000) , (20100125.000000) , (20100114.000000) , (20100111.000000) , (20100110.000000) , (20100104.000000) , (20100101.000000) , (20100129.000000) , (20100123.000000) , (20100117.000000) , (20100113.000000) , (20100108.000000) , (20100131.000000) , (20100120.000000) , (20100116.000000) , (20100119.000000) , (20100105.000000) , (20100102.000000) ,
(20100130.000000) , (20100127.000000) , (20100122.000000) , (20100112.000000) , (20100103.000000) , (20100126.000000) ) ) )
GROUP BY
(T1_ROW_WID, T4_ROW_WID, T7_ROW_WID, T10_ROW_WID, T13_ROW_WID, T16_ROW_WID)
ORDER BY
T1_ROW_WID ASC NULLS LAST ,
T4_ROW_WID ASC NULLS LAST ,
T7_ROW_WID ASC NULLS LAST ,
T10_ROW_WID ASC NULLS LAST ,
T13_ROW_WID ASC NULLS LAST ,
T16_ROW_WID ASC NULLS LAST
=================================
THE XML export of CUBE as below:
<!DOCTYPE Metadata [
<!ENTITY % BIND_VALUES PUBLIC "OLAP BIND VALUES" "OLAP METADATA">
%BIND_VALUES;
]>
<Metadata
Version="1.2"
MinimumDatabaseVersion="11.2.0.1">
<Cube
ETViewName="MKT_SLS_CUBE_VIEW"
Name="MKT_SLS_CUBE">
<Measure>
<BaseMeasure
SQLDataType="NUMBER"
ETMeasureColumnName="TRX"
Name="TRX">
<Description
Type="LongDescription"
Language="AMERICAN"
Value="TRX">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="TRX">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="TRX">
</Description>
</BaseMeasure>
</Measure>
<Measure>
<BaseMeasure
SQLDataType="NUMBER"
ETMeasureColumnName="NRX"
Name="NRX">
<Description
Type="LongDescription"
Language="AMERICAN"
Value="NRX">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="NRX">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="NRX">
</Description>
</BaseMeasure>
</Measure>
<CubeMap
Name="MAP1"
IsSolved="False"
Query="W_SYNM_RX_T_F"
WhereClause="W_DAYTIME_D.ROW_WID = 20100101">
<MeasureMap
Name="TRX"
Measure="TRX"
Expression="W_SYNM_RX_T_F.MKT_TRX">
</MeasureMap>
<MeasureMap
Name="NRX"
Measure="NRX"
Expression="W_SYNM_RX_T_F.MKT_NRX">
</MeasureMap>
<CubeDimensionalityMap
Name="TIME"
Dimensionality="TIME"
MappedDimension="TIME.CALENDER.MONTHLY"
JoinCondition="W_SYNM_RX_T_F.PERIOD_DAY_WID = W_DAYTIME_D.ROW_WID"
Expression="W_DAYTIME_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="CUSTOMER"
Dimensionality="CUSTOMER"
MappedDimension="CUSTOMER.CUSTOMER_HIERARCHY.DETAIL_LEVEL"
JoinCondition="W_SYNM_RX_T_F.CONTACT_WID = W_PERSON_D.ROW_WID"
Expression="W_PERSON_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="INS_PLAN_DH"
Dimensionality="INS_PLAN_DH"
MappedDimension="INS_PLAN_DH.INS_PLAN.DETAIL"
JoinCondition="W_SYNM_RX_T_F.PAYER_TYPE_WID = WC_INS_PLAN_DH.ROW_WID"
Expression="WC_INS_PLAN_DH.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="LIST_OF_VALUES"
Dimensionality="LIST_OF_VALUES"
MappedDimension="LIST_OF_VALUES.LOV_HIERARCHY.DETAIL_LEVEL"
JoinCondition="W_LOV_D.ROW_WID = W_SYNM_RX_T_F.X_TERR_TYPE_WID"
Expression="W_LOV_D.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="POSITIONDH"
Dimensionality="POSITIONDH"
MappedDimension="POSITIONDH.POST_HIER.DETAIL"
JoinCondition="W_SYNM_RX_T_F.OWNER_POSTN_WID = POSTN_DH.ROW_WID"
Expression="POSTN_DH.ROW_WID">
</CubeDimensionalityMap>
<CubeDimensionalityMap
Name="PRODH"
Dimensionality="PRODH"
MappedDimension="PRODH.PRODHIER.DETAILLVL"
JoinCondition="W_SYNM_RX_T_F.MARKET_WID = W_PRODUCT_DH.ROW_WID"
Expression="W_PRODUCT_DH.ROW_WID">
</CubeDimensionalityMap>
</CubeMap>
<Organization>
<AWCubeOrganization
MVOption="NONE"
SparseType="COMPRESSED"
MeasureStorage="SHARED"
NullStorage="MV_READY"
CubeStorageType="NUMBER"
PrecomputePercent="35"
PrecomputePercentTop="0"
PartitionLevel="TIME.CALENDER.MONTHLY"
AW="&AW_NAME;">
<SparseDimension
Name="TIME"/>
<SparseDimension
Name="CUSTOMER"/>
<SparseDimension
Name="INS_PLAN_DH"/>
<SparseDimension
Name="LIST_OF_VALUES"/>
<SparseDimension
Name="POSITIONDH"/>
<SparseDimension
Name="PRODH"/>
<DefaultBuild>
<![CDATA[BUILD SPEC LOAD_AND_AGGREGATE (
LOAD NO SYNCH,
SOLVE
)]]>
</DefaultBuild>
</AWCubeOrganization>
</Organization>
<Dimensionality
Name="TIME"
ETKeyColumnName="TIME"
Dimension="TIME">
</Dimensionality>
<Dimensionality
Name="CUSTOMER"
ETKeyColumnName="CUSTOMER"
Dimension="CUSTOMER">
</Dimensionality>
<Dimensionality
Name="INS_PLAN_DH"
ETKeyColumnName="INS_PLAN_DH"
Dimension="INS_PLAN_DH">
</Dimensionality>
<Dimensionality
Name="LIST_OF_VALUES"
ETKeyColumnName="LIST_OF_VALUES"
Dimension="LIST_OF_VALUES">
</Dimensionality>
<Dimensionality
Name="POSITIONDH"
ETKeyColumnName="POSITIONDH"
Dimension="POSITIONDH">
</Dimensionality>
<Dimensionality
Name="PRODH"
ETKeyColumnName="PRODH"
Dimension="PRODH">
</Dimensionality>
<Description
Type="LongDescription"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<Description
Type="ShortDescription"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<Description
Type="Description"
Language="AMERICAN"
Value="MKT SLS CUBE">
</Description>
<ConsistentSolve>
<![CDATA[SOLVE (
SUM
MAINTAIN COUNT
OVER ALL
)]]>
</ConsistentSolve>
</Cube>
</Metadata>
+++++++++++++++++++++++
I dropped the AW, created a new one from the exported XML, maintained all dimensions, and then rebuilt. I still have the issue. :(
Anything you can highlight from the above?
Thanks,
DxP
Also, I suspect it may be an issue related to the error below, which is raised when I click on my Position_Hier view from AWM; selecting that view in SQL Developer also throws the error after displaying the first couple of rows (while paging down):
java.sql.SQLException: ORA-33674: Data block size 63 exceeds the maximum size of 60 bytes.
at oracle.olap.awm.util.jdbc.SQLWrapper.execute(Unknown Source)
at oracle.olap.awm.querydialog.PagedQueryDialog$1.construct(Unknown Source)
at oracle.olap.awm.ui.SwingWorker$2.run(Unknown Source)
at java.lang.Thread.run(Thread.java:595)
Edited by: e_**** on Aug 17, 2011 8:41 PM -
Want to use an Oracle sequence object when loading data with SQL*Loader
Hi,
I want to use a sequence when loading data with SQL*Loader, but the problem is that I could not use an Oracle sequence object to load the data with SQL*Loader; I can only use SQL*Loader's own SEQUENCE keyword.
I want to use the sequence object because later entries will use it. If I use SQL*Loader's sequence, how can I use the Oracle sequence object instead? Is there any other option?
I have a similar problem: I also want to use a sequence when loading data with SQL*Loader.
My control file is:
load data
infile '0testdata.txt'
into table robertl.tbltest
fields terminated by X'09'
trailing nullcols
(redbrojunos,
broj,
dolazak,
odlazak nullif odlazak=blanks,
komentar nullif komentar=blanks)
And the datafile is:
robertl.brojilo.nextval 1368 17.06.2003 08:02:46 17.06.2003 16:17:18
robertl.brojilo.nextval 2363 17.06.2003 08:18:18 17.06.2003 16:21:52
robertl.brojilo.nextval 7821 17.06.2003 08:29:22 17.06.2003 16:21:59
robertl.brojilo.nextval 0408 17.06.2003 11:20:27 17.06.2003 18:33:00 ispit
robertl.brojilo.nextval 1111 17.06.2003 11:30:58 17.06.2003 16:09:34 Odlazak na ispit
robertl.brojilo.nextval 6129 17.06.2003 14:02:42 17.06.2003 16:23:23 seminar
But all records were rejected by the Loader; for every record I get the error:
Record 1: Rejected - Error on table ROBERTL.TBLTEST, column REDBROJUNOS.
ORA-01722: invalid number -
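For what it's worth, SQL*Loader can apply a SQL expression to a column, which allows referencing an Oracle sequence directly in the control file rather than embedding the literal text in the data file. A sketch based on the control file above (untested; assumes the sequence `robertl.brojilo` exists as in the original post):

```
LOAD DATA
INFILE '0testdata.txt'
INTO TABLE robertl.tbltest
FIELDS TERMINATED BY X'09'
TRAILING NULLCOLS
(redbrojunos "robertl.brojilo.nextval",
 broj,
 dolazak,
 odlazak NULLIF odlazak=BLANKS,
 komentar NULLIF komentar=BLANKS)
```

The double-quoted string is a SQL expression evaluated for each row at insert time, so the literal "robertl.brojilo.nextval" tokens in the data file no longer need to be parsed as numbers, which is what produced the ORA-01722 above.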
Max # of rejected records in the UJFS file?
I am loading transactional data via the LoadInfoProvider standard DM package.
According to the package log, it tries to load 4,910,004 records. It rejects 780,639 of them.
When I download the rejected records file from UJFS, it only has 100,001 records. I've tried this a couple of times with the same result: always a cap of 100,001 rejected records in the file.
Does anyone know if there's a parameter I can modify to get all rejected records in the log, or am I hitting a system limit?
Thanks.
-Amy
Hi All,
I am trying to find a solution, referring to some of the old threads. While doing transactional data loads, is there any way to make BPC accept all the records without validating for duplicate records?
We have tried setting Validate Records to 'No' in the transformation file and Max Reject Count to 0. Still the duplicate records are getting rejected.
As we are doing data loads from a transactional system flat file, there are a lot of records where all the fields are identical, including the amount; i.e. a similar sale for the same quantity and product happened during the same period. How do we address this?
Shouldn't the cube be aggregating similar records? Your thoughts please.
Thanks,
Santosh -
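One workaround worth noting for Santosh's case (a sketch only; BPC treats fully identical rows as duplicates rather than summing them, and the field names below are invented for illustration) is to pre-aggregate the flat file before loading, summing the amount over rows whose other fields are identical, so the load file contains no duplicates:

```python
from collections import defaultdict

def aggregate_rows(rows, amount_field="AMOUNT"):
    """Sum the amount over rows whose remaining fields are identical,
    so the resulting load file contains no duplicate records."""
    totals = defaultdict(float)
    for row in rows:
        # the key is every field except the amount, in a stable order
        key = tuple(sorted((k, v) for k, v in row.items() if k != amount_field))
        totals[key] += float(row[amount_field])
    return [dict(key, **{amount_field: amt}) for key, amt in totals.items()]

# Two identical sales of the same product in the same period collapse
# into one record with the summed amount.
rows = [
    {"PRODUCT": "P1", "PERIOD": "2011.01", "AMOUNT": "100"},
    {"PRODUCT": "P1", "PERIOD": "2011.01", "AMOUNT": "100"},
    {"PRODUCT": "P2", "PERIOD": "2011.01", "AMOUNT": "50"},
]
print(aggregate_rows(rows))
```

The same grouping could be done in the staging layer or with a spreadsheet pivot before the file ever reaches BPC.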
Resize the number of rejected records in sqlldr
Hi,
I am using sqlldr to load my data. However, the data is known to contain lots of errors, so when I load it, sqlldr stops after 50 rejected records. It won't load any more.
Question: is there a way to specify the number of rejected records in sqlldr?
For sure the file that I need to load has at most 200 bad records.
Thank you very much.
Set the ERRORS parameter to a high value.
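For example (a sketch; the file names and credentials are placeholders), on the sqlldr command line:

```shell
# allow up to 200 bad records before SQL*Loader gives up (the default is 50)
sqlldr userid=scott/tiger control=load.ctl log=load.log errors=200
```

or equivalently as an OPTIONS clause at the top of the control file: OPTIONS (ERRORS=200).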
-
Error 1 when loading flat file in BW 7.0
Hi,
The flat file structure is the same as the transfer structure. It's a csv file, and I also checked the delimiters and so on. The file is not open; it is closed while I am loading it. The same file loads fine if I try on another laptop with my id. If I use my colleague's id on my system, it doesn't work either, so the basic problem is with my laptop. I know it's not related to the type of data or the transfer structure. It's some setting on my laptop which got changed automatically. If I install other software like Mozilla Firefox or Yahoo Messenger, will that create a problem? I don't understand at all why this is happening. Please help. The error msgs I get when I try to load the flat file -
Error 1 when loading external data
Diagnosis
Error number 1 occurred when loading external data:
1. Error when reading the file (access rights, file name, ...)
2. File size or number of records does not correspond to the data in the control file
3. Error when generating the IDoc
4. File contains invalid data (errors with an arithmetic operation or data conversion)
Procedure
Check whether you have the required access rights and whether the data in the control file is correct (file names, record length, number of records, ...). Correct the data in the control file if necessary and check the data file for invalid data (values of the wrong type, values in the wrong format for conversion exit,...). Check whether the file has headers that have not been specified.
Error when opening the data file C:\vikki1.csv (origin C)
Message no.
Diagnosis
File C:\vikki1.csv (origin C) could not be opened.
Origin:
A : Application server
C : Client workstation
Procedure
Check whether the file entered exists and is not being used by other applications.
Hi! Vikki,
Error 1 means your flat file is open while you are uploading the data.
Your flat file should be closed while uploading data into BW;
that is why it is saying "Error when opening the file...".
First close that file and then upload; it will work.
The rest of the settings are OK.
I hope this helps.
Regards,
khyati. -
RE: REJECTED RECORDS IN XI OR BW
I found a bug in the process between XI and BW.
Some rejected records occur when a transaction has a Voided line item.
I think I've found what may be a large part of what's contributing to these rejected records. It looks like if a transaction has a voided line item (but the transaction itself isn't voided) the whole transaction could be rejected by XI/BW because they are looking at the other codes (such as discount reason code, tax codes, etc) but they are ignoring the fact that VOID_CD = 1 for those line items.
For example, there may be a Return transaction with 3 line items, two of them have return reason codes, but one does not because it has been voided.
How should this be handled in SAP? Does SAP care about the voided items in general? Should XI skip validation on those ones and "push them through"? Or is there a special area that voided items can be sent to?
where should i need the changes? On the mapping process in XI or in the BAPI ZPOSDW_BAPIINPUT_PRE in BW?
We don't want the transaction to be rejected, only the void item not to be processed. (Ours is an IDoc-to-file scenario.)
**Please help out in this issue**
Thanks in advance,
Amarnath Reddy
>>I found a bug in the process between XI and BW.
You can report this to SAP by opening an OSS message on the Service Marketplace :)
>>We don't want the transaction to be rejected..only the void item to not be processed
XI considers the whole payload as one single message; if any condition fails, it will drop the whole payload.
You need to split your payload into several payloads, each containing only 1 line item. In that case only the faulty ones will be rejected; the rest will all go through properly.
This can be done via modules on the sender side, but since you are using IDoc as the sender, you can't configure modules.
Try asking the same question in BW/BI forum,its possible they have some way to handle this in BW.
Thanks
Aamir
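Whichever side the fix lands on (the XI mapping, or the ZPOSDW_BAPIINPUT_PRE exit in BW, which would be ABAP), the filtering logic itself is simple. A Python sketch with made-up field names, just to illustrate dropping voided line items before validation rather than rejecting the whole transaction:

```python
def drop_voided_items(transaction):
    """Remove line items flagged as voided (VOID_CD == "1") so that
    only the remaining items go through validation; the transaction
    itself is kept."""
    kept = [item for item in transaction["items"] if item.get("VOID_CD") != "1"]
    return {**transaction, "items": kept}

# A Return transaction with 3 line items; line 2 is voided and has
# no return reason code, which would otherwise reject the whole payload.
txn = {
    "id": "4711",
    "items": [
        {"line": 1, "VOID_CD": "0", "return_reason": "R1"},
        {"line": 2, "VOID_CD": "1"},
        {"line": 3, "VOID_CD": "0", "return_reason": "R2"},
    ],
}
print(drop_voided_items(txn))
```

After the filter, only the two non-voided items remain, each with the reason code the downstream validation expects.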