How to Achieve this Data Load into Cube
Hi Experts,
Could you please advise on how to achieve this?
I have history data (25-35 million records) covering about 8 years that needs to be loaded into BW.
What is the best approach to follow?
1) Load everything into one cube and create aggregates, or
2) Create 4 different cubes (with the same data model), load 2 years of data into each cube (2 years * 4 cubes = 8 years of data), and develop a MultiCube on top of the 4 cubes.
If so, how can I load the data into the respective cubes?
Ex: Let's assume I have data from 01.01.2000 to 31.12.2007, which is 8 years of data.
Now I have 4 history cubes plus the current cube: C1, C2, C3, C4 & C5.
How can I specifically:
load data from 01.01.2000 to 31.12.2001 (2 Years) to C1
load data from 01.01.2002 to 31.12.2003 (2 Years) to C2
load data from 01.01.2004 to 31.12.2005 (2 Years) to C3
load data from 01.01.2006 to 31.12.2007 (2 Years) to C4
load data from 01.01.2008 onwards to C5 (currently loading)
Please advise the best approach to follow, and why.
Thanks
If cube C5 is already being loaded and the reports are based on it, then, if you do not want to create additional reports, you can go ahead and load the history data into C5 as well.
What is your source system and DataSource? Are selection conditions available (in your InfoPackage) to specify the selections? If so, you can go ahead and do full loads to the current cube.
For query performance, you can create aggregates on this cube based on the fiscal period / month / year (whichever InfoObject is used in the reports).
If your reports are not restricted by time period, a MultiCube query runs as parallelized sub-queries, so 4 dialog processes will be occupied on your BW system every time the query is executed.
Also, any change you want to make to the cube will have to be copied to all cubes, so maintenance may be a concern.
If there is enough justification, then approach 2 can be taken up.
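In BW itself the split is done with selection conditions on a date or fiscal-period field in each cube's InfoPackage: one full load per cube, each with its own 2-year range. The routing logic amounts to the following sketch (Python for illustration only; the cube names and date ranges are taken from the question, the record format is hypothetical):

```python
from datetime import date

# 2-year buckets mirroring the InfoPackage selection ranges from the question.
BUCKETS = {
    "C1": (date(2000, 1, 1), date(2001, 12, 31)),
    "C2": (date(2002, 1, 1), date(2003, 12, 31)),
    "C3": (date(2004, 1, 1), date(2005, 12, 31)),
    "C4": (date(2006, 1, 1), date(2007, 12, 31)),
}

def target_cube(posting_date):
    """Return the cube whose date range contains posting_date, else C5."""
    for cube, (lo, hi) in BUCKETS.items():
        if lo <= posting_date <= hi:
            return cube
    return "C5"  # current cube, 2008 onwards
```

Each InfoPackage selection plays the role of one entry in this table, so no record can land in two cubes and the MultiCube union stays disjoint.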
Similar Messages
-
How to make data loaded into cube NOT ready for reporting
Hi Gurus: Is there a way by which data loaded into a cube can be made NOT available for reporting?
Please suggest.
Thanks
See, by default a request that has been loaded into a cube is available for reporting. Now, if you have an aggregate, the system needs this new request to be rolled up into the aggregate as well before it becomes available for reporting. The reason: queries are written against the cube, not the aggregate, so whether a query hits a particular aggregate is only known at runtime. This means that whether a query gets its data from the aggregate or from the cube, it should ultimately get the same data in both cases. If a request has been added to the cube but not to the aggregate, the two objects would contain different data. The system takes the safer route of not making the un-rolled-up data visible at all, rather than serving inconsistent data.
Hope this helps... -
AWM Newbie Question: How to filter data loaded into cubes/dimensions?
Hi,
I am trying to filter the amount of data loaded into my dimensions in AWM (e.g., I only want to load 1-2 years' worth of data for development purposes). I can't seem to find a place in AWM where you can specify a WHERE clause. Is there something else I must do to filter the data?
Thanks
Hi there,
Which release of Oracle OLAP are you using? 10g? 11g?
You can use database views to filter your dimension and cube data and then map these in AWM
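For example, a view that restricts a hypothetical fact table to a two-year window can be mapped in AWM in place of the base table. Sketched here with sqlite purely for illustration (the table and column names are made up; in Oracle the CREATE VIEW statement is the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical fact table; in AWM you would map the view, not the table.
cur.execute("CREATE TABLE sales_fact (time_id TEXT, amount REAL)")
cur.executemany("INSERT INTO sales_fact VALUES (?, ?)",
                [("2005-03-01", 10.0), ("2006-07-15", 20.0), ("1999-01-01", 5.0)])
# The WHERE clause does the filtering that AWM itself does not offer.
cur.execute("""CREATE VIEW sales_fact_dev AS
               SELECT * FROM sales_fact
               WHERE time_id BETWEEN '2005-01-01' AND '2006-12-31'""")
rows = cur.execute("SELECT COUNT(*) FROM sales_fact_dev").fetchone()[0]
print(rows)  # 2
```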
Thanks,
Stuart Bunby
OLAP Blog: http://oracleOLAP.blogspot.com
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
How to delete the data loaded into MySQL target table using Scripts
Hi Experts
I created a job with a validation transform. The data that passes validation is loaded into a Pass table, and the data that fails is loaded into a Fail table.
My requirement is: if any data was loaded into the Fail table, then I have to delete the data loaded into the Pass table using a script.
In the script I have written the code as
sql('database','delete from <tablename>');
but the SQL query execution is raising an exception.
How can I delete the data loaded into the MySQL target table using scripts?
Please guide me on this error.
Thanks in Advance
PrasannaKumar
Hi Dirk Venken,
I got the solution: my mistake was that the query was not correct for MySQL.
Working query:
sql('MySQL', 'truncate world.customer_salesfact_details')
Error query:
sql('MySQL', 'delete table world.customer_salesfact_details')
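The underlying point: there is no DELETE TABLE statement in MySQL (or standard SQL). Rows are removed either with DELETE FROM ... (which supports a WHERE clause) or with TRUNCATE [TABLE] .... A minimal illustration using sqlite (which supports DELETE FROM but not TRUNCATE; the table name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customer_salesfact_details (id INTEGER)")
cur.executemany("INSERT INTO customer_salesfact_details VALUES (?)", [(1,), (2,)])

# Valid: DELETE FROM <table>. 'DELETE TABLE <table>' is a syntax error.
cur.execute("DELETE FROM customer_salesfact_details")
count = cur.execute("SELECT COUNT(*) FROM customer_salesfact_details").fetchone()[0]
print(count)  # 0
```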
Thanks for your concern
PrasannaKumar -
How to insert this date column into some other table
Hi all,
I have to write a procedure that selects data from one table and inserts the date-column data into another table. The data is in several different date formats, but in the second table the date column format should be mm/dd/yyyy.
Example:
If the date in the first table is 1992, it should be modified to "01/01/1992" and inserted into the second table. The incoming values vary (below are the different types):
RELEASE_DATE
1992
1962
July 1987
Aug 1968
10/30/06
6/1/2005
2004
7.25.1951
12/12/2006
12/1/2005
1992
2003
2005
1958
2002
11/11/03
1/1/91
50-21-2001
10.28.1991
Please, anybody, can you help?
Thanks for giving the data type. Here is an example:
SQL> desc atestdate
Name Null? Type
CHARDATE VARCHAR2(10)
ACTDATE DATE
SQL> select * from atestdate;
1992
July 1987
Aug 1968
10/30/06
6/1/2005
2004
7.25.1951
12/12/2006
12/1/2005
1992
2003
2005
1958
2002
11/11/03
1/1/91
50-21-2001
10.28.1991
18 rows selected.
SQL> select char_trim_x(chardate) from atestdate;
1992
July-1987
Aug-1968
10-30-06
6-1-2005
2004
7-25-1951
12-12-2006
12-1-2005
1992
2003
2005
1958
2002
11-11-03
1-1-91
50-21-2001 --? 50 cannot be a month or a day; do not give junk values
10-28-1991
18 rows selected.
char_trim_x is a function I created with the REPLACE function; it brings the data into a somewhat orderly format.
But you have not yet posted your work.
-SK
-
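Following up on the date-cleanup thread above: since char_trim_x itself was never posted, here is a hedged sketch (Python for illustration) of one way to normalize such mixed release-date strings to mm/dd/yyyy. The format list and the default of 01 for a missing month/day are assumptions based on the question; genuinely invalid values like 50-21-2001 return None rather than a guessed date:

```python
from datetime import datetime

# Two-digit-year pattern first, so '10/30/06' is not parsed as year 6.
FORMATS = ["%m/%d/%y", "%m/%d/%Y", "%m.%d.%Y", "%B %Y", "%b %Y", "%Y"]

def normalize(value):
    """Try each known pattern; return mm/dd/yyyy, or None for junk input."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value.strip(), fmt).strftime("%m/%d/%Y")
        except ValueError:
            continue
    return None  # e.g. '50-21-2001': 50 cannot be a month or a day
```

Missing parts default to January / the 1st because strptime fills unspecified fields with 1, which matches the "1992 becomes 01/01/1992" requirement.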
Hi,
When I run a DTP, I get the error below in SM21. I checked notes 1634716 (SYB: Lock timeout or deadlocks) and 1933239 (SYB: Short dumps with resource shortage and runtime error DBIF_SETG_SQL_ERROR), but they didn't solve the issue. Any help will be appreciated.
Database error 12205 at OPC
> [ASE Error SQL12205]Could not acquire a lock within the
> specified wait period. SERVER level wait period=1800 seconds
> spid=568, lock type=shared intent, dbid=4, objid=838472672,
> pageno=0, rowno=0. Aborting the transaction.#
Runtime error "DBIF_DSQL2_SQL_ERROR" occurred.
Thanks,
I also get the same error when I try to collapse the cube.
It always fails with the same error in RSDU_PARTITIONS_INFO_GET_SYB.
I found a correction in note 1616762 (SYB: Fix collection for table partitioning), but it didn't solve anything; it gives CX_SY_NATIVE_SQL_ERROR in the same function. The correction deletes the "at isolation 1" line.
Any idea ? -
How is data loaded into the BCS cubes?
We are on SEM-BW 4.0, package level 13. I'm totally new to BCS from the BW viewpoint. I'm not the SEM person, but I support the BW side.
Can anyone explain to me, or point me to documentation that explains, how the data gets loaded into cube 0BCS_C11 Consolidation (Company/Cons Profit Center)? I installed the delivered content and I can see the various export DataSources that were generated. However, I do not see the traditional update rules, InfoSources, etc.
The SEM person has test-loaded some data into this cube, and I can see the request under 'Manage' and even display the content. However, the status light remains yellow, and the data is not available for reporting unless I manually set the status to green.
Also, on the Manage tab under InfoPackage I see this note: "Request loaded using the APO interface without monitor log."
Any and all assistance is greatly appreciated.
Thanks
Denny
Hi Dennis,
For reporting the virtual cube 0BCS_VC11 which is fed by 0BCS_C11 is used.
You don't need to worry about the yellow status. The request is closed automatically after it reaches 50000 records.
About the data stream - you are right - the BW cube is used.
And if your BW has cubes with information for BCS on a monthly basis, you can arrange a load from a data stream.
I make this BW cube as similar to 0BCS_C11 in structure as possible, for a smooth data load. The cube might be fed by another cube that contains the information in another format; in the update rules of the first cube you can transform the data so that the cube structures are compatible.
Best regards,
Eugene -
How to create a report in BEx based on the last data loaded into a cube?
I have to create a query with a predefined filter based upon the "latest SAP date", i.e. the user only wants to see the very latest situation from the last load. The report should show only the latest inventory stock situation from the last load. As I'm new to BEx, I am not able to find a way to achieve this. Is there any time characteristic which holds the last update date of a cube? Please help and suggest how to achieve this.
Thanks in advance.
Hi Rajesh,
Thanks for your suggestion.
My requirement is a little different. I build the query on a multiprovider, and I want to see the latest record in the report based only on the latest date (not the system date) on which data was last loaded into the cube. This date (when the cube was last loaded) is not populated from any DataSource. I guess I have to add the "0TCT_VC11" cube to my multiprovider to fetch the date when my cube was last loaded. Please correct me if I'm wrong.
Thanks in advance. -
How can I add dimensions and load data into Planning applications?
Please let me know how I can add dimensions and load data into a Planning application without doing it manually.
You can use tools like ODI, DIM or HAL to load metadata and data into Planning applications.
The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools. There are also many other ways to achieve the same.
- Krish -
How to delete aggregated data in a cube without deleting the aggregates?
Hi Experts,
How to delete aggregated data in a cube without deleting the aggregates?
Regards
Alok Kashyap
Hi,
You can deactivate the aggregate. The data will be deleted, but the structure will remain.
If you switch off an aggregate, it is not considered by the OLAP processor, and reports fetch their data directly from the cube. Switching off an aggregate does not delete any data; the aggregate is just temporarily unavailable, as if it were not built on the InfoCube. Its definition is not deleted, and data can still be rolled up into it.
You can temporarily switch off an aggregate to check whether you actually need it: a switched-off aggregate still holds the data from previous loads, but it is simply not used when a query is executed.
If you deactivate an aggregate, the system deletes all of its data and database tables, while the aggregate's definition remains.
Later, when you need that aggregate again, it has to be rebuilt and refilled from scratch.
Hope this helps.
Hope this helps.
Thanks,
JituK -
Data load into SAP ECC from Non SAP system
Hi Experts,
I am very new to BODS, and I want to load historical data from a non-SAP source system into SAP R/3 tables like VBAK and VBAP using BODS. Can you please provide steps/documents or guidelines on how to achieve this?
Regards,
Monil
Hi,
In order to load into SAP you have the following options
1. Use IDocs. There are several standard IDocs in ECC for specific objects (MATMAS for materials, DEBMAS for customers, etc.). You can generate and send IDocs as messages to the SAP target using BODS.
2. Use LSMW programs to load into the SAP target. These programs will require input files generated in specific layouts, which BODS can produce.
3. Direct input - the direct input method is to write ABAP programs targeting specific tables. This approach is very complex, so a lot of thought needs to be applied.
The OSS Notes supplied in previous messages are all excellent guidance to steer you in the right direction on the choice of load, etc.
However, the data load into SAP needs to be object-specific. Merely targeting the sales tables will not help, as the sales document data held in the VBAK and VBAP tables you mentioned relates to articles: these tables hold sales document data for already-created articles. So if you want to specifically target these tables, you may need to prepare an LSMW program for the purpose.
To answer your question on whether it is possible to load objects like materials, customers, vendors, etc. using BODS: yes, you can.
Below is a standard list of IDocs that you can use for this purpose to load into SAP ECC system from a non SAP system.
Customer Master - DEBMAS
Article Master - ARTMAS
Material Master - MATMAS
Vendor Master - CREMAS
Purchase Info Records (PIR) - INFREC
The list is endless.
In order to achieve this, you will need the functional design consultants to provide ETL mappings from the legacy data to the IDoc target schema and fields (better to have the technical table names and fields too). You should then prepare the data, putting it through the standard check-table validations for each object along with any business-specific conversion rules and validations. Having prepared this data, you can either generate flat-file output for loading into SAP using LSMW programs, or generate IDoc messages to the target SAP system.
If you are going to post IDocs directly into the SAP target using BODS, you will need to create a partner profile for BODS to send IDocs, and define the IDocs you need as inbound IDocs. There are a few more settings, such as RFC connectivity and authorizations, required for BODS to successfully send IDocs into the SAP target.
Do let me know if you need more info on any specific queries or issues you may encounter.
kind regards
Raghu -
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement. RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below is my case study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here its right!! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here must be 30,00 not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.
Mel,
There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. Examples include loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it for a result of 10, the aggregate command would recalculate based on january and february for a result of 20.
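Generically, the 'self' node technique looks like this (a Python sketch, not OLAP DML; the values match the example: January = 5, February = 15, and 10 loaded directly at the 2005 level, carried by a made-up 2005_SELF leaf):

```python
# Children of '2005', including a '2005_SELF' leaf that carries the value
# that was loaded directly at the aggregate level.
children = {"2005": ["200501", "200502", "2005_SELF"]}
data = {"200501": 5.0, "200502": 15.0, "2005_SELF": 10.0}

def aggregate(node):
    """Recompute a parent strictly from its children, as AGGREGATE does."""
    return sum(data.get(child, 0.0) for child in children[node])

print(aggregate("2005"))  # 30.0 -- without the SELF leaf this would be 20.0
```

Because the directly-loaded amount lives in a leaf, re-running the aggregation is repeatable: it always recomputes 2005 from its children instead of overwriting or double-counting the preloaded value.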
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
3. Data is loaded at a level based upon another dimension; for example, product being loaded at 'UPC' in EMEA but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE to push the data down to the leaves so that the aggregate values are calculated correctly during aggregation. -
How to Achieve this in SQL Query?
How to Achieve this ?
I have a table with numeric values populated like this:
create table random_numeral (numerals Number(10));
insert into random_numeral values (1);
insert into random_numeral values (2);
insert into random_numeral values (3);
insert into random_numeral values (4);
insert into random_numeral values (5);
insert into random_numeral values (6);
insert into random_numeral values (56);
insert into random_numeral values (85);
insert into random_numeral values (24);
insert into random_numeral values (11);
insert into random_numeral values (120);
insert into random_numeral values (114);
Numerals
1
2
3
4
5
6
56
85
24
11
120
114
I want to display the data as follows
col1 / col2 / col3
1 / 2 / 3
4 / 5 / 6
11 / 24 / 56
85 / 114 / 120
Can anyone help me?
I hope there might be some simple way to do this, and I am waiting for experts to reply.
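Conceptually the task is just: sort the values, then emit them three per row. A minimal sketch of that logic (Python for illustration, using the sample values from the question):

```python
def rows_of_three(values):
    """Sort, then group into consecutive triples (last row may be short)."""
    s = sorted(values)
    return [s[i:i + 3] for i in range(0, len(s), 3)]

nums = [1, 2, 3, 4, 5, 6, 56, 85, 24, 11, 120, 114]
for row in rows_of_three(nums):
    print(" / ".join(str(n) for n in row))
```

With the 12 sample values this prints exactly the four rows asked for: 1 / 2 / 3, 4 / 5 / 6, 11 / 24 / 56, 85 / 114 / 120.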
Try the below query.
SQL> select * from random_numeral;
NUMERALS
1
2
3
4
5
6
56
85
24
11
120
NUMERALS
114
100
140
14 rows selected.
SQL> select a.numerals ||' / '||b.numerals||' / '||c.numerals from
2 (select numerals,rownum rn1 from
3 (
4 select numerals,mod(row_number() over(partition by 1 order by numerals),3) rn
5 from random_numeral
6 )
7 where rn=1) a,
8 (select numerals,rownum rn1 from
9 (
10 select numerals,mod(row_number() over(partition by 1 order by numerals),3) rn
11 from random_numeral
12 )
13 where rn=2) b,
14 (select numerals,rownum rn1 from
15 (
16 select numerals,mod(row_number() over(partition by 1 order by numerals),3) rn
17 from random_numeral
18 )
19 where rn=0) c
20 where a.rn1=b.rn1(+)
21 and b.rn1=c.rn1(+)
22 /
A.NUMERALS||'/'||B.NUMERALS||'/'||C.NUMERALS
1 / 2 / 3
4 / 5 / 6
11 / 24 / 56
85 / 100 / 114
120 / 140 /
SQL>
Cheers,
Mohana -
Adding leading zeros before data loaded into DSO
Hi
In the PROD_ID column below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character, total length 40. If the leading zeros are missing, DSO activation fails, and I have to add them manually in the PSA table. I want to add the leading zeros, if they are missing, before the data is loaded into the DSO. For example, if the value is 1502 there should be 36 zeros in front of it, and if it is 265721 there should be 34 zeros. Only values of length 4 or 6 are coming in, so 36 or 34 leading zeros are always needed when they are missing.
Can we use the function module CONVERSION_EXIT_ALPHA_INPUT? Since this is a character field, I'm not sure how to use it in this case. Do I need to convert it to an integer first?
Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please give sample code and indicate where it should go - in the rule type or in the start routine.
Hi,
Can you check at the InfoObject level which conversion routine it uses?
Use transaction RSD1, enter your InfoObject, and display it.
At the DataSource level you can also see the external/internal format that is maintained.
If your InfoObject uses the ALPHA conversion routine, it will get its leading zeros automatically.
Check how the data comes from the source, e.g. in RSA3.
If you are getting this issue only for certain records, you need to check those records.
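For reference, what the ALPHA conversion exit (CONVERSION_EXIT_ALPHA_INPUT) does for numeric input is simply right-justify the value and pad it with leading zeros; non-numeric values are left-justified unchanged. The padding logic itself, shown in Python only as an illustration of the behavior (length 40 as in the question):

```python
def alpha_input(value, length=40):
    """Pad a purely numeric value with leading zeros to the given length,
    mimicking the ALPHA conversion exit; non-numeric values pass through."""
    value = value.strip()
    return value.rjust(length, "0") if value.isdigit() else value

print(alpha_input("1502"))    # 36 zeros followed by 1502
print(alpha_input("265721"))  # 34 zeros followed by 265721
```

In the BW 3.5 flow the equivalent call would go in a transfer-rule routine for the PROD_ID field, but check first whether simply assigning the ALPHA routine to the InfoObject covers it, as the answer above suggests.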
Thanks -
How to automate the data load process using data load file & task Scheduler
Hi,
I am automating the process of loading data into a Hyperion Planning application with the help of a Data_Load.bat file and Task Scheduler.
I have created the Data_Load.bat file, but I am unable to complete the rest of the process.
So could you help me automate the data load process using the Data_Load.bat file and Task Scheduler, or tell me what other files are required to achieve this?
Thanks
To follow up on your question: are you using MaxL scripts for the data load?
If so, I have seen an issue within the batch (e.g. load_data.bat): if you do not use the full MaxL script path in the batch, then when it runs through Task Scheduler the task will appear to work, but the log and/or error file will not be created - meaning the batch claims it ran from Task Scheduler although it didn't do what you needed it to.
If you are using MaxL, use this in the batch:
"essmsh C:\data\DataLoad.mxl"
You can also use the full path for essmsh; either way works. The only reasons I would expect the MaxL run to still fail are if the batch does not pick up all the MaxL PATH changes, or if you need to update your environment variables so that the essmsh command works in a command prompt.