Cube for PO Final Release date
Hi all,
Is there any cube that contains the Purchase Order and Purchase Requisition final release date?
Please update
Kind Regards,
AI.
Hi,
If you are not getting the final release date, you can create a generic DataSource based on table CDHDR; the UDATE field will meet your need.
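The answer above can be sketched in Python pseudo-logic (a minimal illustration with invented records; the CDHDR-style field layout and the object class value 'EINKBELEG' are assumptions for the example, not taken from the thread):

```python
# Hedged sketch: derive a "final release date" for a purchasing document
# from change-document header records (modeled after CDHDR fields
# OBJECTCLAS, OBJECTID, UDATE). Record layout and values are illustrative.

def final_release_date(cdhdr_rows, objectclas, objectid):
    """Return the latest change date (UDATE) recorded for the document,
    taken here as a proxy for the final release date."""
    dates = [r["UDATE"] for r in cdhdr_rows
             if r["OBJECTCLAS"] == objectclas and r["OBJECTID"] == objectid]
    return max(dates) if dates else None

rows = [
    {"OBJECTCLAS": "EINKBELEG", "OBJECTID": "4500000001", "UDATE": "20110105"},
    {"OBJECTCLAS": "EINKBELEG", "OBJECTID": "4500000001", "UDATE": "20110111"},
    {"OBJECTCLAS": "BANF",      "OBJECTID": "0010000001", "UDATE": "20110103"},
]
print(final_release_date(rows, "EINKBELEG", "4500000001"))  # 20110111
```

The real DataSource would of course read CDHDR itself; this only shows the "latest UDATE per document" selection the reply suggests.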
thanks
Edited by: Mrityunjay on Jan 11, 2011 5:43 PM
Similar Messages
-
Creating a cube for the 2LIS_11_VASTH data source
Hi everyone,
I am quite confused about how to create a cube for an R/3 data source, for example 2LIS_11_VASTH. Do I have to create my own cube by defining the individual InfoObjects for each field, or can I do it from Business Content? Please let me know the procedure for doing it in Business Content.
Thank you very much.
Hi Vidyarthi,
2LIS_11_VASTH updates InfoCube 0SD_C13 (Service Level: Orders) and an ODS object, 0SD_O03.
You can install the required objects from Business Content, check them against your client's requirements, and decide whether you need any customization.
Hope this helps.
Cheers
Raja -
Workbook Displays Data Not Present in the Cube for Non-Cumulative KF
Hi All,
I have a user that is reporting using noncumulative info object 0TOTALSTCK. If I look in the cube, the data starts in October and goes from there. When viewing the data, there are figures that display for the months prior to October even though there is no data in the cube.
The interesting thing is that when viewed in excel using the analyser, these figures display using square brackets. This is then throwing out the total values of the prior months as they are being added in.
Does anyone have an idea why this value would display even though there is no data in the cube for it?
Thanks in advance for any help supplied.
Just thought I would share the solution to this.
It looks like the validity date for the cube was not set up properly. I fixed it using transaction RSDV.
Details on how to proceed can be found here:
http://help.sap.com/saphelp_nw70/helpdata/EN/02/9a6a1f244411d5b2e30050da4c74dc/frameset.htm -
Best practices for loading apo planning book data to cube for reporting
Hi,
I would like to know whether there are any Best practices for loading apo planning book data to cube for reporting.
I have seen 2 types of Design:
1) The Planning Book Extractor data is Loaded first to the APO BW system within a Cube, and then get transferred to the Actual BW system. Reports are run from the Actual BW system cube.
2) The Planning Book Extractor data is loaded directly to a cube within the Actual BW system.
We do these data loads during evening hours once in a day.
Rgds
GK
Hi GK,
What I have normally seen is:
1) Data would be extracted from APO Planning Area to APO Cube (FOR BACKUP purpose). Weekly or monthly, depending on how much data change you expect, or how critical it is for business. Backups are mostly monthly for DP.
2) Data extracted from APO planning area directly to DSO of staging layer in BW, and then to BW cubes, for reporting.
For DP this is done monthly; for SNP, daily.
You can also use option 1 as you mentioned. In this case the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
The benefit here is that data has to be extracted from the planning area only once, so the planning area is available to jobs/users for more time. However, backup and reporting extraction are mixed in this flow, so an issue in the flow could impact both. We have used this scenario recently and are yet to see the full impact.
Thanks - Pawan -
How the data is fetched from the cube for reporting - with and without BIA
Hi all,
I need to understand the following scenario (how the data is fetched from the cube for reporting):
I have a query on a MultiProvider connected to two cubes, say A and B. A has a BIA index, B does not. No aggregates are created on either cube.
CASE 1: I have taken RSRT statistics with BIA on; in the aggregation layer it says:

Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
Cube A | (blank) | 0.624305 | 8,087,502 | 2,011
Cube B | E | 42.002653 | 1,669,126 | 6
Cube B | F | 98.696442 | 2,426,006 | 6
CASE 2: I have taken the RSRT statistics with the BIA index disabled; in the aggregation layer it says:

Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
Cube B | E | 46.620825 | 1,669,126 | 6
Cube B | F | 106.148337 | 2,426,030 | 6
Cube A | E | 61.939073 | 3,794,113 | 3,499
Cube A | F | 90.721171 | 4,293,420 | 5,584
Now my question is: why is there a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
Can someone please clarify this difference in records being selected?
Hi,
Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
To get more details about the selection, you can activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
Regards,
Jens -
How the data is fetched from the cube for reporting
Hi all,
I need to understand the following scenario (how the data is fetched from the cube for reporting):
I have a query on a MultiProvider connected to two cubes, say A and B. A has a BIA index, B does not. No aggregates are created on either cube.
CASE 1: I have taken RSRT statistics with BIA on; in the aggregation layer it says:

Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
Cube A | (blank) | 0.624305 | 8,087,502 | 2,011
Cube B | E | 42.002653 | 1,669,126 | 6
Cube B | F | 98.696442 | 2,426,006 | 6
CASE 2: I have taken the RSRT statistics with the BIA index disabled; in the aggregation layer it says:

Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
Cube B | E | 46.620825 | 1,669,126 | 6
Cube B | F | 106.148337 | 2,426,030 | 6
Cube A | E | 61.939073 | 3,794,113 | 3,499
Cube A | F | 90.721171 | 4,293,420 | 5,584
Now my question is: why is there a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
Can someone please clarify this difference in records being selected?
Hi Jay,
Thanks for sharing your analysis.
The only reason I can think of logically is that BWA holds the information from both the E and F tables in one place, and hence after selecting the records it is able to aggregate them and transport the result to OLAP.
In the second case, since the E and F tables are separate, aggregation might happen at OLAP, and hence you see a higher number of records.
Our experts in the BWA forum might be able to answer this better if you post the question there.
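The aggregation difference suggested above can be illustrated with a toy example (invented data; it only shows why one combined pass can hand fewer records upward than two per-table passes):

```python
# Hedged illustration: aggregating E and F together in one pass can emit
# fewer records than aggregating each table separately and combining later.
# Keys stand for characteristic combinations; values are key figures.
from collections import Counter

e_table = [("K1", 10), ("K2", 5)]
f_table = [("K1", 3), ("K3", 7)]

def aggregate(rows):
    totals = Counter()
    for key, val in rows:
        totals[key] += val
    return totals

combined = aggregate(e_table + f_table)            # one pass over both tables
separate = (aggregate(e_table), aggregate(f_table))  # per-table passes

print(len(combined))                               # 3 result records
print(len(separate[0]) + len(separate[1]))         # 4 records sent upward
```

Key K1 exists in both tables, so the combined pass merges it once, while the per-table passes each emit it and leave the merge to the next layer.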
Thanks,
Krishnan -
Multiple data sources and cubes for mining
We have two data sources:
1 - OLTP database for transactional operations
2 - Data Warehouse for analysis
We use change data capture to track changes in the OLTP and upload to DW each night.
Currently, we have one cube built on top of our DW for analysis, KPIs, etc.
However, if we wish to use the OLTP DB for data mining, can this be done in the same solution using a new data source, or do we need to create a new cube?
Hi Darren,
According to your description, you want to use the OLTP DB for data mining, and you want to know whether this can be done in the same solution using a new data source or whether you need to create a new cube.
In your scenario, if the cube structure for data mining is the same as the original cube, you needn't create anything; just edit the connection of the data source to point to the OLTP database. If the cube structure for data mining is not the same as the original cube, then I'm afraid you need to create a new cube.
Regards,
Charlie Liao
TechNet Community Support -
Error while entering the Final Process Date for an employee
Hi,
I am receiving the error message below when I try to enter the Final Process date and save for an employee on the End Employment screen.
HR_51746_ASG_INV_ASG_ID (PROCEDURE=pay_us_tax_internal.maintain_tax_percentage)
Please let me know if anyone has an idea on the cause for this error message and how to fix this.
Thanks,
Jithendra
Edited by: user11688089 on Feb 6, 2013 3:11 AM
What is the termination date of this employee, and what final process date are you using? Were any payrolls run after his/her termination date? Even though this employee was terminated a while ago, if any payrolls were run after that (even if this employee was skipped), the system still includes this employee in those runs and does not allow you to enter a final process date prior to that pay date.
Either use a final process date that falls in the next pay period after the latest payroll run, or try running deltax.sql (which removes tax records) on that assignment and see if that works.
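The rule described can be sketched as a simple check (a hedged Python illustration with invented payroll run dates; the real validation lives inside the HR application):

```python
# Hedged sketch: a final process date is rejected if any payroll pay date
# falls after it. Dates here are illustrative only.
from datetime import date

payroll_run_dates = [date(2013, 1, 15), date(2013, 1, 31)]

def final_process_date_allowed(fpd):
    """The final process date must not precede the latest payroll pay date."""
    return fpd >= max(payroll_run_dates)

print(final_process_date_allowed(date(2013, 1, 20)))  # False
print(final_process_date_allowed(date(2013, 2, 5)))   # True
```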
-Karthik -
Getting Objects in cubes for Query
Hi gurus,
I am using the GL Transaction Figures cube for a query, and I need to be able to filter the query by the Account Type object, but the Account Type object is not physically in the cube.
I was wondering if it is possible to add the Account Type InfoObject as a navigational attribute of the Chart of Accounts InfoObject, Value Type InfoObject, Version InfoObject, GL Account InfoObject etc., and then enable it as a navigational attribute in the cube as well.
If this is not going to work, can someone help with a better workaround?
Thanks
A navigational attribute may work, but this many navigational attributes? That would make the report carry multiple variables, one for each navigational attribute. Instead, talk to your business and ask which navigational attribute is of most interest to them, so you can live with a single variable prompt.
FYI, this is how a navigational attribute works. Let's say you have a navigational attribute Plant of Customer (0CUSTOMER_0PLANT). If you are interested in Plant 100, then during query execution the SQL query first goes to the master data table to identify all the customers that belong to Plant 100. It then goes to the cube to see which customers from that list exist in the cube and finally reports on those customers.
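The two-step lookup described above can be sketched as follows (a toy Python illustration; the master data and fact rows are invented):

```python
# Hedged sketch of how a navigational attribute filter resolves at query
# time: first restrict on the master-data table, then read the cube.
customer_master = {            # 0CUSTOMER -> 0PLANT (master data table)
    "C1": "100", "C2": "200", "C3": "100",
}
cube_rows = [                  # fact rows keyed by customer
    {"customer": "C1", "amount": 50},
    {"customer": "C2", "amount": 70},
    {"customer": "C3", "amount": 30},
]

def query_by_plant(plant):
    # Step 1: master data lookup - which customers belong to the plant?
    customers = {c for c, p in customer_master.items() if p == plant}
    # Step 2: read only those customers from the cube
    return [r for r in cube_rows if r["customer"] in customers]

print(sum(r["amount"] for r in query_by_plant("100")))  # 80
```

This is also why heavy use of navigational attributes costs performance: every filter adds a master-data lookup before the cube read.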
Hope this helps. -
When I checked out Verizon's data plans online, the only option I saw that did not involve automatic renewals was the option to pay $5 for a day of 300 MB. The other options (e.g., $20/month for 1GB) all appeared to involve automatic renewals every month unless I go in and cancel the plan. However, during a chat, a Verizon representative told me that auto renewal of the prepaid plans is an option that can be avoided, it just isn't listed in the online information. Has anyone found a way to only sign up for one month of data without having to cancel the next payment?
Also, when I looked online at a site that compared data plans, it stated that one needed to use the Verizon network every 4 months or else the sim card would be permanently deactivated. But in a couple of online discussions, the limit was listed as being 3 months. Verizon told me 6 months. Does anyone know which is currently correct?
Finally, I have heard that if the sim card does become deactivated, Verizon tends to insist on making one buy a new one with a regular data plan and not the prepaid options. Has anyone had any experience with this?
GeekBoy.from.Illinois, thank you for replying. I tried the Verizon support forum, but before I can ask a question there I need to register, and the first step in registering is entering my mobile phone number. Since I don't have a Verizon phone, I can't register and, therefore, can't ask any questions. I did look through previous questions and answers, but it only confirmed that Verizon employees do not give out consistently correct information. I guess I will try talking or chatting with two or three more of them and hope that whatever the majority answers are will be the correct ones.
-
How to get the full and final settlement date from PC_payresults
Hi Experts,
We are preparing a functional specification for a report on full and final settlement.
We need to extract the full and final settlement date from the payroll results.
Full and final settlement would be an off-cycle run.
How can we identify that a particular off-cycle run is meant for full and final settlement?
Full and final settlement can be done after the date of relieving.
Please share your ideas on the field names and the logic to fetch this.
Thanks in advance.
Regards,
Sairam.
Hi Praveen,
I found it in HRPY_RGDIR through SE11.
But I'm confused about the logic to find the full and final settlement processed date.
First of all, the report has to check the WPBP table for whether the employment status is zero.
Then it has to read the HRPY_RGDIR table.
Here, what should the system check? What conditions should be validated to fetch the full and final settlement processed date? Will it be FPPER and INPER as 00000000, or shall we do it from the off-cycle reason?
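The candidate conditions being asked about can be sketched as follows (a hedged Python illustration; the field widths, the zero-period value, and the off-cycle reason code are assumptions for the example, not confirmed payroll logic):

```python
# Hedged sketch: treat a payroll result row as a full-and-final (off-cycle)
# run when the for-period and in-period are zeroed, or an off-cycle reason
# is set. Modeled loosely on HRPY_RGDIR fields; values are illustrative.

def is_full_and_final(rgdir_row):
    offcycle_periods = (rgdir_row["FPPER"] == "000000"
                        and rgdir_row["INPER"] == "000000")
    return offcycle_periods or bool(rgdir_row.get("OCRSN"))

rows = [
    {"FPPER": "201301", "INPER": "201301", "OCRSN": ""},      # regular run
    {"FPPER": "000000", "INPER": "000000", "OCRSN": "FFSL"},  # off-cycle
]
print([is_full_and_final(r) for r in rows])  # [False, True]
```

Which of the two conditions is authoritative is exactly the open question in this thread; the sketch only makes the alternatives concrete.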
Please share your ideas. Thanks in advance.
Regards, Sairam.
How to find rows in F table of a cube for a given request ID
Can someone tell me how to find the rows in the F table of a cube for a given request id?
Hi,
Copy the request ID for that cube, go to Manage for the cube, select the Contents tab, and select InfoCube content.
Select the list of objects to be displayed with the results; in that screen enter the request ID in the Request ID row, leave 'Max. no. of hits' blank at the bottom, and execute at the top.
This will display all the data rows loaded with that request; these are the records from the cube.
For the fact table, go to SE11 and enter /BIC/FXXXX, where "XXXX" is the cube name for a custom cube.
Get the request SID of that request from the cube contents as described above; for the output fields, select Request SID along with Request ID.
In SE11 enter table /BIC/DXXXXP, where "XXXX" is the cube name and P is the packet dimension. Here, pass the request SID into 'SID_0REQUID' and get the value of DIMID.
Pass this DIMID into KEY_XXXXP; this gives the fact table rows for that request DIMID.
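The lookup chain above (request ID → request SID → package DIMID → fact rows) can be sketched as follows (a hedged Python illustration; the table contents and field names are invented stand-ins for /BIC/F&lt;cube&gt; and /BIC/D&lt;cube&gt;P):

```python
# Hedged sketch of the described lookup chain, with toy in-memory tables.
request_sid_by_id = {"REQU_ABC123": 4711}          # from cube contents
package_dim = [                                     # /BIC/D<cube>P rows
    {"DIMID": 17, "SID_0REQUID": 4711},
    {"DIMID": 18, "SID_0REQUID": 4712},
]
fact_rows = [                                       # /BIC/F<cube> rows
    {"KEY_P": 17, "qty": 5},
    {"KEY_P": 17, "qty": 3},
    {"KEY_P": 18, "qty": 9},
]

def fact_rows_for_request(request_id):
    sid = request_sid_by_id[request_id]
    dimids = {d["DIMID"] for d in package_dim if d["SID_0REQUID"] == sid}
    return [r for r in fact_rows if r["KEY_P"] in dimids]

print(len(fact_rows_for_request("REQU_ABC123")))  # 2
```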
hope this helps
Regards
Daya Sagar -
Delta extraction for FI (AR & AP) data sources
I tried extracting the data from 2 standard SAP data sources under FI module for Accounts Receivable.
0FI_AR_10(Customer payment history via delta extraction)
0FI_AR_5 (customer payment history)
The data is to be extracted into InfoCube 0FIAR_C05 (Payment History). I assigned the two data sources to an InfoSource and pumped the data into the cube via an ODS. Now the problem is with delta extraction: no records are picked up. So I checked in extractor checker RSA3 with delta update mode and got the following error:
"Could not determine BW release of logical system"
Could someone resolve the issue? Thanks in advance.
Thanks & Regards
Vinay
Hi,
Please go through note 493422 and see if it suits your requirement; follow it accordingly.
Thanks ,
Anil -
What's the field name and table name for the purchase order delivery date?
Hi all,
What's the field name and table name for the purchase order delivery date?
Thanks and regards
EKET-EINDT is the delivery date according to the schedule lines.
For example, the line item has a quantity of 100.
It is sent in three schedules (40, 40, 20).
Then EKET will have three records for one PO line item.
The final delivery date is the EKET-EINDT of the 3rd schedule line item.
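The schedule-line logic above can be sketched as follows (a toy Python illustration; the EKET rows are invented to match the 40/40/20 example):

```python
# Hedged sketch: the final delivery date of a PO line item is the EINDT of
# its last schedule line in EKET (keys EBELN / EBELP / ETENR).
eket = [
    {"EBELN": "4500000001", "EBELP": "00010", "ETENR": "0001",
     "EINDT": "20110110", "MENGE": 40},
    {"EBELN": "4500000001", "EBELP": "00010", "ETENR": "0002",
     "EINDT": "20110120", "MENGE": 40},
    {"EBELN": "4500000001", "EBELP": "00010", "ETENR": "0003",
     "EINDT": "20110131", "MENGE": 20},
]

def final_delivery_date(rows, ebeln, ebelp):
    item = [r for r in rows if r["EBELN"] == ebeln and r["EBELP"] == ebelp]
    last = max(item, key=lambda r: r["ETENR"])   # highest schedule line
    return last["EINDT"]

print(final_delivery_date(eket, "4500000001", "00010"))  # 20110131
```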
Regards,
Ravi -
Validity Table not updating for 0IC_C03 while updating data
Hi,
1. The validity table is not updating for 0IC_C03 while loading data in my BW 7.4 on HANA database.
Key fields: 0Plant
0Calday
If you run the program RSDG_CUBE_VALT_MODIFY after loading the data, the table is updated.
2. I am not getting the 'No marker update' option in the non-cumulative InfoCube 0IC_C03 Manage tab or in the DTP tabs, as per the 7.4 modifications.
And for 2LIS_03_BX in the DTP I am getting only the option below.
Can you please give me a solution for these issues?
Regards
Umashankar
Hi Uma,
Please go through the link below, which might be helpful:
Not able to Edit Validity Table : RSDV
The 'Marker Update' option is available under the Collapse tab of the InfoCube.
Thanks,
Karan