What are RT and BT in HR reporting? How is the data populated into RT and BT?
Hi
I am debugging an HR report (one that uses a logical database). In that report, 'RT' is used.
What is the meaning of 'RT', and how is the data populated into RT?
What is the meaning of 'BT', and how is the data populated into BT?
Kindly clarify my doubts.
Regards,
N.L.
Hi nl,
1. These tables are related to payroll results.
2. Whenever salary is processed, a vast and varied amount of information needs to be stored.
3. Hence, SAP uses the concept of CLUSTER tables.
4. When salary is processed, wage types, amounts, etc. are generated, i.e. RESULTS are generated. The table holding them is RT (Results Table).
5. In the same way, bank transfer data (bank code, name, amount, etc.) also needs to be stored; its table is BT. Similarly, there are other tables as well, e.g. WPBP.
6. Payroll data can be retrieved using macros and also using a function module.
7. Below is the technique:
DATA: mypernr TYPE pernr_d,                    " personnel number, assumed filled earlier
      myseqnr LIKE hrpy_rgdir-seqnr,
      mypy    TYPE payin_result,               " India payroll result structure
      myrt    LIKE TABLE OF pc207 WITH HEADER LINE.

* Read the sequence number of the active result (srtza = 'A')
* for payroll period 09/2004 from the payroll directory.
SELECT SINGLE seqnr FROM hrpy_rgdir
  INTO myseqnr
  WHERE pernr = mypernr
    AND fpper = '200409'
    AND srtza = 'A'.

IF sy-subrc = 0.
  CALL FUNCTION 'PYXX_READ_PAYROLL_RESULT'
    EXPORTING
      clusterid                    = 'IN'      " country cluster ID (IN = India)
      employeenumber               = mypernr
      sequencenumber               = myseqnr
    CHANGING
      payroll_result               = mypy
    EXCEPTIONS
      illegal_isocode_or_clusterid = 1
      error_generating_import      = 2
      import_mismatch_error        = 3
      subpool_dir_full             = 4
      no_read_authority            = 5
      no_record_found              = 6
      versions_do_not_match        = 7
      error_reading_archive        = 8
      error_reading_relid          = 9
      OTHERS                       = 10.

  IF sy-subrc = 0.
    " Copy the results table (RT) out of the imported payroll result.
    myrt[] = mypy-inter-rt.
    " Read a specific wage type, here '1899'.
    READ TABLE myrt WITH KEY lgart = '1899'.
    IF sy-subrc = 0.
      " entl and cumul come from the surrounding report's context.
      entl-cumbal = myrt-betrg.
      MODIFY entl.
      cumul = entl-cumbal.
    ENDIF.
  ENDIF.
ENDIF.
regards,
amit m.
Similar Messages
-
Can we execute reporting while data is loading into that ODS/Cube?
Hi Friends,
Can we execute reports on a particular ODS/InfoCube in the following cases?
1) When data is loading into that ODS/InfoCube.
2) When we are archiving data from that ODS/InfoCube.
Thanks & Regards,
Shaliny.
Hi Shaliny,
First of all, you are in the wrong forum; in Business Intelligence Old Forum (Read Only Archive) you will find better support.
When you are loading data into an InfoCube, you can report only on the requests whose "ready for reporting" icon is set. In the case of an ODS object, I don't think you will get valid reporting, since the ODS data first needs to be activated.
Nevertheless, please post your question in the forum above.
Kostas -
How is the data populated into tables like USR01, USR02, etc.?
Hi,
I have a theoretical doubt: how is the data populated into tables like USR01 and USR02 after creating a
user with SU01? Let me know the process behind it.
Rgds,
Chandra.
Hi Chinna,
When you create users using transaction codes SU01 or SU10, the system uses BAPI_USER_CREATE1, which updates the data in the respective tables.
In the same way, BAPI_USER_CHANGE is used when you modify an existing user.
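As a rough illustration of that flow, a user-creation call might look like the following minimal sketch. The user name, password, and field values are hypothetical examples; only the function module names come from the explanation above.

```abap
DATA: lv_user   TYPE bapibname-bapibname VALUE 'ZTESTUSER',  " hypothetical user
      ls_logond TYPE bapilogond,
      ls_pwd    TYPE bapipwd,
      ls_addr   TYPE bapiaddr3,
      lt_return TYPE STANDARD TABLE OF bapiret2 WITH HEADER LINE.

ls_logond-ustyp  = 'A'.           " dialog user
ls_pwd-bapipwd   = 'Init12345'.   " initial password (example only)
ls_addr-lastname = 'TEST'.

CALL FUNCTION 'BAPI_USER_CREATE1'
  EXPORTING
    username  = lv_user
    logondata = ls_logond
    password  = ls_pwd
    address   = ls_addr
  TABLES
    return    = lt_return.

* The user master tables (USR01, USR02, ...) are only updated
* once the change is committed.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
```

Checking lt_return for error messages after the call is good practice before committing.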
Hope this answers!!
Warm Regards,
Raghu -
How the data is fetched from the cube for reporting - with and without BIA
hi all,
I need to understand the below scenario:(as to how the data is fetched from the cube for reporting)
I have a query, on a multiprovider connected to cubes say A and B. A is on BIA index, B is not. There are no aggregates created on both the cubes.
CASE 1: I have taken RSRT stats with BIA on; in the aggregation layer it says:
Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
Cube B             | E          | 42.002653  | 1,669,126         | 6
Cube B             | F          | 98.696442  | 2,426,006         | 6
CASE 2: I have taken RSRT stats with the BIA index disabled; in the aggregation layer it says:
Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
Cube B             | E          | 46.620825  | 1,669,126         | 6
Cube B             | F          | 106.148337 | 2,426,030         | 6
Cube A             | E          | 61.939073  | 3,794,113         | 3,499
Cube A             | F          | 90.721171  | 4,293,420         | 5,584
Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A in either case; it is 8,087,502 in both.
Can someone please clarify this difference?
Hi,
Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
To get more details about the selection, you can activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
Regards,
Jens -
How to load data from Informatica into BW & how to report on the data
Hi friends,
How do I load data from Informatica into BW, and how do I report on that data
using Cognos (i.e., how do I access the data in SAP BW using the Cognos 8 BI suite)?
Thanks,
madhu.
In order to report on BW data in Cognos, you can extract the data via Open Hub into a database table from which Cognos reads.
For BW informatic integration refer following docs:
http://www.aman.co.il/aman/pfd/DataInteg_BR.q103cd.pdf.pdf
http://h71028.www7.hp.com/enterprise/cache/3889-0-0-225-121.html
http://devnet.informatica.com/learning/ePresentations.asp
http://72.14.203.104/search?q=cache:C741L86Q19oJ:devnet.informatica.com/showcase/resources/Essbase_DataSheet.pdfinformaticapowerconnect(BI)&hl=en&gl=in&ct=clnk&cd=3
http://www.informatica.com/customers/utilities_energy/fpl_group.htm
http://www.informatica.com/solutions/resource_center/technote_sapbw_65241004.pdf#search=%22Informatica%20to%20Bw%22 -
Hi Experts,
How is the data stored in an InfoCube and a DSO? What happens in the back end?
I mean, a cube contains a fact table and dimension tables; how is the data stored, and what happens in the back end?
Regards,
Swetha.
Hi,
Please check :
How is data stored in DSO and Infocube
InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
The fact table and dimension tables are both relational database tables.
Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
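To make the dimension-ID linkage concrete, here is a minimal sketch of how the fact table and one dimension table of a cube could be joined. Everything here is a hypothetical placeholder (the cube name ZSALES, the generated table names, and all field names); it only illustrates the star-schema join described above, not any table from this thread:

```abap
* For a hypothetical InfoCube ZSALES the system generates, roughly:
*   /BIC/FZSALES  - fact table: dimension IDs (KEY_*) + key figures
*   /BIC/DZSALES1 - one dimension table: DIMID -> characteristic SIDs
TYPES: BEGIN OF ty_result,
         amount       TYPE p LENGTH 15 DECIMALS 2,  " key figure
         material_sid TYPE i,                       " SID of the characteristic
       END OF ty_result.
DATA lt_result TYPE STANDARD TABLE OF ty_result.

SELECT f~/bic/zamount  AS amount,
       d~sid_0material AS material_sid
  FROM /bic/fzsales AS f
  INNER JOIN /bic/dzsales1 AS d
    ON f~key_zsales1 = d~dimid        " the abstract dimension ID links the tables
  INTO TABLE @lt_result.
```

The fact table carries only dimension IDs and key figures; the dimension table translates each DIMID into the SIDs of its characteristics, which in turn point to the shared master data. Real queries go through the OLAP processor rather than hand-written SQL; this join is only for understanding the schema.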
http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
Check the threads below:
Re: about Star Schema
Differences between Star Schema and extended Star Schem
What is the difference between Fact tables F & E?
Invalid characters erros
-Vikram -
How the data is fetched from the cube for reporting
hi all,
I need to understand the below scenario:(as to how the data is fetched from the cube for reporting)
I have a query, on a multiprovider connected to cubes say A and B. A is on BIA index, B is not. There are no aggregates created on both the cubes.
CASE 1: I have taken RSRT stats with BIA on; in the aggregation layer it says:
Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
Cube B             | E          | 42.002653  | 1,669,126         | 6
Cube B             | F          | 98.696442  | 2,426,006         | 6
CASE 2: I have taken RSRT stats with the BIA index disabled; in the aggregation layer it says:
Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
Cube B             | E          | 46.620825  | 1,669,126         | 6
Cube B             | F          | 106.148337 | 2,426,030         | 6
Cube A             | E          | 61.939073  | 3,794,113         | 3,499
Cube A             | F          | 90.721171  | 4,293,420         | 5,584
Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the result output matches. There is no change in the number of records selected for cube A in either case; it is 8,087,502 in both.
Can someone please clarify this difference?
Hi Jay,
Thanks for sharing your analysis.
The only reason I can think of logically is that BWA holds the information from both the E and F tables in one place, and hence after selecting the records it can aggregate them and transport the aggregated records to OLAP.
In the second case, since the E and F tables are read separately, the aggregation might be happening at the OLAP level, and hence you see a larger number of records transported.
Our Experts in BWA forum might be able to answer in a better way, if you post this question over there.
Thanks,
Krishnan -
Report on the dates and times that software updates went out to particular machines
Hello all,
I need to know if there is a way to report on the dates and times that certain software updates were installed on certain machines. I see a report that could be it, but it does not show any results. The report I am talking about is under Software Update - Distribution Status. When I try to run any of those reports, I get "no matching records could be found", so I guess I have two questions. Yes, I know this is an old post; I'm just trying to clean them up.
There is no report that will show you when a software update was installed on a PC. At best you can use the last-change date, but that is not reliable, for several reasons. As for your second question, I'm not sure what report or category you are looking at; you will need to provide more details.
http://www.enhansoft.com/ -
I need a log report for the data which I am uploading from SAP R/3.
Hi All,
I am on BI 7.0 with Support Package 20.
I need a log report for the data which I am uploading from SAP R/3.
I extract the data from R/3 into a BI 7.0 DSO, where I map the G/L accounts to FS items. In the transformation I have written a routine on the FS Item InfoObject: I check the G/L code against a Z table for the FS item,
read the FS item from the Z table, and then update that FS item to the FS Item InfoObject.
Now I need to stop the data upload if I do not find the G/L code in the Z table, and generate a report of all G/L codes for which the FS item is not maintained in the Z table.
Please suggest.
Regards
nilesh
Hi.
Add a field that you will use to identify whether the G/L account of the record was found in the Z table or not. For example, create ZFOUND with length 1 and no text.
In your routine, when you do the lookup, populate ZFOUND with 'X' when you find a match (sy-subrc = 0) and leave it blank when you don't. Then create a report filtering on ZFOUND = <blank> and output the G/L accounts. Those will be the ones not existing in the Z table but coming in from your transactions.
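A minimal sketch of such a lookup as a field routine for ZFOUND follows. The Z table name ZGL_FSITEM and all field names here are hypothetical placeholders for whatever the actual implementation uses:

```abap
* Field routine for the target field ZFOUND (flag, length 1).
DATA lv_fsitem TYPE c LENGTH 10.            " assumed FS item length

SELECT SINGLE fsitem FROM zgl_fsitem        " hypothetical G/L -> FS item table
  INTO lv_fsitem
  WHERE glaccount = source_fields-glaccount.

IF sy-subrc = 0.
  result = 'X'.     " G/L code found in the Z table
ELSE.
  CLEAR result.     " blank -> not maintained; report on these records
ENDIF.
```

Filtering a query on ZFOUND = blank then lists exactly the G/L codes with no FS item maintained.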
Regards
Jacob -
Entire scenario: how the data is processed.
Hi,
I need the full scenario in detail. When the sender adapter picks the file from the source directory, how is the data passed to the Integration Server? How is it passed to the Adapter Engine, and how does the Adapter Engine process it? How is the data sent to the Adapter Framework, what steps does the Adapter Framework perform, and at which step is the audit log maintained? How are messaging, logging, and queuing done in the AFW? After processing in the Adapter Engine, how is the data passed to the Integration Engine, how do the pipeline steps execute, and how is the data transferred to the receiver?
In short: all the steps performed while sending data from the sender system to the receiving system, how the data is processed internally, where the audit log is maintained, etc.
Hi,
Please see the links below; they should help a lot:
http://help.sap.com/saphelp_nw2004s/helpdata/en/fd/16e140a786702ae10000000a155106/content.htm
/people/siva.maranani/blog/2005/05/25/understanding-message-flow-in-xi
http://help.sap.com/saphelp_nw2004s/helpdata/en/6a/a12241c20af16fe10000000a1550b0/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/e4/6019419efeef6fe10000000a1550b0/content.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/327dc490-0201-0010-d49e-e10f3e6cd3d8
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/34a1e590-0201-0010-2c82-9b6229cf4a41
Regards
Chilla -
How is the data entered in the customized table?
Hi,
In an implementation scenario, when we create a generic extraction, how is the data entered
into the customized table if it is a huge amount of data (around 5,000 records)?
Regards,
Vivek
Hi Vivek,
Follow the steps below:
1. Go to RSO2 and choose the DataSource type from the three below:
a) Transaction data
b) Master data attributes
c) Master data texts
2. Specify the application component (SD/MM, ...).
3. There are three extraction methods for filling the DataSource.
4. The table/view extraction method extracts the data from a transparent table or database view.
5. If you select extraction from a view, you first have to create the view:
a) Specify the view name.
b) Choose the view type (database view) from the types below:
i) Database view
ii) Projection view
iii) Maintenance view
iv) Help view
6. Specify the tables and join conditions, and define the view fields.
7. Assign the view to the DataSource.
8. Once you specify the view in the DataSource, the extract structure is generated.
9. You can check the data in RSA3.
Regards,
Suman -
How does data extraction happen from HR DataSources from R/3 to the BW system?
Hello All,
How does data extraction happen from HR DataSources from R/3 to the BW system? In the case of delta records (for CATS DataSources), is there a flow like LO?
For full and delta loads, how is the data taken from R/3 to BW? Do we need to fill setup tables?
I searched the forum but couldn't find anything relevant.
Thank you
Shankar
Hi Shankar.
HR DataSources do not have setup tables. However, before implementation certain customizations should be done, and the delta loads depend on other DataSources. You must also have implemented Support Package SAPKH46C32, or have made the relevant corrections from SAP Note 509592.
Follow this link for details on customization and dependencies for all CATS datasources.
http://help.sap.com/saphelp_nw70/helpdata/en/86/1f5f3c0fdea575e10000000a114084/frameset.htm
Regards,
Swati -
How does the data get replicated from CRM to IS-U?
Hello All,
How does the data get replicated from CRM to IS-U?
I would appreciate documents sent to [email protected]
Regards,
Remi
Here is the link!
http://help.sap.com/saphelp_crm50/helpdata/en/c8/b0a68afbb3624cbabeb5ea12a8c639/frameset.htm
Cheer,
Daniel
http://sapro.blogspot.com -
Hello
I am currently on SCOM 2007 R2 CU6 with Windows Server Operating System MP version 6.0.6989.0. (I cannot use the latest version of the MP, as we still have some Windows 2000 servers we need to support; yes, I know :( )
Anyway, the issue is that I have never found the logical disk performance counter data from SCOM very reliable.
For example, I have a Windows 2008 R2 server, and I am looking at a local logical disk (which holds a SQL tempdb on a busy SQL Server) via the following performance counter:
The SCOM collection rule is called "Collection Rule for Average Disk Seconds per Transfer"
The actual Windows Perfmon counter is called "Avg. Disk Bytes/Transfer"
The description of the above Perfmon counter reads:
"Avg. Disk Bytes/Transfer is the average number of bytes transferred to or from the disk during write or read operations."
The problem I have is as follows:
The resulting SCOM performance chart over several days (with a scale of 1x) shows the value never reaching 3 (the maximum was, say, 2.7). I cannot believe that a drive holding the tempdb databases
for a busy SQL Server does not transfer more than 2.7 "bytes" of data at a given time to its tempdb databases!
Indeed, when I look at Perfmon on the server and watch this counter for, say, 20 minutes or so, the figure is often 10,000 or 30,000 bytes, etc. It does fall back to 0 (zero) momentarily, but mostly it is in the thousands or tens of thousands.
Therefore, when my boss asks to see "Avg. Disk Bytes/Transfer" and SCOM says it has not exceeded 2.7 over the last business week (i.e., the chart never peaks above this value at scale 1x), he naturally does not believe it!
Any advice regarding the above, please? Could it be that if the counter ever falls to zero it messes up the SCOM report charts?
Thanks
AAnotherUser
Create your own collection rule to mirror the sample times and whatnot, then compare the data from your rule with the MP default rule. It probably has to do with the chart scale, IMHO.
Regards, Blake Email: mengotto<at>hotmail.com Blog: http://discussitnow.wordpress.com/ If my response was helpful, please mark it as so, if it answered your question, then please also mark it accordingly. Thank you. -
What are Custom Dimensions in FI? How do the fields defined under them get stored?
Hi All,
I need to add 3 custom fields to standard FI screens.
I came to know about the Custom Dimensions concept in FI, and I added the fields to the standard screen. They are reflected in the standard tables, but not in the BAPI structures or IDocs.
What I need to know is how they get stored in the standard tables when the document is saved.
Is a BAPI or function module called for this?
Scenario: these fields will come into SAP ECC from a 3rd party via an interface.
So in the BAPI/IDoc I'll have to handle those fields manually. Instead, I was looking for a BAPI/FM that handles custom dimension fields.
Please let me know if the problem is not clear.
Regards,
A Tewar.
Maybe you are looking for
-
My ipod was stolen and i need to fin the serial code. how do i do that?
my ipod was stolen by some kid and i want to track it. and i need my serial code. how can i find it
-
I recently had trouble updating my itunes, when I did do it I kept getting an error message and because of that I had to uninstall everything than reinstall itunes. But since then my iphone, ipad, and ipod are not really syncing correctly. My PC sa
-
Random characters after computer name
Hello, We have a new issue with our agent deployment since we use MDT to deploy our workstations. When computer register to ZCM some random characters appear after computer name, for example: Computer name: corporate001 ZCM computer name: corporate00
-
How to make muse edit original file in photoshop?
On my mac, it always opens the jpg in preview. Guess I could always pre-open the jpg and save as a psd... that is a workaround that would just require more time however. In that case I could simply open it in ps and work on it, then update in muse.
-
I am using the 5D Mark II, I shoot around 100 shots them I import them into LR 2.7. After import the thumbnails initially look fine, once they start to render usually a few pictures throughtout will look corrupt, pink in color, smeared look...etc.