How does data extraction happen from HR DataSources from R/3 to the BW system?

Hello All
How does data extraction happen from HR DataSources from R/3 to the BW system? In the case of delta records (for the CATS DataSources), is there a flow like LO?
In the case of full and delta loads, how is the data taken from R/3 to BW? Do we need to fill setup tables?
I searched the forum but could not find a relevant thread.
Thank you,
Shankar

Hi Shankar.
HR DataSources do not have setup tables. However, certain customizing must be done before implementation, and the delta loads have dependencies on other DataSources. You must also have implemented Support Package SAPKH46C32, or have made the relevant corrections from SAP Note 509592.
Follow this link for details on the customizing and dependencies for all CATS DataSources:
http://help.sap.com/saphelp_nw70/helpdata/en/86/1f5f3c0fdea575e10000000a114084/frameset.htm
Regards,
Swati

Similar Messages

  • How to identify whether the data extracted is direct, queued, unserialized

    hi,
    How can I identify whether the data extraction from R/3 uses direct, queued, or unserialized delta?
    Can anyone let me know about it?
    Regards,
    Hari

    Hi,
    Direct Delta: With this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. In doing so, each document posting with delta extraction is posted for exactly one LUW in the respective BW delta queues.
    This update method is recommended for the following general criteria:
    a) A maximum of 10,000 document changes (creating, changing or deleting documents) are accrued between two delta extractions for the application in question. A (considerably) larger number of LUWs in the BW delta queue can result in terminations during extraction.
    b) With a future delta initialization, you can ensure that no documents are posted from the start of the recompilation run in R/3 until all delta-init requests have been successfully posted. This applies particularly if, for example, you want to include more organizational units such as another plant or sales organization in the extraction. Stopping the posting of documents always applies to the entire client.
    Queued Delta: With this update mode, the extraction data for the affected application is collected in an extraction queue (instead of in the update data, as with V3) and can be transferred, as usual, by means of an updating collective run into the BW delta queue. In doing so, up to 10,000 document changes for one LUW are compressed per DataSource into the BW delta queue, depending on the application.
    This update method is recommended for the following general criteria:
    a) More than 10,000 document changes (creating, changing, or deleting documents) are performed each day for the application in question.
    b) In a future delta initialization, you want to reduce the posting-free phase to the duration of the recompilation run in R/3; document postings can be included again when the delta-init requests are posted in BW. Of course, the conditions described above for the update collective run must be taken into account.
    Non-serialized V3 Update: With this update mode, the extraction data for the application in question is written, as before, into the update tables with the help of a V3 update module, and is kept there until it is selected and processed by an updating collective run. However, in contrast to the current default (serialized V3 update), the data in the updating collective run is read from the update tables without regard to sequence and transferred to the BW delta queue.
    This update method is recommended for the following general criteria:
    a) Due to the design of the data targets in BW and for the particular application in question, it is irrelevant whether or not the extraction data is transferred to BW in exactly the same sequence in which the data was generated in R/3.
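    If it helps as a mental model, the schematic below caricatures the difference between direct and queued delta using two internal tables. This is not the actual LO cockpit code, and all names are invented: gt_delta_queue stands in for the BW delta queue you see in RSA7, gt_extraction_queue for the extraction queue in LBWQ.
      " Schematic only - invented structures and names.
      TYPES: BEGIN OF ty_change,
               docno TYPE i,
               qty   TYPE i,
             END OF ty_change.
      DATA: gt_extraction_queue TYPE STANDARD TABLE OF ty_change,
            gt_delta_queue      TYPE STANDARD TABLE OF ty_change,
            ls_change           TYPE ty_change.

      " Direct delta: each document posting writes straight into the delta
      " queue, one LUW per posting.
      ls_change-docno = 4711. ls_change-qty = 10.
      APPEND ls_change TO gt_delta_queue.

      " Queued delta: postings are first collected in the extraction queue ...
      ls_change-docno = 4712. ls_change-qty = 5.
      APPEND ls_change TO gt_extraction_queue.

      " ... and the collective run later moves them, packed into LUWs of up
      " to 10,000 records, into the delta queue.
      APPEND LINES OF gt_extraction_queue TO gt_delta_queue.
      CLEAR gt_extraction_queue.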
    Take a look at Roberto's weblog series:
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    https://weblogs.sdn.sap.com/pub/wlg/126 (original link is broken)
    Documentation:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    and OSS Note 505700.
    Re: delta methods
    Go through the previous thread:
    Delta types
    Hope it helps.

  • How the data is fetched from the cube for reporting

    hi all,
    I need to understand the scenario below (how the data is fetched from the cube for reporting).
    I have a query on a MultiProvider connected to cubes A and B. Cube A has a BIA index, cube B does not. No aggregates have been created on either cube.
    CASE 1: I ran the RSRT statistics with BIA on; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: I ran the RSRT statistics with the BIA index disabled; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the query output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the number of records transported?

    Hi Jay,
    Thanks for sharing your analysis.
    The only reason I can think of logically is that BWA has the information from both the E and F tables in one place, and hence after selecting the records it can aggregate them and transport the result to OLAP.
    In the second case, since the E and F tables are read separately, the aggregation probably happens in OLAP, and hence you see a larger number of records transported.
    The experts in the BWA forum might be able to answer this better if you post the question there.
    Thanks,
    Krishnan

  • How can I print the date and time in a photo from iPhoto

    How can I print the date and time in a photo from iPhoto

    You want to print them on their own? That can't be done. With the photo? Install this:
    http://www.iborderfx.com/iborderfx/

  • How the data is stored in Info cube...in the back end what will happen???

    Hi Experts,
    How is the data stored in an InfoCube and a DSO, and what happens in the back end?
    I mean, a cube contains a fact table and dimension tables; how is the data stored there, and what happens in the back end?
    Regards,
    Swetha.

    Hi,
    Please check:
    How is data stored in DSO and Infocube
    InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
    An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
    An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
    The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
    Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
    The fact table and dimension tables are both relational database tables.
    Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
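    To make the fact/dimension link concrete, here is a minimal ABAP sketch of the kind of join this star schema implies. All table and field names (/BIC/FSALESCUBE, /BIC/DSALESCUBE1, KEY_SALESCUBE1, SID_0CUSTOMER) are invented for illustration; in a real system the names are generated from your InfoCube, and the OLAP processor builds such reads for you.
      " Invented names following the usual /BIC/F* (fact table) and
      " /BIC/D* (dimension table) pattern of an InfoCube.
      TYPES: BEGIN OF ty_row,
               dimid  TYPE i,             " dimension ID linking fact and dimension
               sid    TYPE i,             " SID pointing to the customer master data
               amount TYPE p DECIMALS 2,  " key figure from the fact table
             END OF ty_row.
      DATA lt_rows TYPE STANDARD TABLE OF ty_row.

      " Fact row -> dimension row via the DIMID stored in the fact table key;
      " the dimension row in turn holds the SIDs of its characteristics.
      SELECT d~dimid d~sid_0customer f~amount
        FROM /bic/fsalescube AS f
        INNER JOIN /bic/dsalescube1 AS d
          ON f~key_salescube1 = d~dimid
        INTO TABLE lt_rows.
    Because the fact table stores only DIMIDs and key figures, it stays narrow even for millions of rows, while the small dimension tables carry the characteristic SIDs; that is exactly the performance argument made above.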
    http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
    Check the threads below:
    Re: about Star Schema
    Differences between Star Schema and extended Star Schem
    What is the difference between Fact tables F & E?
    Invalid characters erros
    -Vikram

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the scenario below (how the data is fetched from the cube for reporting).
    I have a query on a MultiProvider connected to cubes A and B. Cube A has a BIA index, cube B does not. No aggregates have been created on either cube.
    CASE 1: I ran the RSRT statistics with BIA on; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube A             | (blank)    | 0.624305   | 8,087,502         | 2,011
    Cube B             | E          | 42.002653  | 1,669,126         | 6
    Cube B             | F          | 98.696442  | 2,426,006         | 6
    CASE 2: I ran the RSRT statistics with the BIA index disabled; the aggregation layer shows:
    Basic InfoProvider | Table type | Viewed at  | Records, Selected | Records, Transported
    Cube B             | E          | 46.620825  | 1,669,126         | 6
    Cube B             | F          | 106.148337 | 2,426,030         | 6
    Cube A             | E          | 61.939073  | 3,794,113         | 3,499
    Cube A             | F          | 90.721171  | 4,293,420         | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria are the same in both cases and the query output matches. There is no change in the number of records selected for cube A; it is 8,087,502 in both cases.
    Can someone please clarify this difference in the number of records transported?

    Hi,
    Yes, Vitaliy's guess could well be right. Please check whether FEMS compression is enabled (note 1308274).
    To get more details about the selection, you can activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • How the data gets replicated from CRM to ISU

    Hello All,
    How does the data get replicated from CRM to IS-U?
    I would appreciate any documents sent to [email protected]
    Regards,
    Remi

    Here is the link!
    http://help.sap.com/saphelp_crm50/helpdata/en/c8/b0a68afbb3624cbabeb5ea12a8c639/frameset.htm
    Cheers,
    Daniel
    http://sapro.blogspot.com

  • My mobile was stolen and I want to delete the data on it without deleting it from iCloud, how can I do that?

    My mobile was stolen and I want to delete the data on it without deleting it from iCloud, how can I do that?

    Welcome to the Apple Community.
    You can erase it with Find My iPhone if you had enabled it; this doesn't wipe the data in your iCloud account.

  • How the data is entered  in the customized table

    Hi,
    In an implementation scenario, when we create a generic extraction, how does the data get into the customized table if the volume is large (around 5,000 records)?
    Regards,
    Vivek

    Hi Vivek,
    Follow the steps below:
    1. Go to RSO2 and choose the DataSource type from the three options:
       a) Transaction data
       b) Master data attributes
       c) Master data texts
    2. Specify the application component (SD/MM, ...).
    3. There are three extraction methods to fill the DataSource.
    4. Select the extraction method that extracts the data from a transparent table or database view.
    5. If you select extraction from a view, you have to create the view first:
       a) Specify the view name.
       b) Choose the view type (here, a database view) from the view types below:
          i) Database view
          ii) Projection view
          iii) Maintenance view
          iv) Help view
    6. Specify the tables and join conditions and define the view fields.
    7. Assign the view to the DataSource.
    8. Once you specify the view in the DataSource, the extract structure is generated.
    9. You can check the data in RSA3; a rough plausibility check on the view itself is sketched below.
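    As a rough illustration of step 9, the snippet below just counts what the view would deliver before you verify the extractor itself in RSA3. The view name ZV_SALES_EXTR is invented for this example; around 5,000 records is not a problem for a generic DataSource.
      " Hypothetical quick check on the database view behind the generic
      " DataSource (ZV_SALES_EXTR is a made-up name).
      DATA lv_count TYPE i.

      SELECT COUNT( * ) INTO lv_count FROM zv_sales_extr.
      WRITE: / 'Records the generic DataSource would deliver:', lv_count.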
    Regards,
    Suman

  • 'Error 8 occurred when starting the data extraction program'

    Hello Experts,
    I am trying to pull master data (full upload) for an attribute. I am getting an error on the BW side, namely 'The error occurred in Service API'.
    So I checked the source system and found that an IDoc processing failure had occurred. The failure shows 'Error 8 occurred when starting the data extraction program'.
    But when I check the extractor through RSA3, it looks fine.
    Can someone tell me what might be the reason for the IDoc processing failure and how it can be avoided in future? The same problem kept occurring later as well.
    Thanks
    Regards,
    KP

    Hi,
    Check in SM58 of the source system whether the IDocs are being processed correctly into your target (BI) system.
    In SM58, enter * (all users) and your BI system as the target destination, execute, and check whether any entries are still pending there.
    Regards,
    Edited by: Krishna Rao on May 6, 2009 3:22 PM

  • Invalid data status error during the data extraction

    Hi,
    While extracting capacity data from the SNP capacity view to BW, I get an "invalid data status" error and the data extraction fails.
    When I debugged the bad requests of the ODS object, I found that for a certain product (which has both positive and negative input and output quantities) co-product manufacturing orders had been created, but this product is not marked as a co-product; functionally it is fine.
    How can I rectify the data extraction problem? Can you advise?
    Thanks,
    Dhanush

    Sir,
    In my company, some production orders have the status "errors in cost calculation", i.e. "CSER". How should these kinds of errors be dealt with?

  • How the Load balancing happens in CPO

    Hi All,
    On what basis does the process engine select the process or request, and how does the load balancing happen?

    Hi!
    I am a little confused by the question (as it refers to a "request"), but I am going to assume that you are asking how a High Availability Process Orchestrator environment with several servers chooses which processes run on which server.
    The answer to that question is...
    In general, processes to be executed are split equally between all servers. The only piece of data taken into account during process instance assignment is the current load on the servers (counted as the number of top-level processes, not counting child processes). For example, suppose there are 3 servers in the environment, server A is running 5 top-level processes, and servers B and C are each running 3 top-level processes. When a new process is started (e.g. on a schedule, manually, or triggered via an external event), it will be assigned to either server B or server C for execution, because servers B and C have less load. If, under the same circumstances (A:5, B:3, C:3), 4 processes are started at the same time, the existing work of 5+3+3=11 top-level processes plus the 4 new ones is distributed as evenly as possible, with servers B and C each getting 2 new processes.
    This is the general load balancing algorithm used by the servers in an HA environment to decide which server runs which process instance.
    There are other factors that come into play, as some processes/activities can only run on server A or server B due to technical limitations (e.g. SAP work against a particular SAP system can only be executed from one server in the environment). When those come into play, the work may end up distributed unevenly.
    Note that available memory, CPU load, or disk space on servers are not directly taken into account during load distribution.
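    As a small ABAP-style sketch of that distribution rule (not actual CPO code; the server list, names, and load counts below are made up), each new top-level process simply goes to whichever server currently runs the fewest top-level processes:
      " Illustration only: model the servers and their current load, then
      " assign 4 new top-level processes one by one to the least-loaded server.
      TYPES: BEGIN OF ty_server,
               name TYPE string,
               load TYPE i,      " number of running top-level processes
             END OF ty_server.
      DATA: lt_servers TYPE STANDARD TABLE OF ty_server,
            ls_server  TYPE ty_server,
            lv_new     TYPE i VALUE 4.

      ls_server-name = 'A'. ls_server-load = 5. APPEND ls_server TO lt_servers.
      ls_server-name = 'B'. ls_server-load = 3. APPEND ls_server TO lt_servers.
      ls_server-name = 'C'. ls_server-load = 3. APPEND ls_server TO lt_servers.

      DO lv_new TIMES.
        SORT lt_servers BY load ASCENDING.           " least-loaded server first
        READ TABLE lt_servers INTO ls_server INDEX 1.
        ls_server-load = ls_server-load + 1.         " the new process goes there
        MODIFY lt_servers FROM ls_server INDEX 1.
      ENDDO.
      " With starting loads A:5, B:3, C:3 and 4 new processes, the loop ends
      " with A:5, B:5, C:5 - servers B and C each took 2 new processes.
    The sketch deliberately ignores the exceptions mentioned above (processes pinned to one server, and the fact that memory, CPU, and disk are not considered).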

  • Entire scenario: how the data is being processed

    Hi,
    I need the full scenario in detail: when the sender adapter picks up the file from the source directory, how is the data passed to the Integration Server? How is it passed to the Adapter Engine, how does the Adapter Engine process it, and how is it handed to the Adapter Framework? Which steps does the Adapter Framework perform, at which step is the audit log maintained, and how are messaging, logging, and queuing done in the AFW? After the Adapter Engine has processed the data, how is it passed to the Integration Engine, how are the pipeline steps executed, and how is the data transferred to the receiver?
    In short: all the steps performed while sending data from the sender system to the receiver, how the data is processed internally, where the audit log is maintained, and so on.

    Hi,
    Please see the links below; they should help you a lot:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/fd/16e140a786702ae10000000a155106/content.htm
    /people/siva.maranani/blog/2005/05/25/understanding-message-flow-in-xi
    http://help.sap.com/saphelp_nw2004s/helpdata/en/6a/a12241c20af16fe10000000a1550b0/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e4/6019419efeef6fe10000000a1550b0/content.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/327dc490-0201-0010-d49e-e10f3e6cd3d8
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/34a1e590-0201-0010-2c82-9b6229cf4a41
    Regards
    Chilla

  • What is RT , BT in HR Reporting ? How the Data is Populated into RT, BT ?

    Hi
    I am debugging an HR report (it uses a logical database). In that report, 'RT' is used.
    What is the meaning of 'RT', and how is the data populated into RT?
    What is the meaning of 'BT', and how is the data populated into BT?
    Kindly clarify my doubts.
    Regards,
    N.L.

    Hi NL,
    1. These are related to payroll results.
    2. Whenever salary is processed, a vast and varied amount of information needs to be stored.
    3. Hence, SAP uses the concept of cluster tables.
    4. When salary is processed, wage types, amounts, etc. are generated, i.e. RESULTS are generated; the table name is RT (Results Table).
    5. In the same way, bank transfer information (bank code, name, amount, etc.) also needs to be stored; its table name is BT (Bank Transfers).
    Similarly there are other tables as well, e.g. WPBP.
    6. Payroll data can be retrieved using the standard macros and also using function modules.
    7. Below is the technique:
      " Example: read the payroll results for one employee and pick a wage
      " type from the RT (results) table. mypernr must be filled with the
      " personnel number; 'IN' is the cluster ID for the India country
      " version - use the cluster ID of your own country version.
      DATA: mypernr LIKE hrpy_rgdir-pernr,
            myseqnr LIKE hrpy_rgdir-seqnr,
            mypy    TYPE payin_result,
            myrt    LIKE TABLE OF pc207 WITH HEADER LINE.

      " Sequence number of the active ('A') result for period 09.2004
      SELECT SINGLE seqnr FROM hrpy_rgdir
        INTO myseqnr
        WHERE pernr = mypernr
          AND fpper = '200409'
          AND srtza = 'A'.

      IF sy-subrc = 0.
        " Import the payroll result from the cluster
        CALL FUNCTION 'PYXX_READ_PAYROLL_RESULT'
          EXPORTING
            clusterid                    = 'IN'
            employeenumber               = mypernr
            sequencenumber               = myseqnr
          CHANGING
            payroll_result               = mypy
          EXCEPTIONS
            illegal_isocode_or_clusterid = 1
            error_generating_import      = 2
            import_mismatch_error        = 3
            subpool_dir_full             = 4
            no_read_authority            = 5
            no_record_found              = 6
            versions_do_not_match        = 7
            error_reading_archive        = 8
            error_reading_relid          = 9
            OTHERS                       = 10.

        IF sy-subrc = 0.
          myrt[] = mypy-inter-rt.                   " RT: results table
          READ TABLE myrt WITH KEY lgart = '1899'.  " look up one wage type
          IF sy-subrc = 0.
            " entl and cumul are assumed to be declared in the calling report
            entl-cumbal = myrt-betrg.
            MODIFY entl.
            cumul = entl-cumbal.
          ENDIF.
        ENDIF.
      ENDIF.
    Regards,
    Amit M.

  • Standard SAP program name for the data extraction

    Please tell me the standard SAP program for data extraction for material, vendor, and customer.

    You might want to explore transaction SXDA.
