Retroactive postings on HR extracted to BI

Hi guys, we are implementing BI HR. We have a problem that all the content cubes only bring through data posted against one time dimension (the for-period); in our case we need both the for-period and the in-period. The 0PY_PPC01 DataSource has the ability to extract both time dimensions, but it only collects data for certain wage types. Please advise which content cubes to use in order to extract retro postings.
Thank you
Rochelle

Our issue was solved by some information that we received from SAP.
This response was provided by Thomas Berndt of SAP:
The problem here is an authorization issue: in order to extract
project versions flagged as "Global" (field VERSION_USAGE = 1), the
user trying to read those versions needs to be given the
authorization CPRO_VSHDR. This works fine for user SAPUPDATE, for
instance, but user ALEREMOTE, which does the extraction, is not
sufficiently authorized for this; at least read authorization needs
to be given.
This might not be straightforward, but it is according to the design of
the versions: unless explicitly authorized, these global versions are
not accessible.
More details on authorizations for versions can be found in the
configuration guide I attached, on page 87.

Similar Messages

  • Reinit steps

    Hi Experts,
    When I do a reinit in LBWE, what update mode should I select for job control before the initialisation?
    And also, after the init and before running the delta, what update method should I select?
    Thanks
    DV

    Hi DV
    You don't need to change update modes at every stage. Please select the appropriate update mode before you load the delta postings.
    Read following
    This is where you specify the type of update used to transfer delta postings:
    a) Serialized V3 Update
    This is the conventional update method, in which the document data is collected in the sequence of attachment and transferred to BW by a batch job.
    The sequence of the transfer does not always match the sequence in which the data was created.
    b) Direct Delta
    This is where the extraction data is transferred directly from the document postings to the BW delta queue.
    The data is transferred in the order in which it was created.
    c) Queued Delta
    In this mode, extraction data is collected from document postings in an extraction queue from which the data is transferred into the BW delta queue using a periodic collective run.
    The transfer sequence is the same as the sequence in which the data was created.
    d) Unserialized V3 Update
    This method is largely identical to the serialized V3 update. The difference lies in the fact that the sequence of document data in the BW delta queue does not have to agree with the posting sequence. We therefore recommend this method only when the sequence in which data is transferred into BW does not matter (due to the design of the data targets in BW).
    The most commonly used update mode is queued delta, thanks to its advantages.
    Regards
    Pradip

  • Difference Between Update Queue and Delta Queue

    Hi Experts ,
    Could you briefly explain the difference between the extraction queue, the update queue, and the delta queue?

    Hi
    a) Serialized V3 Update (if you set this as your delta mode, the data is first collected in the update tables, i.e. the update queue, visible in SM13)
    This is the conventional update method, in which the document data is collected in the sequence of attachment and transferred to BW by a batch job. The sequence of the transfer does not always match the sequence in which the data was created.
    b) Queued Delta (if you set this as your delta mode, the data is populated into the extraction queue, visible in LBWQ)
    In this mode, extraction data is collected from document postings in an extraction queue, from which the data is transferred into the BW delta queue using a periodic collective run. The transfer sequence is the same as the sequence in which the data was created.
    The delta queue (RSA7) is a table that collects data from the extraction queue/update tables based on the frequency you have set (daily/hourly etc.). From the delta queue, the InfoPackage delta run fetches the delta records into BW.
    Hope it helps
    Thanks
    Teja

  • Postings on R/3

    Dear gurus, I have a user who has done some sample postings with only an order number (0COORDER) and no actual or plan costs. T-code IW33 shows this order number. How will I know which tables are getting populated by these postings? The DataSource which is supposed to pick this up is 0CO_OM_OPA_6, which has the extract structure ICORDCSTA1. I just want to see why this order number is not getting picked up by this extractor. Urgent help is appreciated.

    Use
    This DataSource provides the actual costs and actual quantities posted on the internal order.
    Source: http://help.sap.com/saphelp_nw70/helpdata/en/86/b35a259a03fe42b29d1437a7fba1c6/frameset.htm
    If you want the plan record, try RSA3 for 0CO_OM_OPA_1 and see if that record shows up.

  • Generic Extraction (or) How to use table in R/3 system as datasource in BW?

    Hi all,
    Hope all are having a great day.
    Can anyone tell me the steps for generic extraction?
    It will be very helpful if the steps are as simple as possible.
    I know how to create a simple cube in BW.
    I have this much knowledge; depending on this, can anyone tell me the steps for generic extraction from R/3?
    Examples would be very useful.
    Regards,
    Sourav

    hi,
    Maintaining Generic DataSources 
    Use
    Independently of application, you can create and maintain generic DataSources for transaction data, master data attributes or texts from any kinds of transparent tables, database views, InfoSets of the SAP query or using a function module. As a result, you can make use of the generic extraction of data.
    Procedure
    Creating a Generic DataSource (RSO2)
    1. Select the DataSource type and give it a technical name.
    2. Choose Create.
    The screen for creating a generic DataSource appears.
    3. Choose an application component to which the DataSource is to be assigned.
    4. Enter the descriptive texts. You can choose any text.
    5. Choose from which datasets the generic DataSource is to be filled.
    a. Choose Extraction from View if you want to extract data from a transparent table or a database view. Enter the name of the table or the database view.
    After generation, you get a DataSource whose extract structure is congruent with the database view or the transparent table.
    For more information about creating and maintaining database views and tables, see the ABAP Dictionary documentation.
    b. Choose Extraction from Query if you want to use an SAP Query InfoSet as the data source. Select the required InfoSet from the InfoSet catalog.
    Notes on Extraction Using SAP Query
    After generation, you now have a DataSource whose extract structure matches the InfoSet.
    For more information about maintaining the InfoSet, see the System Administration documentation.
    c. Choose Extraction using FM if you want to extract data using a function module. Enter the function module and extract structure.
    The data must be transferred by the function module in an interface table E_T_DATA (a sketch of such a function module follows this procedure).
    For the interface description and extraction process flow, and for information about the function library, see the ABAP Workbench: Tools documentation.
    d. With texts, you also have the option of extraction from domain fixed values.
    6. Maintain the settings for delta transfer where appropriate.
    7. Choose Save.
    When extracting with SAP Query, note the assignment to a user group (see SAP Query: Assigning to a User Group).
    Note when extracting from a transparent table or view: if the extract structure contains a key figure field that references a unit of measure or currency unit field, this unit field must appear in the same extract structure as the key figure field.
    A screen appears in which you can edit the fields of the extract structure.
    8. Edit the DataSource:
            Selection
    When scheduling a data request in the BW Scheduler, you can enter the selection criteria for the data transfer. For example, you may want to determine that data requests are only to apply to data from the previous month.
    If you set the Selection indicator for a field within the extract structure, the data for this field is transferred in correspondence with the selection criteria in the scheduler.
            Hide field
    You should set this indicator to exclude an extract structure field from the data transfer. As a result of your action, the field is no longer made available in BW when setting the transfer rules and generating the transfer structure.
            Inversion
    Reverse postings are possible for customer-defined key figures. Inversion is therefore only possible for certain transaction data DataSources: those that have a field indicated as an inversion field, for example the field Update Mode in the DataSource 0FI_AP_3. If this field has a value, the data records are interpreted as reverse records in BW.
    Set the Inversion indicator if you want to carry out a reverse posting for a customer-defined field (key figure). The value of the key figure is then transferred in inverted form (multiplied by –1) into BW.
            Field only known in exit
    You can enhance data by extending the extract structure for a DataSource using fields in append structures.
    The indicator Field only known in Exit is set for fields of an append structure. In other words, by default these fields are not passed onto the extractor from the field list and selection table.
    Deselect the indicator Field Only Known in Exit to enable the Service API to pass on the append structure field to the extractor together with the fields of the delivered extract structures in the field list as well as in the selection table.
    9. Choose DataSource → Generate.
    The DataSource is now saved in the source system.
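    For extraction using a function module (option 5c), the delivered template RSAX_BIW_GET_DATA_SIMPLE shows the expected interface. Below is a minimal sketch along those lines; it is only an illustration, assuming a Z-copy of the template, SFLIGHT as a stand-in for the source table and extract structure, and with selection handling (I_T_SELECT/I_T_FIELDS) omitted for brevity:

    FUNCTION z_biw_get_data_simple.
      " Sketch modeled on the SAP template RSAX_BIW_GET_DATA_SIMPLE.
      " Interface (as in the template): IMPORTING i_requnr, i_maxsize,
      " i_initflag; TABLES i_t_select, i_t_fields,
      " e_t_data STRUCTURE sflight; EXCEPTIONS no_more_data,
      " error_passed_to_mess_handler.

      STATICS: s_cursor TYPE cursor,
               s_first(1) TYPE c VALUE 'X'.

      IF s_first = 'X'.
        " First call: open a database cursor on the source table.
        " (The real template distinguishes the initialization call via
        " i_initflag; this sketch simplifies.)
        s_first = ' '.
        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT * FROM sflight.
      ENDIF.

      " Each call returns one data package of at most i_maxsize rows
      " in the interface table e_t_data, until the cursor is exhausted.
      FETCH NEXT CURSOR s_cursor
        INTO CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
      IF sy-subrc <> 0.
        CLOSE CURSOR s_cursor.
        RAISE no_more_data.
      ENDIF.

    ENDFUNCTION.

    The Service API calls this function module repeatedly; raising NO_MORE_DATA tells it the request is complete.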
    Maintaining Generic DataSources
    •        Change the DataSource
    To change a generic DataSource, in the initial screen of DataSource maintenance, enter the name of the DataSource and choose Change.
    You can change the assignment of a DataSource to an application component as well as the texts of a DataSource. Double-clicking on the name of the table, view, InfoSet or extract structure takes you to the appropriate maintenance screen. Here you can make changes required to add new fields. You can fully swap transparent tables and database views, but not InfoSets. If you return to the DataSource maintenance and choose Create, the screen for editing a DataSource appears. To save the DataSource in the SAP source system, choose DataSource → Generate.
    If you want to test extraction in the source system independently of a BW system, choose DataSource → Test Extraction.
    •        Delete DataSource
    In the Change Generic DataSource screen, you can delete any DataSources that are no longer relevant. If you are extracting data from an InfoSet, also delete the associated query. If you want to delete a DataSource, it must not be connected to a BW system.
    For more information about extracting using SAP Query, see Extraction using the SAP Query.
    Hope this helps.
    Assign points if so.
    partha

  • E-Recruiting - Internal Candidates not displayed in candidate search, job postings not visible for internal candidates in job search

    Hi Friends,
    We are on the EREC standalone model. The initial data transfer of master data from HR to EREC using PFAL is done.
    All employees have got NA, CP, US, BP relationships in HRP1001 in the EREC system.
    Change pointers are also activated in the HCM system, and current master data changes are now reflecting in the EREC system through IDoc posting.
    But when a recruiter logs into the portal and does a candidate search, no internal candidates appear.
    1) What needs to be done for internal candidates to be visible during candidate search?
    2) Also, while logged in as an internal candidate, released requisitions are not visible when internal candidates search for internal job postings. Why?
    Kindly provide your inputs.
    Regards,
    ER.

    Hi,
    In SLG1, I am getting the below error messages multiple times:
    "Error while calling content extraction class CL_HRRCF_CEC_QUALI_WITH_PROFCY
    The error occurred in program CL_HRRCF_SES_BUSOBJ_FROM_SPTYPCM001 line 96
    Qualification 52000001 does not exist
    The incorrect HR object has the key 01NA60000029 "
    "The error occurred in program CL_HRRCF_ALE_EE_INBOUND"
    Also, please state how to differentiate the internal and external candidate search pages.
    Regards,
    ER.

  • How to extract the historical data from R/3

    hi
    I am extracting data from R/3 through LO extraction. The client asked me to enhance the DataSource by adding a field. I have enhanced the extract structure and written an exit to populate the data for that field (a sketch of such an exit follows this thread).
    How do I extract the historical data into BI for the enhanced field? A delta load is already running in BI.
    regards

    Hi Satish,
    As per SAP standard practice, the best way is to delete the whole data from the cube and then reload it from the setup tables, since you have enhanced the DataSource.
    After a DataSource enhancement you can continue loading normally, but then you don't get any historical data for the new field.
    The best way is to take downtime from the users; normally we do this on weekends/non-business hours.
    Then fill the setup tables; if the data is of huge volume you can adopt a parallel mechanism, for example:
    1. Fill the setup tables year by year as background jobs.
    2. Fill the setup tables year by year, with posting periods from 1 Jan to 31 Dec of each year, as background jobs.
    This can make the setup-table fill easier and faster. After filling the setup tables, you can unlock all users, as there are no worries about postings.
    After that you can load all the data into BI, first into the PSA and then into the cube.
    Regards,
    Ravi Kanth.
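    For reference, the kind of customer exit mentioned in the question (populating an appended Z-field during extraction) is usually written in include ZXRSAU01 of exit EXIT_SAPLRSAP_001 for transaction data. A minimal sketch, assuming a hypothetical field ZZLIFNR appended to the 2LIS_02_ITM extract structure MC02M_0ITM:

    " Include ZXRSAU01 (customer exit EXIT_SAPLRSAP_001, transaction
    " data). ZZLIFNR on MC02M_0ITM is a hypothetical appended field;
    " i_datasource and c_t_data come from the exit's interface.
    DATA: l_s_item TYPE mc02m_0itm,
          l_tabix  TYPE sy-tabix.

    CASE i_datasource.
      WHEN '2LIS_02_ITM'.
        LOOP AT c_t_data INTO l_s_item.
          l_tabix = sy-tabix.
          " Derive the appended field from the PO header.
          SELECT SINGLE lifnr FROM ekko
            INTO l_s_item-zzlifnr
            WHERE ebeln = l_s_item-ebeln.
          MODIFY c_t_data FROM l_s_item INDEX l_tabix.
        ENDLOOP.
    ENDCASE.

    (The SELECT inside the loop is kept only for brevity; for volume loads, buffer the lookups in an internal table.)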

  • How to extract HRM master data from R/3 into LDIF file?

    Recently I have been asked to provide an extract from our R/3 system
    with some Human Resource master data. The extract has to be in the LDIF
    format (LDAP data interchange format). It is needed to import into a
    DirX metahub solution from Siemens.
    How can this be done most easily?
    (does SAP provide tools, can XI do this?) or do we have to write a
    customized abap to do this?
    Thanks in advance
    Kind regards
    Alex Veen


  • LO Extraction approx time duration for PO Items (2lis_02_itm)

    Hi All,
    We are supposed to use 2LIS_02_ITM, which is the purchase order line item DataSource, and it has to be initialized in our organization, as previously no one was using it. Since we have to use LO extraction for PO items, we have been trying to prepare the cutover plan and estimate approximately how many hours we need to complete the extraction.
    As there should not be any postings during the LO setup extraction for PO items, we need to request system downtime for that duration. So we are trying to estimate the time required from the start till the end of the extraction. The data that we have is around 1.3 million records in the PO header table.
    We tried to fill the setup table in a LVT system and it took 39 hours to complete.
    I would like to know, from any of your experiences, how long it took and how much data was loaded in your case.
    I need your expert advice on this. Also, is there any way we can initialize the system without taking it down during extraction?
    Appreciate your replies.
    Thanks in advance.

    Hi,
    There is no way for us to predict the runtime, since it varies from system to system according to memory and other conditions.
    Also, the header table is not a real indication of your runtime; you need to look at the line items (table EKPO). This is because any LO setup program always fills the line items, and these have the most data.
    This is what I suggest:
    1) Take dumps of the document ranges in the EKPO table.
    2) Split the above document ranges into even chunks, e.g. doc# 1-100 contains 500 items, doc# 101-150 contains 500 items. The item counts should be more or less the same (see the sizing sketch after this list).
    3) Take a subset of these, say 1000 records, and do a setup run without blocking the documents (there is a checkbox in the program). Schedule the setup as a background job. In this way, you get the exact runtime of the program for 1000 records. You can then predict the runtime for the ranges mentioned in step 2.
    4) You then need to create parallel jobs, each job containing a different document range (this works because setup tables can be filled in parallel). By optimizing the number of items and the number of parallel jobs, you can get as small a time as possible and minimize the downtime.
    5) Once the jobs have completed, you can run an init without data transfer and then unlock the system. You can then do repair fulls while the system is up, since the repairs are done from the setup tables. Once all the loads are completed, you can begin running your delta (deltas would already have started being captured the moment you ran the init without data transfer).
    Regards.
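    A minimal sketch of the range sizing in steps 1-2, assuming a small ABAP report; the EBELN boundaries are placeholders to be derived from your own EKPO document dump:

    REPORT z_ekpo_range_size.
    " Count PO line items per document-number range so the parallel
    " setup-table jobs get chunks of roughly equal size. The EBELN
    " boundaries below are placeholders.
    DATA: l_items TYPE i.

    SELECT COUNT( * ) FROM ekpo INTO l_items
      WHERE ebeln BETWEEN '4500000000' AND '4500099999'.
    WRITE: / 'Items in range 1:', l_items.

    SELECT COUNT( * ) FROM ekpo INTO l_items
      WHERE ebeln BETWEEN '4500100000' AND '4500199999'.
    WRITE: / 'Items in range 2:', l_items.

    Adjust the boundaries until each range returns roughly the same item count, then use one range per background job in the setup transaction (OLI3BW for purchasing).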

  • Extract logic/tables/sequence?

    Hello,
    I'm curious about the sequence in which the different kinds of extraction take place, leading to data loading into the BW system (either master or transaction data).
    Is this a correct assumption:
    1. Master data: during execution of the InfoPackage, the function module of the called DataSource (alternatively, a view is used as source and no function module is needed) reads the source tables (e.g. MARA) and returns the data via the rules of the extract structure (incl. appends/user exits).
    2. Transaction data init: during execution of the InfoPackage, the function module of the called DataSource (alternatively, a view is used as source and no function module is needed) reads the source tables (GLPCA) and returns the data via the rules of the extract structure (incl. appends/user exits). After the load, a delta table is generated. Within e.g. SD, you have to generate and fill a setup table in order to be able to load the init.
    3. Transaction data delta: new postings are added to the delta table (in PCA this happens automatically, while in e.g. SD you have to schedule this procedure), from which the new postings are fetched (via function module or view) on execution of the InfoPackage in BW.
    Is this the way it works in general? Are there any good documents on this topic? Please mail to [email protected]
    Regards,
    F C

    1. For master data like material, the extractors behave almost the same way.
    2. For transaction data, it is different for LO compared to non-LO extractors. For LO transaction data, there are some extra steps you need to do, like filling the setup tables and choosing an update method (V1, V2, V3).
    3. For delta, like I said in 2 above, it is slightly different for LO extractors.
    There is lots of info on this subject in SDN. Searching with the keywords LO or cockpit will be very useful.
    Ravi Thothadri

  • Please tell me the LO Extraction procedure

    Hi BI Experts, Good morning.
    I am very confused about LO extraction.
    Can anyone please tell me the procedure?
    Please don't give any links.
    Just tell me the steps one by one.
    Thanks & Regards
    Anjali

    Hi
    The best material available on this subject is the Blog written by Roberto.
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    Anyway, to begin with, let me try to give you a preliminary procedure. As you may be aware, the LO Cockpit consists of many logistics-related standard DataSources.
    In the LO Cockpit, you can maintain and modify the logistics DataSources and do other related activities.
    To begin with, you access the LO Cockpit from T-code LBWE or from SBIW.
    Prior to this, transfer the logistics DataSource from Business Content (RSA5).
    Step 1 Maintain extract structure: the extract structures can be maintained by you or by SAP. From the list of fields that are available in the extract structure you can select the necessary fields, or you can opt to enhance the structure with Z-fields too. In the latter case you need to write an exit to populate the data for the added Z-fields.
    Step 2 Maintain DataSource: here you can maintain the general settings for the DataSource, like selection fields and so on.
    Step 3 Delete the contents of the setup tables (T-code LBWG): the content of the setup tables is used for the delta initialization, so clear out any old data first.
    Step 4 Replicate and activate the DataSource in the BI system, and maintain the data target, transformation and DTP settings.
    Step 5 Populate the setup tables with historical data (T-code OLI*BW, where * refers to the identification of the specific application, say 3 for purchasing, 7 for sales), or you can do the same from SBIW. Ensure that there are no postings in the respective application during this step.
    Step 6 Create an InfoPackage and initialize the delta in the scheduler.
    Step 7 Maintain the job control parameters and update mode.
    Step 8 Now you can schedule your V3 job.
    Regards,
    Shyam

  • COPA Extraction

    Hi Friends,
    I'm new to BI7.
    1) Please provide detailed info on how to go about COPA extraction.
    2) I need to make customer-specific DataSources, so please guide me on this.
    3) Since my existing COPA reports are from Report Painter/ABAP, they drill down across multiple screens. Is multiple drill-down functionality possible in BI? If so, please provide details on how to make a sample drill-down report.
    4) What are the crucial steps to take in COPA extraction?
    Please provide any links/ppts/pdfs.
    It will be highly appreciated.
    Thanks & Regards.

    CO-PA:
    CO-PA collects all the OLTP data for calculating contribution margins (sales, cost of sales, overhead costs).
    CO-PA also has powerful reporting tools and planning functions; however, CO-PA's reporting facility is limited, because its integrated cross-application reporting concept is not as differentiated as it is in BW, and because the OLTP system is optimized for transaction processing, so a high reporting workload has a negative impact on the overall performance of the system.
    Flow of Actual Values:
    During billing in SD, revenues and payments are transferred to profitability segments in Profitability Analysis; at the same time, sales quantities are valuated using the standard cost of goods manufactured.
    In overhead cost controlling, primary postings are made to objects in overhead cost controlling and assigned to the relevant cost object. The actual cost of goods manufactured is also assigned to cost objects, and at the same time the performing cost centers are credited.
    The production variances calculated for the cost objects, i.e. the difference between the actual cost of goods manufactured and the standard costs, are divided into variance categories and settled to profitability segments (for example from production orders).
    "What are the top products and customers in our different divisions?" This is one of the typical questions that can be answered with the CO-PA module.
    Characteristics are the fields in an operating Concern according to which data can be differentiated in Profitability Analysis.
    Each characteristic in an operating concern has a series of valid characteristic values.
    A profitability segment is a fixed combination of valid characteristic values.
    Characteristics:
    Some characteristics are predefined in the operating concern, like Material, Customer, and Company Code. In addition to these fixed characteristics we can define up to 50 characteristics of our own. In most cases we will be able to satisfy our profitability analysis requirements with between 10 and 20 characteristics.
    Value Fields:
    Key figures like revenue, cost of goods sold, and overhead costs are stored in value fields.
    Organizational Structure:
    The value fields and characteristics that are required to conduct detailed analysis vary from industry to industry and between individual customers.
    In CO-PA we can configure the structure of one or more operating concerns in each individual installation.
    An operating concern is an organizational structure that groups controlling areas together, in the same way that controlling areas group company codes together.
    Database Structures in CO-PA:
    Actual line items table: CE1xxxx
    Plan line items table: CE2xxxx
    Line items contain some information at document level that in most cases is too detailed for analysis, for example the CO-PA document number, sales document number, and posting date.
    CO-PA maintains summarizations of this data, used by all CO-PA functions like reporting, planning, assessments, settlements and so on.
    Segment Table: CE4xxxx
    The characteristics that describe the market are first separated from the rest of the line items.
    Each combination of characteristic values is stored under a profitability segment number. The link between the profitability segment number and the characteristic values is maintained in the segment table.
    Segment Level: CE3xxxx
    The value fields are summarized at profitability segment and period level and stored together with these fields in table CE3xxxx.
    This table contains the total values of the period for each profitability segment number.
    Storage Procedure:
    We can compare an operating concern, represented by its segment table and segment level, to an InfoCube: the segment table corresponds to the dimension tables and the segment level to the fact table.
    Unlike a fact table, the segment level key contains other keys, like the processing type, in addition to the key field from the segment table.
    Characteristics in CO-PA correspond to characteristics (or attributes) in the InfoCube.
    Value fields can be regarded as key figures.
    Summarization levels in an operating concern have the same function as aggregates for an InfoCube; the difference is that aggregates of an InfoCube are managed together with the InfoCube itself, while summarization levels are updated at regular intervals, usually daily.
    Line items in CO-PA are comparable with an operational data store.
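    To make the table relationship concrete, a read of period totals for one customer might look like the following sketch. Operating concern S001, characteristic KNDNR and value field VV010 are assumptions; the actual table and field names depend on your operating concern:

    " The segment table CE4xxxx maps each characteristic combination to
    " a profitability segment number (PAOBJNR); the segment level
    " CE3xxxx holds the period totals per PAOBJNR.
    DATA: lt_totals TYPE TABLE OF ce3s001.

    SELECT lvl~paobjnr lvl~perio lvl~vv010
      INTO CORRESPONDING FIELDS OF TABLE lt_totals
      FROM ce4s001 AS seg
      INNER JOIN ce3s001 AS lvl ON lvl~paobjnr = seg~paobjnr
      WHERE seg~kndnr = '0000001000'.   " one example customer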
    Data Staging Overview:
    To provide DataSources for BW, all CO-PA DataSources must be generated in the source system.
    Data Sources can be defined at Operating Concern and client level.
    Data Source contains the following Information
    Name of the Operating Concern
    Client
    Subset of the Characteristics
    Subset of the Value fields
    A time stamp indicating up to which point data has already been loaded into BW.
    Creating Data Source:
    Since a DataSource is always defined at operating concern and client level, a standard name is always generated, starting with 1_CO_PA_<%CL>_<%ERK>. We can change this name if necessary; however, the prefix 1_CO_PA is mandatory.
    Data Source-Segment Table:
    Characteristics from the segment table are the characteristics that are maintained in transaction KEQ3.
    By Default all characteristics are selected.
    Data Source-Accounting Base:
    When we generate a CO-PA DataSource, select the accounting-based option.
    The fields KOKRS, BUKRS, and KSTAR are compulsory.
    There are no characteristics available from the line items, because accounting-based DataSources do not contain characteristics.
    There are no value fields or calculated key figures available.
    KOKRS and PERIO must be selected as selection fields.
    Hope it helps you.

  • Processing overdue error during delta extraction from datasource 0CO_OM_CCA

    Hi,
    I'm getting a "Processing overdue" error in BW while extracting the delta from DataSource 0CO_OM_CCA_9. All other extraction jobs from R/3 -> BW are successful. Even the delta init on this DataSource is successful; the problem is only with the delta package.
    I would appreciate it if someone could provide information based on the following error details.
    Here are the extraction steps we followed:
    Full load of fiscal years 2006 & 2007 into the transactional cube.
    Load budget data into the transactional cube.
    Compression of the cube with "zero elimination".
    Delta initialization with fiscal period selections 1/2008 to 12/9999.
    All the above steps were successful, but when the delta package is scheduled, we get the following errors.
    BW system log
    The BW monitoring job is turning red with the following message:
    Technical : Processing is overdue
    Processing Step : Call to BW
    Sending packages from OLTP to BW lead to errors
    Diagnosis
    No IDocs could be sent to the SAP BW using RFC.
    System response
    There are IDocs in the source system ALE outbox that did not arrive in
    the ALE inbox of the SAP BW.
    Further analysis:
    Check the TRFC log.
    You can get to this log using the wizard or the menu path "Environment -> Transact. RFC -> In source system".
    Removing errors:
    If the TRFC is incorrect, check whether the source system is completely
    connected to the SAP BW. Check especially the authorizations of the
    background user in the source system.
    R/3 job log
    Even after the BW job turns red, the R/3 job continues to run for 2 hours and eventually gets cancelled with an ABAP dump. Here is the log:
    Job started
    Step 001 started (program SBIE0001, variant &0000000110473, user ID BWREMOTE) DATASOURCE = 0CO_OM_CCA_9
    Current Values for Selected Profile Parameters
    abap/heap_area_nondia.........2000000000 *
    abap/heap_area_total..........2000000000 *
    abap/heaplimit................20000000 *
    zcsa/installed_languages......EDJIM13 *
    zcsa/system_language..........E *
    ztta/max_memreq_MB...........2047 *
    ztta/roll_area................6500000 *
    ztta/roll_extension...........2000000000 *
    ABAP/4 processor: SYSTEM_CANCELED
    Job cancelled
    Thanks,
    Hari Immadi
    http://immadi.com
    SEM BW Analyst

    Hi Hari,
    We were recently having similar problems with the delta for CCA_9, and activating index 4 on table COEP resolved our issues.
    Yes, by default there is a 2 hour safety interval for the CCA_9 DataSource.  You could run this extractor hourly but at the time of extraction you will only be pulling postings through 2 hours prior to extraction time.  This can be changed for testing purposes but SAP strongly discourages changing this interval in a production environment.  SAP does provide an alternative described in OSS note 553561.  You may check out this note to see if it would work for your scenario.
    Regards,
    Todd

  • 0HR_PT_1 Extraction issue No entry in HR table T569R for 0106

    hi experts,
    We are implementing the Time and Labour cube (0PT_C01) and trying to extract from ECC to the BW cube.
    When I extract data from the 0HR_PT_1 DataSource, the PSA request stays yellow for a long time. When I check the issue in ECC, it gives the below messages:
    No entry in HR table T569R for 0106
    No entry in HR table T569R for 0105
    No personal work schedule for personnel number 00013090
    No personal work schedule for personnel number 00013145
    No personal work schedule for personnel number 00500009
    No personal work schedule for personnel number 00500026
    How do I rectify the above issues and make the load successful?
    Regards
    venuscm

    Hi,
    Do the below, as per note 696836.
    Using the view V_T569R, maintain the retroactive categories
    1. 05 'Earliest time data carry-over'
    2. 06 'Latest time data carry-over'
    which constitute the time frame in which the system will perform an extraction. Outside this time frame, the system will neither select any data nor calculate any delta. SAP recommends that you choose a period that includes approximately the current year. You have to maintain and update this time window periodically.
    Regards,
    Anil Kumar Sharma. P

  • Problem in Data extraction for NEW GL DSO 0FIGL_O10

    Hi ,
    I am facing a problem in the extraction of records from SAP to BW.
    I have installed the Business Content for the New GL DSO 0FIGL_O10.
    When I extract the data from SAP R/3 to this DSO (0FIGL_O10), the records are getting overwritten.
    For example, when I go to the Manage option (InfoProvider administration), the transferred records and the added records are not the same; the added records are fewer than the transferred records.
    This is happening because of the key field definitions.
    I have 16 characteristics in the key fields, which is the maximum I can have, but in some cases the data coming from the source is only unique across more than these fields.
    As a result, the data gets overwritten/aggregated in the DSO, and hence my balances do not match SAP R/3 for the G/L accounts.
    There are a total of 31 characteristics in the DataSource (0FI_GL_10), of which only 16 I can include in the key field area.
    Please suggest some solution.
    Regards,
    Nilesh Labde

    Hi,
    For safety, the delta process uses a lower interval setting of one hour (this is the default setting). In this way, the system always transfers all postings made between one hour before the last delta upload and the current time. The overlap of the first hour of a delta upload causes any records that are extracted twice to be overwritten by the after-image process in the ODS object with the MOVE update. This ensures 100% data consistency in BW.
    But you can achieve your objective in a different manner: make a custom InfoObject ZDISTINCT and populate it in the transformation using ABAP code (a sketch follows below). In the ABAP, try to compound the values from the different characteristics so that one compounded characteristic is made. Use ZDISTINCT in your DSO as a key.
    Just a thought; maybe it can solve your problem.
    Ravish.
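    A minimal sketch of such a field routine in a BW 7.x transformation, assuming the generated routine frame; ZDISTINCT and the source field names (taken here from typical 0FI_GL_10 fields) are assumptions, so pick whichever source fields actually make your records unique:

    METHOD compute_ZDISTINCT.
      " Field routine for the hypothetical key InfoObject ZDISTINCT:
      " concatenate the characteristics that distinguish the records
      " so the DSO key becomes unique.
      CONCATENATE source_fields-racct
                  source_fields-rcntr
                  source_fields-prctr
                  source_fields-rbusa
             INTO result SEPARATED BY '/'.
    ENDMETHOD.

    Keep an eye on the InfoObject length: the concatenated value must fit within ZDISTINCT's defined length (a characteristic can be at most 60 characters).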
