Data Load Failed: Urgent

Hi BW gurus,
Some COPA loads have failed. The error messages are:
"The selected source of data is locked by another process"
"Errors in source system"
Please advise.
Thanks,
Kumar

Hi,
As the message says, the source system is locked. This is the normal behaviour of this DataSource: when you run the delta again, the first request comes back with 0 from 0 records. Schedule the same InfoPackage once more and the records will then come through.
Thanks,
Vikram

Similar Messages

  • Data Loading Question - Urgent

    Dear Experts,
    I have to load 0FI_GL_4 and want to do an Init with Data Transfer, but there are 120+ million records. I want to split the data load into smaller chunks filtered by Fiscal Period, e.g. 199701 - 199712, 199801 - 199812, ..., 200801 - 200812.
    I would really appreciate it if someone could guide me on how to load the data in chunks. We are on BI 7.0.
    Thanks.
    Regards,
    Jamspam

    Hi,
    Check the record count for a period such as six months or a year, and split the selections so that each data set has a reasonable record count, say 10-15 million; then perform multiple inits for those selections.
    You can use RSA3 to check the record counts.
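    If you prefer to get the counts outside RSA3, a small report in the source system can give a rough feel for the volumes. This is only a sketch: the report name is made up, and it assumes the document headers in BKPF are a usable proxy for the 0FI_GL_4 line-item volume, so adjust the table and grouping to your data.
    * Sketch: count FI document headers per fiscal year in the source
    * system to size the init selections (roughly 10-15 million records
    * per chunk). BKPF is only a proxy for the GL line-item volume.
    REPORT z_fi_volume_check.

    DATA: lv_gjahr TYPE bkpf-gjahr,
          lv_count TYPE i.

    SELECT gjahr COUNT(*)
      FROM bkpf
      INTO (lv_gjahr, lv_count)
      GROUP BY gjahr
      ORDER BY gjahr.
      WRITE: / lv_gjahr, lv_count.
    ENDSELECT.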
    Hope this helps.
    Thanks,
    JituK

  • Data loading error - URGENT

    Hi all,
    One of the jobs pulling data from R/3 has failed. The error message I see in the Status section is below; can you please tell me what I should do for a successful data load? I appreciate your help.
    Request still running
    Diagnosis
    No errors could be found. The current process has probably not finished yet.
    System response
    The ALE inbox of the SAP BW is identical to the ALE outbox of the source system
    and/or
    the maximum wait time for this request has not yet run out
    and/or
    the batch job in the source system has not yet ended.
    Current status
    in the source system
    thanks,
    Sabrina.

    Hi all,
    I rescheduled the data package and I can see all the records extracted properly, but I guess the ODS activation is taking time. The status bar in the monitor says
    Data successfully updated
    Diagnosis
    The request has been updated successfully.
    InfoSource : Z_PARTNER_FUNCTION
    Data type : Transaction Data
    Source system: CRPCLNT300
    Step by step analysis has everything set to Green, but there is no Error Analysis tab.
    But in the Details tab,
    Extraction, Transfer, Subsequent Processing and ODS Activation are still yellow. Also, the overall job status on the left-hand side is still yellow. What should I do now? Please let me know.
    thanks all
    sabrina.

  • Data load error - urgent

    hi friends ,
    We are getting a data load error: the Status tab says "Process 000087 determined for InfoObject DUVTYPE started", and the Details tab says "Update (251 new / 160 changed): Errors occurred".
    What should I read from this information, what are the likely reasons for this load failure, and how can it be rectified?
    This load runs through a process chain and is also a delta load.
    Can I delete the request and start the InfoPackage manually? Also, will that turn my process chain green automatically, or should I repeat from the subsequent process?
    Please help.
    I will award points.
    regards,
    siddartha

    Hi siddharta,
    Look in the Details tab of the monitor to see the error. Be very cautious with deltas! They cannot always be repeated; it depends on the DataSource. What you can do in the meantime is delete the failed request from all(!) the data targets.
    Regards,
    Jzergen

  • ODS to cube data load - very urgent

    Hello all,
    I have started the init from ODS to cube (data packet size 1,200,000) and it is taking a very long time; there are 85 data packets. Can someone suggest how I can speed up the data load?
    Regards, Sanjeev

    Hi,
    1) Stop the load first. Check the job log and ST22 for any short dumps.
    2) Go to the generated 8* InfoSource (the export DataSource of the ODS).
    3) Create a new InfoPackage with the processing option "PSA and then into data targets".
    4) It will take a little time, but it won't fail.
    5) This will process the load package by package.
    Hope it helps.
    Regards,
    AK

  • 0HR_PY_PP_1 posting index data load failure - urgent

    Hi All,
    We are extracting the posting index data (delta mode) from R/3 DataSource 0HR_PY_PP_1 to the ODS 0PY_PP_1. This is in the QA system.
    When I trigger the InfoPackage, the load is successful with zero records (as there is no new delta). If I trigger the InfoPackage again, it throws the error messages below:
    "Selection criteria TRANSFERDATE is missing with the data request" and "Error in the source system".
    Could anyone let me know whether it is possible to load the payroll data a second time after the first load has updated successfully?
    If not, kindly let me know whether there are any SAP Notes to be applied.
    Currently a patching exercise is going on on the R/3 side. Is there any impact from this on the BW end?
    Please suggest how to go about this, as it is a very urgent issue.
    Many Thanks,
    Manjula

    Hi,
    Have you tried the extraction in RSA3?
    If you get the same error message there, ask the SAP HR team for help. It seems there is some issue with the mentioned PERNR and its payroll assignment; the HR team can explain it.
    If possible, run the InfoPackage with a different PERNR selection and see whether it pulls data.
    Thanks

  • Flat file data loading - Important - Urgent

    Hi all,
    I need a solution for the following requirements in my project.
    1) I am loading historical data (header & line item) from flat files to the sales overview cube (0SD_C03) through InfoSources 2LIS_11_VAHDR and 2LIS_11_VAITM.
    2) In the header InfoSource, the data type for 0SALESEMPLY is NUMC, but the flat file contains alphanumeric values for this InfoObject (example: 150AR), so only the numeric part (i.e. 150) is loaded into the InfoCube.
    3) In this situation, what has to be done to load the entire value (i.e. 150AR) into the InfoCube for this object?
    I have another situation for the same InfoCube, loading data from the same header flat file:
    1) The header flat file contains data for the Local Currency and Document Currency fields, but the data is not loading for these two fields into the InfoCube, and the InfoObjects 0LOC_CURRCY and 0DOC_CURRCY are not available in the InfoCube.
    2) In this situation, how can I add these two objects and pull data into them?
    Please let me know as soon as possible, and share your contact number so that I can reach you directly by phone.
    Thanks & Regards,
    Kishore

    Hi,
    For the first question, you need to change the InfoObject type to CHAR and give it an appropriate length based on the maximum length of that field in the flat file, then carry out the necessary follow-up steps such as reactivating the InfoSource.
    For the second question, you need to add the two fields at cube level. Before that, please verify whether the InfoObjects are available in your system; if not, create them first, assign them to the InfoCube, and reload the data.
    regards
    ramnaidu

  • Data Loading problem (URGENT!!!)

    Hi Gurus
    I am trying to load data to ODS 0FIAR_O03 with DataSource 0FI_AR_4. All the settings on the ODS side are in place, but the system is throwing the following message, and the data is not even reaching the PSA:
    "ERROR Diagnosis
    The data request was a full update.
    In this case, the corresponding table in the source system does not
    contain any data.
    System response
    Info IDoc received with status 8.
    Procedure
    Check the data basis in the source system."
    Please let me know the possible cause of error.
    **Points Assured***
    Regards

    Check the IDocs in BD87. If you find IDocs stuck there, process them manually. Also check the RFC connection.
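    If you want to see the stuck IDocs as a list rather than in BD87, a small check on the IDoc control table can help. This is only a sketch; it reads EDIDC for status 64 (inbound IDoc ready to be passed to the application), which is the typical "stuck" status for BW load IDocs.
    * Sketch: list inbound IDocs still waiting to be passed to the
    * application (status 64). BD87 shows the same thing interactively.
    DATA: lv_docnum TYPE edidc-docnum,
          lv_mestyp TYPE edidc-mestyp,
          lv_credat TYPE edidc-credat.

    SELECT docnum mestyp credat
      FROM edidc
      INTO (lv_docnum, lv_mestyp, lv_credat)
      WHERE status = '64'.
      WRITE: / lv_docnum, lv_mestyp, lv_credat.
    ENDSELECT.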
    Khaja

  • Master data loading error (urgent)

    hi all,
    When I load master data using the 0INFO_REC_ATTR DataSource, I get the error "Value 'CV14AL1BM ' for characteristic 0MATERIAL is in external format" and
    "Record 1 :0MATERIAL : Data record 1 ('5300000000 '): Version 'CV14AL1BM ' is not valid".
    Can anyone please help me?
    Thanks
    sada.k

    Hi sada,
    If you put your code in the start routine of the transfer rules, just add this piece of code there:
    FORM STARTROUTINE
      USING    G_S_MINFO TYPE RSSM_S_MINFO
      CHANGING DATAPAK type TAB_TRANSTRU
               G_T_ERRORLOG TYPE rssm_t_errorlog_int
               ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel datapackage
    *$*$ begin of routine - insert your code only below this line        *-*
    DATA: l_s_datapak_line TYPE TRANSFER_STRUCTURE,
          l_s_errorlog     TYPE rssm_s_errorlog_int.
    * Convert 0MATERIAL from external to internal format for every record
    * in the data package. Replace XXXX with the conversion exit assigned
    * to 0MATERIAL.
    LOOP AT DATAPAK INTO l_s_datapak_line.
      CALL FUNCTION 'CONVERSION_EXIT_XXXX_INPUT'
        EXPORTING
          input  = l_s_datapak_line-matnr
        IMPORTING
          output = l_s_datapak_line-matnr.
      MODIFY DATAPAK FROM l_s_datapak_line.
    ENDLOOP.
    * ABORT <> 0 means skip the whole data package!
    ABORT = 0.
    *$*$ end of routine - insert your code only before this line         *-*
    ENDFORM.
    This should solve your issue. You only have to make sure that XXXX is the name of the conversion exit for 0MATERIAL, which you can find in the maintenance screen of 0MATERIAL.
    Siggi

  • Data Load Failure - Urgent

    Hi All,
    When we load the standard InfoCube 0PUR_C01, the following error occurs:
    Cube :0PUR_C01
    INFOSOURCE : 2LIS_02_SCL.
    Time conversion from 0CALDAY to 0FISCPER (fiscal year ST ) failed with value
    05060905.
    Please propose a solution for this.
    Points Promised
    Regards
    Mahesh kumar

    Hi Mahesh,
    The problem lies in the fact that the fiscal year variants are not maintained properly in BW customizing or have not been transferred from the source system to BW.
    Generally the update rules calculate 0FISCPER from 0CALDAY based on the 0FISCVARNT entry.
    This is usually a direct update in the update rules.
    If there is no entry for the fiscal year variant in the incoming data record, the system cannot calculate the correct period.
    You can test this by entering a constant for the fiscal variant in the update rules or
    by reviewing the PSA data being loaded.
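    For illustration, such an update routine usually boils down to something like the sketch below. It is only a sketch: the COMM_STRUCTURE field names CALDAY and FISCVARNT are assumptions for your InfoSource, and the point is simply that the standard conversion fails when the fiscal year variant is initial, which matches the error you see.
    * Sketch of a typical 0FISCPER derivation in an update routine.
    * COMM_STRUCTURE field names are assumptions; check your InfoSource.
    DATA: lv_buper TYPE t009b-poper,
          lv_gjahr TYPE t009b-bdatj.

    CALL FUNCTION 'DATE_TO_PERIOD_CONVERT'
      EXPORTING
        i_date  = COMM_STRUCTURE-calday      "0CALDAY
        i_periv = COMM_STRUCTURE-fiscvarnt   "0FISCVARNT - must not be initial
      IMPORTING
        e_buper = lv_buper
        e_gjahr = lv_gjahr
      EXCEPTIONS
        input_false    = 1
        t009_notfound  = 2
        t009b_notfound = 3
        OTHERS         = 4.
    IF sy-subrc <> 0.
      RETURNCODE = 4.                        "flag the record as erroneous
    ELSE.
      CONCATENATE lv_gjahr lv_buper INTO RESULT.   "0FISCPER = YYYYPPP
    ENDIF.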
    Another possibility is that the fiscal year variant definition has not been
    transferred from the source system:
    -> RSA1 (Admin workbench)
       -> Source System
          -> Select Source system
             -> Right click for context menu
                -> Transfer Global Settings (to update fiscal variants)
    Try this.
    Hope this will solve your problem.
    Regards,
    Sreedhar

  • Data load problem - urgent

    Hi all,
    I deleted the requests from 05.07.2008 to 09.07.2008 from the ODS,
    but unfortunately the request of 05.07.2008 was also deleted from the PSA. The request is still displayed in the monitor screen.
    When I reload the data of 06.07.2008, it does not activate and shows a message telling me to activate the request of the 5th first.
    Can someone please help me recover the data that is no longer in the PSA, even though the request number is still displayed in the monitor screen?
    thanks in advance
    sridath

    Hi Sridath,
    I have already replied to your query: do a repair full request. That is the way to solve your issue, though you have not assigned any points for it.
    To overcome this error now, you have to delete all the previous requests from the Manage screen of the data target.
    Assign some points if the answer is helpful.
    Regards ,
    Subash Balakrishnan

  • Urgent help required - ASO data loading

    I am working on 9.3.0 and want to do incremental data loading so that it won't affect my past data. I am still not sure whether this is doable. The question is:
    do I need to design a new aggregation after loading the data?
    Thanks in advance

    Hi,
    An ASO cube will clear all of its data if you make structural changes to the outline. You can work out exactly which changes clear the data: for example, adding a new member clears it, whereas just adding a simple formula to an existing member might not.
    If you don't want to affect the past data and yet want to load incrementally, ensure that all the members are already in the outline (take care with the time dimension; make sure all the time members are in place), and then load using the 'add to existing values' option.
    But remember, you can only do this without structural changes; otherwise you need to load everything again.
    It is good to design an aggregation, as it helps retrieval performance, but it is not mandatory.
    Sandeep Reddy Enti
    HCC

  • Urgent: Master Data Load

    Hi All
    I have activated some master data InfoObjects, but I cannot find them in the InfoSource area to load master data (attributes, texts and hierarchies), although the application component is displayed in the InfoObject definition. Can someone tell me the reason for this?
    Thanks in Advance
    ~Alise

    Dear Alise,
    When activating the InfoObject, choose 'data flow before and after'; this makes sure its application area is also activated.
    Coming back to your query on the master data load: once you activate the InfoObject with 'data flow before and after', its InfoSource is also activated from the content version to the active version. In the InfoSource definition you will then find the communication structure and DataSource assignments (for texts, attributes and hierarchies) for the source system. Select them as per your requirement, make sure the field mapping is correct, and activate the InfoSource for texts, attributes and hierarchies.
    Now you can create InfoPackages for these elements (separate InfoPackages for texts, attributes and hierarchies) and load the master data to the InfoObject via the PSA, so that you can see the data in the PSA.
    Confirm that your data has reached the tables properly.
    I hope this resolves your problem.
    Kindly assign points if it helps.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. Urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after these tips). When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
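    To make point 11 concrete, here is a rough sketch of the buffered-lookup pattern in a start routine. The lookup table ZMAT_ATTR and all field names are made up for illustration; the point is to replace a SELECT SINGLE per record with one array read into an internal table that is then read in memory.
    * Sketch only - ZMAT_ATTR and the field names are illustrative.
    * Instead of a SELECT SINGLE per record (one database access per
    * row of the data package), buffer the lookup with one array read.
    TYPES: BEGIN OF ty_attr,
             matnr TYPE matnr,
             matkl TYPE matkl,
           END OF ty_attr.
    DATA: lt_attr TYPE SORTED TABLE OF ty_attr WITH UNIQUE KEY matnr,
          ls_attr TYPE ty_attr.

    IF NOT DATA_PACKAGE[] IS INITIAL.
      SELECT matnr matkl FROM zmat_attr
        INTO TABLE lt_attr
        FOR ALL ENTRIES IN DATA_PACKAGE
        WHERE matnr = DATA_PACKAGE-material.
    ENDIF.

    LOOP AT DATA_PACKAGE.
      READ TABLE lt_attr INTO ls_attr
        WITH TABLE KEY matnr = DATA_PACKAGE-material.
      IF sy-subrc = 0.
        DATA_PACKAGE-matl_group = ls_attr-matkl.
        MODIFY DATA_PACKAGE.
      ENDIF.
    ENDLOOP.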
    Hope it Helps
    Chetan
    @CP..

  • Error regarding data load into Essbase cube for Measures using ODI

    Hi Experts,
    I am able to load metadata for dimensions into the Essbase cube using ODI, but when we try the same for loading data for Measures, we encounter the following errors:
    Time,Item,Location,Quantity,Price,Error_Reason
    '07_JAN_97','0011500100','0000001001~1000~12~00','20','12200','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
    '28_JAN_97','0011500100','0000001300~1000~12~00','30','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
    '28_JAN_97','0011500100','0000001300~1000~12~00','500','667700','Cannot end dataload. Essbase Error(1003014): Unknown Member [0011500100] in Data Load, [1] Records Completed'
    Can anyone look into this and reply quickly, as it is an urgent requirement?
    Regards,
    Rohan

    We are having a similar problem. We're using the IKM SQL to Hyperion Essbase (DATA) knowledge module. We are mapping the actual data to the field called 'Data' in the model. But it kicks everything out saying 'Unknown Member [Data] in Data Load', as if it's trying to read that field as a dimension member. We can't see what we missed in building the interface. I would think the knowledge module would just know that the Data field is, um, data; not a dimension member. Has anyone else encountered this?
    Sabrina
