Data Load in Production Issue

Hi All!
1) Loaded data from DataSource 2LIS_11_VAITM to the PSA successfully.
2) While loading from the PSA to the data target, the activation fails due to invalid characters.
I have a few options:
1) There are 200 data packets; correcting them manually is not possible due to the huge volume.
2) Add the invalid characters in RSKC. I don't see any other approach.
3) Writing a routine is also not possible, as this is high priority.
Is the RSKC approach OK to use? Please advise.
With Regards
Rekha

Hi Rekha,
I faced the same issue earlier and resolved it with RSKC.
I think you should go ahead with option 2)... not a problem at all.
Coding a routine or a function module is a somewhat time-consuming process for this small issue.
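If you ever do need the routine later, a minimal sketch would look roughly like this (classic ABAP; the structure and field names are placeholders, not from your system):

CONSTANTS: gc_allowed(64) TYPE c
  VALUE ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
DATA: lv_value(60) TYPE c.

lv_value = TRAN_STRUCTURE-zz_text.   " assumed source field
TRANSLATE lv_value TO UPPER CASE.    " characteristic values must be upper case
WHILE lv_value CN gc_allowed.        " SY-FDPOS = offset of first invalid character
  lv_value+sy-fdpos(1) = ' '.        " blank it out
ENDWHILE.
RESULT = lv_value.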
Thanks-RK

Similar Messages

  • Data Load in Production

    Hi Gurus!
    I need to perform the data load activities for years 2007, 2008, 2009 & 2010. I already performed the deletion & filling of the SETUP tables, but it is pulling 0 records into BI.
    Please reply at the earliest; it's very critical. Points will be assigned when it gets resolved. Thanks.
    With Regards
    Rekha

    Hi ,
    > I need to perform the data load activities for years 2007, 2008, 2009 & 2010. I already performed the deletion & filling of the SETUP tables, but it is pulling 0 records into BI.
    Since you are loading data for 2007, 2008, 2009 & 2010, I assume this is selective loading with respect to year, and that it is an init load.
    Check the R/3 - BW connection.
    Check the data in RSA3.
    Replicate the DataSource.
    Regards,
    Sultan.

  • Master data loading special character issue in BW

    Hi ,
    While extracting material master data from the R/3 system, the load is failing due to special characters, even after enabling them in RSKC.
    Is there any length restriction in RSKC for enabling special characters, e.g. only 20 or 30?
    If I specify ALL_CAPITAL in RSKC, what exactly will it do?
    Regards
    Ravi

    Hi,
    No small letters are permitted for characteristic values (in the InfoObject maintenance). It depends, however, on the language being used as to whether a character is capitalized.
    It is guaranteed that the characters ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' are capitalized in all languages. Thus these characters are permitted for characteristic values.
    Example:
    The bit combination that stands for an 'Ö' in German displays a small 'sch' in the Russian character set. In this way, a German 'Ö' is not capitalized in Russian.
    In the German character set, the bit combination for the Russian capital 'SCH' shows a mirror-image 'P'. The conversion of small letters into capitals is therefore language-dependent.
    If you have a system that delivers permitted characters other than ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', you can enter these characters here. The system then checks whether these characters are capitalized in all of the languages installed in your system. If this is the case, these characters are also accepted in characteristic values.
    You should only add additional characters to the permitted characters if you have already installed all the languages that you wish to use later. Languages that you install subsequently are not checked.
    It is then possible that the validity of these additional characters is not guaranteed, and the selectability of the entire master data is therefore put at risk.
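    For illustration only, the permitted-set check described above can be emulated in a few lines of classic ABAP (the sample value is made up):

    CONSTANTS: gc_allowed(64) TYPE c
      VALUE ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
    DATA: lv_value(20) TYPE c VALUE 'MAT-0815#'.

    TRANSLATE lv_value TO UPPER CASE.   " capitalization is language-dependent
    IF lv_value CO gc_allowed.          " CO = consists only of allowed characters
      WRITE: / 'Value is permitted for characteristic values'.
    ELSE.
      WRITE: / 'Invalid character at offset', sy-fdpos.
    ENDIF.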

  • Master Data Load to APO from SAP MDM through SAP PI

    Hi SDners,
    This is a parts master data solution for one of the largest auto manufacturers, where SAP MDM will be the central hub of all the global and local attributes of parts. The subscribing system is SAP SCM APO. Please advise if there is any possibility of direct integration between APO and MDM through PI. Does PI have standard IDoc types to communicate with APO?
    Also, does APO have an RFC/BAPI to load master data into the Product Master (/n/sapapo/mat1)?
    Thanks,
    Prabuddha

    Hi,
    Check the LUWs in SM58 in the source system and then execute them. Otherwise, check in BD87 and push the IDocs manually in ECC.
    Thanks
    Reddy

  • Data Load Issue "Request is in obsolete version of DataSource"

    Hello,
    I am getting a very strange data load issue in production. I am able to load the data up to the PSA, but when I run the DTP to load the data into 0EMPLOYEE (a master data object), I get the below message:
    Request REQU_1IGEUD6M8EZH8V65JTENZGQHD not extracted; request is in obsolete version of DataSource
    The request REQU_1IGEUD6M8EZH8V65JTENZGQHD was loaded into the PSA table when the DataSource had a different structure to the current one. Incompatible changes have been made to the DataSource since then and the request cannot be extracted with the DTP anymore.
    I have taken the following actions:
    1. Replicated the DataSource
    2. Deleted all requests from the PSA
    3. Activated the DataSource (using RSDS_DATASOURCE_ACTIVATE_ALL)
    4. Re-transported the DataSource, transformation, and DTP
    I am still getting the same issue.
    If you have any ideas, please reply ASAP.
    Samit

    Hi
    Generate your DataSource in R/3, then replicate it and activate the transfer rules.
    Regards,
    Chandu.

  • Error in 0EMPLOYEE Master Data Load Issue

    Hi,
    We have 0EMPLOYEE master data. Due to new development changes related to 0EMPLOYEE, we scheduled 2 new InfoPackages with a personnel number range. While creating the InfoPackages, we forgot to maintain the time interval 01.01.1900 to 31.12.9999; instead, the default range of 24.04.2009 to 31.12.9999 was selected. Because of this selection in the InfoPackages, the Valid From date in the employee master data was changed to 24.04.2009 for all employees after the data load.
    Even after correcting this selection and reloading the data, the Valid From dates are not being corrected.
    Can you please advise how we can fix this issue ASAP, as it is a production issue?
    Thanks!
    Best regards,
    Venkata

    > Even after correcting this selection and reloading the data, the Valid From dates are not being corrected.
    For this, your ONLY option may be to delete the 0EMPLOYEE master data and reload it. To do that, you also need to delete the dependent transaction data.
    Cheers,
    Sree

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the relevant transaction codes. It is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes. It is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
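    As referenced in tip 11, here is a minimal start-routine sketch of the buffer/array-operation pattern. All table, field, and structure names are assumptions for illustration, not taken from any specific system:

    " Read the lookup data once per data package into an internal table
    " instead of issuing one SELECT SINGLE per record.
    TYPES: BEGIN OF ty_mat,
             matnr TYPE mara-matnr,
             matkl TYPE mara-matkl,
           END OF ty_mat.
    DATA: lt_mat TYPE STANDARD TABLE OF ty_mat,
          ls_mat TYPE ty_mat.

    IF NOT DATA_PACKAGE[] IS INITIAL.
      SELECT matnr matkl FROM mara
        INTO TABLE lt_mat
        FOR ALL ENTRIES IN DATA_PACKAGE
        WHERE matnr = DATA_PACKAGE-material.    " assumed field name
      SORT lt_mat BY matnr.
    ENDIF.

    LOOP AT DATA_PACKAGE.
      READ TABLE lt_mat INTO ls_mat
           WITH KEY matnr = DATA_PACKAGE-material BINARY SEARCH.
      IF sy-subrc = 0.
        DATA_PACKAGE-matl_group = ls_mat-matkl. " hypothetical target field
        MODIFY DATA_PACKAGE.
      ENDIF.
    ENDLOOP.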
    Hope it Helps
    Chetan

  • Purchase order delivery schedule tab, no goods issue date, loading date?

    Hi Experts
    We found some purchase orders where the PO item has no goods issue date, loading date, and so on.
    The material has the ATP check set on the MRP3 view. What could possibly be wrong? Is something wrong with the material setup, or something else?
    Thanks
    Alice

    Hi
    Thanks very much for your help. You mentioned "SD delivery of the sending plant is created and processed."
    Do you mean that the delivering plant in the sales view of the material master will impact this PO's GI date and loading date? Could something be wrong there?
    Thanks
    Alice

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4 using the Data Load Table concept. In the lookup I used the Where Clause option, but the where clause does not seem to be working. Please help me with this.

    Hi all,
    It looks like this where clause is not filtering the 'N' data. Please help me solve this.

  • Issues with ondemand Data loader

    Hello,
    We are facing 2 issues with on demand data loader.
    Issue 1
    While inserting 'Contacts' and 'Assets', if the 'Account' information is wrong, the records are created without accounts, even though "Account" is a required field.
    Issue 2
    While inserting records, the data loader is not checking for duplicates, so duplicate records are getting created.
    Kindly advise if anyone has come across similar issues. Thanks
    Dipu

  • Most common BW data load errors in production and how to solve them ..

    Hi All,
    What are the most common BW data load errors in production, and how do you solve them?
    Is there any doc on this? If so, please send it across to this ID: [email protected]
    Thanks in advance.
    Rgrds
    shoba

    Hi,
    1) RFC connection lost.
    2) Invalid characters while loading.
    3) ALEREMOTE user is locked.
    4) Lower-case letters not allowed.
    5) While loading the data, an error message that a field in the record is not mapped to any InfoObject in the transfer rules.
    6) Object locked.
    7) "Non-updated IDocs found in source system".
    8) While loading master data, one of the data packages gets a red-light error message: master data/text of characteristic 'so and so' already deleted.
    9) Extraction job aborted in R/3.
    10) Request could not be activated because there is another request in the PSA with a smaller SID.
    11) Repeat of last delta not possible.
    12) DataSource not replicated.
    13) DataSource/transfer structure not active.
    14) IDoc or tRFC error.
    15) ODS activation error.

  • TileList data load issue

    I am having an issue where the data that drives a TileList works correctly when the TileList is not loaded on the first page of the application. When it is put on a second page in a ViewStack, the TileList displays correctly when you navigate to it. When the TileList is placed on the first page of the application, I get the correct number of items in the TileList, but the information the item renderer is supposed to display (a picture, caption, and title) does not appear. The strange thing is that a Tree populates correctly in the same situation. Here is the sequence of events:

    <!-- get_tree is the data for the tree; get_groups is the data for the TileList -->
    creationComplete="get_tree.send();get_groups.send();"

    <mx:HTTPService showBusyCursor="true" id="get_groups"
        url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myXMlist"
        source="{get_groups.lastResult.groups}"/>
    <mx:HTTPService showBusyCursor="true" id="get_tree"
        url="[some xml doc]" resultFormat="e4x"/>
    <mx:XMLListCollection id="myTreeXMlist"
        source="{get_tree.lastResult.groups}"/>

    The data providers of the TileList and the Tree are then set accordingly. I tried moving the data calls from creationComplete to the initialize event, thinking they would run earlier in the process and be done by the time final creation came about, but that didn't help either. I am at a loss as to why the Tree works fine no matter where I put it, but the TileList does not. It is almost as if the Tree and the TileList will sit and wait for the data, but the item renderer in the TileList will not. That would explain why clicking on the TileList still produces the correct sequence of events while the visual component of the TileList is just not working right. Anyone have any ideas?

    OK, so if the ASO value is wrong, then it is a data load issue and there is no point messing around with the BSO app. You are loading two transactions to the exact same intersection. Make sure your data load is set to aggregate values and not to overwrite.

  • Why is the delivery date the same as the transportation planning date, loading date, goods issue date & GR end date?

    Hi Experts,
    Why is the delivery date the same as the transportation planning date, loading date, goods issue date and GR end date?
    In the shipping tab I can see a Planned Deliv. Time of 170 days... what could be the reason?
    Many Thanks:
    Raj Kashyap

    Hi Jurgen,
    Thanks for the quick reply!
    But I did not find anything like that. What could the customizing be? And we are using GATP on the APO side.
    \Raj Kashyap

  • Data Load Issue

    Dears,
    We are trying to distribute the data load functionality to base users.
    However, in our tests it seems that these kinds of users can only use the "Replace by Security" option.
    If they try to run the data load without having selected this option, the system returns the following error:
    "No write access to period December"
    which is very strange, as the period December is available for input to the same users through data entry forms.
    Please help.

    This comes up when the user does not have write access to all the intersections in the subcube.
    Let's say you have security on Custom 1 with members A, B, and C, and a file with some data.
    If John only has Read Access to A, a replace by security will clear and load products B and C only.
    If John tries a Replace load, it will attempt to clear all members; however, because John does not have access to clear product A, you get the error you are seeing.

  • Loading master data in BI production

    Dear Experts,
    While loading master data in BI production, we are facing an error when updating IDocs.
    MONITOR - ADMINISTRATOR WORKBENCH
    STATUS:
    ERROR: Error when updating IDocs in the source system.
    DIAGNOSIS: Errors were reported in the source system during the IDoc update.
    SYSTEM RESPONSE: Some IDocs have error status.
    PROCEDURE:
    Check the IDocs in the source system. You do this using the extraction monitor. How you resolve the errors depends on the error message you get.
    So can anyone tell me why this type of error occurs? What could be the reason behind it?
    Thanks in advance.
    Warm Regards,
    Bunty.

    Hi Bunty,
    Your IDoc error is a general one; the first step is to find out the concrete error behind the IDoc. For this reason, please do the following checks and let us know the results:
    1) Execute transaction codes SM58, BD87 and WE05 and give us your feedback; tell us for each transaction code whether any errors are shown.
    2) Do you see any error if you go to transaction SM59, select the RFC destination for the source system, and run an authorization test?
    3) Please ensure that your remote user has the correct authorization profiles in both BW and R/3. You need to set the
       profiles for ALEREMOTE as follows:
       1) In BW, the remote user (usually 'ALEREMOTE') must have profile 'S_BI-WHM_RFC'.
       2) In R/3, the remote user (usually 'ALEREMOTE') must have profile 'S_BI-WX_RFC'.
    This is explained in more detail in note 150315.
    Also, you must ensure that you have sufficient DIA work processes in both your BW and R/3 systems. DIA work processes are necessary for processing IDocs, and if there are not enough of them, the load may hang in status 'yellow'. Note 561880 gives more information about this:
       1. Make sure there is always at least 1 more DIA than all other work processes together, for example 8 DIA for a total of 15 work processes (see also note 74141).
    4) Please check whether the IDocs get stuck in the source system; if these IDocs cannot be processed manually in transaction SM58, please implement the correction from note 1280898. I hope it helps you. Best regards. Fran.
