Issue in Data Loading in BW

Dear BW GURUS,
We are facing a data loading issue in BW. When we start the process chain, it shows a job ID, but the data loading does not start. We could not trace the log files for it.
Please throw some light on this. Any pointers would be appreciated.
Thanks in advance.
Regards,
Mohankumar.G

Hi Mohan,
By buffering the number ranges, the system reduces the number of database reads on the NRIV table, thus speeding up large data loads.
The SAP BW system uses number ranges when loading master data or transactional data into BW. The system assigns a unique master data number ID to each loaded record. For each new record it reads the number range table NRIV to determine the next number in the sequence. This ensures that there are no duplicate numbers; not using unique number range values could compromise the data's integrity.
Number ranges can cause significant overhead when loading large volumes of data, because the system repeatedly accesses the number range table NRIV to establish a unique number. If a large volume of data is waiting for these values, the data loading process becomes slow.
One way to alleviate this bottleneck is for the system to take a packet of number range values from the table and place them into memory. This way, the system can assign numbers to many of the new records from memory rather than repeatedly accessing and reading table NRIV. This speeds up the data load.
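As a rough sketch of what happens under the hood: for every new master data record, the load process draws the next number from a number range object, roughly like the call below. NUMBER_GET_NEXT is the standard number range interface, but the object name 'ZMYOBJ' is made up here, and the buffer size itself is maintained on the number range object in transaction SNRO, not in code. Treat this as an illustration only:

  * Sketch: fetch the next number for a (hypothetical) number range object.
  * Without main-memory buffering, every such call reads table NRIV.
  DATA lv_number(10) TYPE n.

  CALL FUNCTION 'NUMBER_GET_NEXT'
    EXPORTING
      nr_range_nr             = '01'       " interval of the number range object
      object                  = 'ZMYOBJ'   " hypothetical object name
    IMPORTING
      number                  = lv_number
    EXCEPTIONS
      interval_not_found      = 1
      number_range_not_intern = 2
      OTHERS                  = 8.
  IF sy-subrc <> 0.
    " handle the error (e.g. raise a monitor message)
  ENDIF.

With buffering switched on for the number range object, a whole block of numbers is taken into memory in one database access, so most of these calls no longer touch NRIV.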
Regards,
Vamsi Krishna Chandolu

Similar Messages

  • Issue with Data Load Table

    Hi All,
    I am facing an issue with APEX 4.2.4 using the Data Load Table concept. In the lookup I used the
    Where Clause option, but the where clause does not seem to be working. Please help me with this.

    Hi all,
    It looks like this where clause does not filter the 'N' data. Please help me figure out how to solve this.

  • Issue with Data Load to InfoCube with Nav Attributes Turned on in it

    Hi,
    I am having an issue with loading data to an InfoCube. When I turn
    on the navigational attributes in the cube, the data load fails
    and just says "PROCESSED WITH ERRORS". When I turn them off,
    the data load runs fine. I have run an RSRV test on both
    the InfoObject and the cube and it shows no errors. What
    could be the issue and how do I solve it?
    Thanks
    Rashmi.

    Hi,
    To activate a navigation attribute in the cube, the data does not need to be dropped from the cube.
    You can always activate the navigation attribute with data in the cube.
    I think you may have tried to activate it in the master data as well and then in the cube, or something like that?
    Follow the correct procedure and try again.
    Thanks
    Ajeet

  • Issue with Data Load

    Hi All,
    I am trying to load data from a CRM system to an ODS. I am coming across the warning
    "Processing (data packet): No data" (yellow).
    Everything else is fine: Requests (messages): Everything OK, Extraction (messages): Everything OK,
    Transfer (IDocs and TRFC): Everything OK.
    I checked the CRM box in RSA3 and I can see around 30 records for this extractor.
    I checked the IDocs in BD87 and everything looks fine. Can someone help with this issue?
    Thanks,
    Abraham

    In RSMO I don't see any data records being pulled. I tried loading data to the PSA, but it gives back the same warning messages in the Details tab:
    Requests (messages): Everything OK
    Extraction (messages): Everything OK
    Transfer (IDocs and TRFC): Everything OK
    Processing (data packet): No data (this is in yellow)
    But it says that the data load is successful even though it is pulling 0 records.

  • Latest PowerQuery issues with data load to data models built with older version + issue when query is changed

    We have a tool built in Excel + Power Query version 2.18.3874.242 - 32 bit (no PowerPivot) using load to the data model (not to the workbook). There are data filters linked to Excel cells that are inserted into the OData query before data is pulled.
    The Excel tool uses organisational credentials to authenticate.
    System config: Win 8.1, Office 2013 (32 bit)
    The tool runs for all users as long as they do not upgrade to PowerQuery_2.20.3945.242 (32-bit).
    Once upgraded, users can no longer get the data to load to the model. Data still loads to the workbook, but the model breaks down. Resetting the load to data model erases all measures.
    Here are the exact errors users get:
    1. [DataSource.Error] Cannot parse OData response result. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
    2. The Data Model table could not be refreshed: There isn't enough memory to complete this action. Try using less data or closing other applications. To increase memory available, consider ......

    Hi Nitin,
    Is this still an issue? If so, can you kindly provide the details that Hadeel has asked for?
    Regards,
    Michael Amadi
    Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
    Website: http://www.nimblelearn.com, Twitter: @nimblelearn

  • Common issues in data load monitoring...

    Hi all,
    Can you please tell me what common issues occur during data load monitoring and how to rectify them?

    This depends a bit on your system and landscape, but to mention some of the issues:
    - Deadlock in the cube because the indexes were not dropped before loading data into the cube (see the sketch below)
    - Disallowed characters in master data prevent the system from creating SIDs
    - Duplicate values in a master data load (can be avoided by setting a flag in the DTP)
    - Loading a delta package without having initialized the delta before
    - The source package delivers a key figure without units (this happens when loading into a cube or a DSO with SID generation)
    - Missing replication of the DataSource after transporting the DataSource in the source system
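    For the first point, the index drop/rebuild is normally handled by the Drop Index / Create Index process types in the process chain itself, but it can also be scripted. A minimal sketch follows; the function module names (RSDU_INFOCUBE_INDEXES_DROP / RSDU_INFOCUBE_INDEXES_REPAIR) are the ones commonly referenced for this, and 'ZSALES_C01' is a made-up cube name, so please verify the exact interface in SE37 on your release:

      * Sketch: drop the InfoCube's secondary indexes before a large load
      * and rebuild them afterwards, to avoid deadlocks and slow inserts.
      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
        EXPORTING
          i_infocube = 'ZSALES_C01'.   " hypothetical cube name

      * ... execute the data load here ...

      CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
        EXPORTING
          i_infocube = 'ZSALES_C01'.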
    Regards
    Aban

  • Issue with Data Loading

    Hi Experts,
    Need your kind help in overcoming the following issue.
    We are doing a full load to an ODS daily from a flat file system. Due to a change request, the update rule was changed in between, and later on we found out that it was not correct, so the update rule was corrected again. Now it is working fine, but there are still some requests where the data was updated in the ODS with the faulty update rule. All the data fields in the update rules for the ODS use the overwrite functionality. The problem is that we are not sure whether the faulty records in the ODS have since been overwritten by the subsequent requests, as we are not sure of the exact nature of the data coming in.
    From this ODS, one InfoCube and one more ODS are being updated. The key figure calculations in the cube use 'maximum', 'minimum' and 'addition'.
    And the load from the ODS to the cube is delta.
    Kindly let me know what we can do to close the above issue.
    I was planning to delete the concerned requests from the ODS and reconstruct them, but that way I can correct the data in the ODS only. I am not sure whether the changes made to the ODS data will then be relayed to the cube in the proper fashion; please let me know if that will be correct.
    All the flat file data is still residing in the PSA. Also, kindly let me know if there is any method to check the PSA for the requests spanning the whole period, apart from going to each request and checking it one by one.
    Thanks in advance.

    Hi Experts,
    My concern was: say I delete the requests from the day the data was loaded with the wrong update rule, I have the data sitting in the PSA and I can reconstruct them. I understand that the data can be corrected in the ODS.
    But in the cube, some key figures have the update method 'addition'. So say I delete the content of the cube and reload it from the ODS all over again, won't the key figures get corrupted yet again?
    Also, as I was thinking, say we delete the respective requests from the cube as well and reconstruct the data in the ODS: can I fire the delta request again to capture the changed records to be loaded to the cube? To be exact, will all the data from the date of the data corruption (as I am going to delete the requests from the cube from that day also) be loaded to the cube?
    Another thought: would it be a better idea to reconstruct one request in the ODS, run the delta package for the cube, and go on like that?
    Kindly suggest..
    Thanks in advance !!!

  • Issue with Data Load from R/3

    Hi,
    I have created a generic DataSource in R/3 based on the option Extraction from SAP Query. The SAP Query (InfoSet) created in transaction SQ02 is based on the tables BKPF and BSEG. I have generated a DataSource on this and successfully replicated it into BW, but when I trigger the load using an InfoPackage it immediately fails and gives the following messages:
    "If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request."
    "Job terminated in source system --> Request set to red"
    Can you point out where the issue is? I have tested the source system connection and it is fine, as other extractors are working fine and data can be pulled.
    Thanks
    Rashmi.

    Hi Rashmi,
    Try the following:
    - RSA3 in source system -> test extraction OK?
    - Shortdump (ST22) found in BW or R/3?
    - Locate the job of the extraction in R/3 (SM37) and look at the joblog...
    Hope this leads you to the cause...
    Grtx
    Marco

  • Issue: No Data loaded from Query - MDX, SQL issue

    Hello,
    I'm having a problem with a report based on a query with a hierarchy (0GLACCEXT hierarchy, data in cube 0FIGL_V10). The first version of the query had 3 filters, but I also tried to load the data without filters, thinking they caused the problem; that didn't fix the issue.
    If I put some characteristics into the report, the fields are empty. I did "browse data" in the database and found out that only the data for the "year" was fetched by the SQL expression, even though I have some similar reports with hierarchies based on 0GL_Account where the data is fetched correctly.
    I tried the following statement in MDXTEST
    "SELECT {[Measures].[D5LABJ7EYP982LUFOJAXZ3Z3E]} ON COLUMNS,  NON EMPTY [0GLACCEXT                     CVIS].MEMBERS DIMENSION PROPERTIES [0GLACCEXT                     CVIS].[40GLACCEXT] ON ROWS FROM [0FIGL_V10/ZEN_FIGL_V10_Q0001]"
    and the data is being loaded.
    What else can I try?
    PS: I just found out that the data is not fetched whenever I assign a hierarchy, no matter which one (could the issue be due to the size of the hierarchy?)
    Edited by: Elizaveta Nikitina on May 5, 2009 3:33 PM

    If this is still an issue, please re-post it to the Data Connectivity - Crystal Reports forum, or purchase a case and have a dedicated support engineer work with you directly.

  • Issue with Data Loading between 2 Cubes

    Hi All
    I have a Cube A which has a huge amount of data, around 7 years' worth. This cube is on BWA. In order to free up space from this cube we have created a new Cube B.
    We have now started to load data from Cube A to Cube B based on the Created On date, but we are facing a lot of memory issues, so we are unable to load even a week of data. As of now we are loading one date at a time, which is not practical as it will take a lot of time to load 4 years of data.
    Can you propose some alternate way to make this data transfer between the 2 cubes faster? I thought of loading Cube B from the DSO underneath Cube A, but that's not possible as the DSO does not hold data that old.
    Please share your thoughts.
    Thanks
    Prateek

    Hi SUV / All,
    I have tried running with parallel processes, as there are 4 for my system. There are no routines between cube and cube. There is already an MP on this cube. I just want to shift 4 years of data from this cube into another.
    1) Data packet size 10,000: 8 out of some 488 packets failed
    2) Data packet size 20,000: 4 out of some 244 packets failed
    3) Data packet size 50,000: waited for some 50 min with no extraction, so I killed the job.
    Error: Dump: Internal session terminated with runtime error DBIF_RSQL_SQL_ERROR (see ST22)
    In ST22:
    Database error text: "ORA-00060: deadlock detected while waiting for resource"
    Can you help resolve this issue or give some pointers?
    Thanks
    Prateek

  • Data load stuck from DSO to Master data Infoobject

    Hello Experts,
    We have an issue where a data load is stuck between a DSO and a master data InfoObject.
    Data loads from the DSO (standard) to the master data InfoObject.
    This InfoObject has display and navigational attributes which are mapped from the DSO to the InfoObject.
    We have now added a new InfoObject as an attribute to the master data InfoObject and made it a navigational attribute.
    Now, when we do a full load via DTP, the load is stuck and not processing.
    Earlier it took only 5 minutes to complete the full load.
    Please advise what could be the reason and cause behind this.
    Regards,
    santhosh.

    Hello guys,
    Thanks for the quick response.
    But nothing is proceeding further.
    The request is still running.
    Earlier this same data was loaded in 5 minutes.
    Please find the screen shot.
    Master data for the InfoObjects is loaded as well.
    In SM50 I can see the process sitting at the P table of the InfoObject.
    Please advise.
    Please find the details:
    Updating attributes for InfoObject YCVGUID
    Start of Master Data Update
    Check Duplicate Key Values
    Check Data Values
    Process Time-Dependent Attributes - green
    No Message: Process Time-Dependent Attributes - yellow
    No Message: Generates Navigation Data - yellow
    No Message: Update Master Data Attributes - yellow
    No Message: End of Master Data Update - yellow
    And nothing is moving further in SM37.
    Thanks,
    Santhosh.

  • Related Data Loading

    Hi,
    I am new to BI and am going to work on a support project, mainly monitoring the process chains and data loading.
    So could anyone please tell me what things I have to keep in mind for data loading, and share some real-time scenarios and issues regarding data loading?
    <removed by moderator>.
    <removed by moderator>.
    Thanks & Regards,
    San
    Edited by: Siegfried Szameitat on Nov 17, 2008 12:27 PM

    Hi,
    Take a look at the links below.
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/08f1b622-0c01-0010-618c-cb41e12c72be
    Data load errors - basic checks
    Process Chain Errors
    Regards.

  • Query Performance and Data Loading Performance Issues

    What query performance issues do we need to take care of? Please explain and let me know the relevant transaction codes. This is urgent.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates (see the sketch after this list). Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
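    To make point 11 concrete, here is a minimal before/after sketch for a transfer or update rule routine. DATA_PACKAGE is the standard internal table in such routines, but the lookup table ZMATERIAL_ATTR and the fields MATNR/MATKL are invented purely for illustration:

      * Slow pattern: one SELECT SINGLE per record of the data package.
      FIELD-SYMBOLS <ls_rec> LIKE LINE OF data_package.

      LOOP AT data_package ASSIGNING <ls_rec>.
        SELECT SINGLE matkl FROM zmaterial_attr
          INTO <ls_rec>-matkl
          WHERE matnr = <ls_rec>-matnr.
      ENDLOOP.

      * Faster pattern: one array SELECT into an internal table,
      * then in-memory lookups while processing the package.
      DATA: lt_attr TYPE SORTED TABLE OF zmaterial_attr
                    WITH UNIQUE KEY matnr,
            ls_attr TYPE zmaterial_attr.

      IF data_package[] IS NOT INITIAL.
        SELECT matnr matkl FROM zmaterial_attr
          INTO CORRESPONDING FIELDS OF TABLE lt_attr
          FOR ALL ENTRIES IN data_package
          WHERE matnr = data_package-matnr.
      ENDIF.

      LOOP AT data_package ASSIGNING <ls_rec>.
        READ TABLE lt_attr INTO ls_attr WITH TABLE KEY matnr = <ls_rec>-matnr.
        IF sy-subrc = 0.
          <ls_rec>-matkl = ls_attr-matkl.
        ENDIF.
      ENDLOOP.

    The second variant hits the database once per data package instead of once per record, which is exactly the kind of change that brings routine run-times down on large loads.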
    Hope it Helps
    Chetan
    @CP..

  • Data Load Issue "Request is in obsolete version of DataSource"

    Hello,
    I am getting a very strange data load issue in production. I am able to load the data up to the PSA, but when I run the DTP to load the data into 0EMPLOYEE (master data object) I get the message below:
    Request REQU_1IGEUD6M8EZH8V65JTENZGQHD not extracted; request is in obsolete version of DataSource
    The request REQU_1IGEUD6M8EZH8V65JTENZGQHD was loaded into the PSA table when the DataSource had a different structure to the current one. Incompatible changes have been made to the DataSource since then and the request cannot be extracted with the DTP anymore.
    I have taken the following actions:
    1. Replicated the DataSource
    2. Deleted all requests from the PSA
    3. Activated the DataSource using RSDS_DATASOURCE_ACTIVATE_ALL
    4. Re-transported the DataSource, transformation, and DTP
    Still getting the same issue.
    If you have any idea please reply asap.
    Samit

    Hi
    Generate your DataSource in R/3, then replicate it and activate the transfer rules.
    Regards,
    Chandu.

  • Purchase order delivery schedule tab, no goods issue date, loading date?

    Hi Experts
    We found some purchase orders where the PO item has no goods issue date, loading date, and so on.
    The material has an ATP check set on the MRP 3 view. What could possibly be wrong? Is something wrong with the material setup, or something else?
    Thanks
    Alice

    Hi
    Thanks a lot for your help. You mentioned that the "SD delivery of the sending plant is created and processed."
    Do you mean that in the material master, the Sales view's delivering plant will impact this PO's GI date and loading date? Could something be wrong there?
    Thanks
    Alice
