Query on Data loading

Hi All,
I am loading data to an ODS and the load is still in progress (the status is showing yellow). While it is loading, can I schedule the load again? The previous request is still in yellow status. What will happen if I schedule the data load again?
Please suggest.
Thanks,
Jelina.

Try to understand the loading process to an ODS:
Extract structure
PSA
Transfer structure
Communication structure
Through update rules to the new data table
Activation process
Data updated in the active and change log tables.
In your case, the second load will be processed up to the active table. If the first load is still yellow at the time the second load activates, the second load will fail, because the active table is locked by the first load.
If the first load has completed, the second will go through successfully.
Hope this helps.
Regards,
Viresh

Similar Messages

  • Query in data Loading.

    Hi All,
I have a query on data loading.
Right now the delta is running in BI production. Suppose some records are missing in the ODS; I can then use selection conditions in the InfoPackage and select the 'Repair Full Request' flag.
Query 1: Instead of running a full repair request, will the system allow a full load into the ODS?
Query 2: Instead of running a full repair request, will the system allow a full load into the cube?
Query 3: Can I run the delta load once the full load is done in the cube and the ODS?
    Thanks,
    Jelina

    Hi there,
The full repair is nothing more than a full load, with the difference that it doesn't interfere with your current deltas, i.e. you can run a full repair without spoiling your deltas.
So instead of a full load, why not use the full repair, since it is still a full-load extraction?
Also keep in mind: with a full repair into the ODS, if the ODS is set to overwrite, there is no problem bringing in data that already exists in the ODS; it will simply overwrite the same data. For the cube it is different: a full load adds to the data that already exists, so in that case you will have to selectively delete from the cube the data you are bringing in again with the full repair, so that you don't end up with duplicates.
In all these cases the full repair works without any problem, and after the full repairs the deltas work without any issue.
    Diogo.

  • Query regarding data loading from xls

    Hi
I want to read data (integers, only one column) from an xls file. I do not want to load it into a table, otherwise I could have tried using the loader. I have lakhs of rows in the Excel sheet, and I need to pick them up in a query. I also cannot create a table, as I am working on production. Is there any way I can pick the data directly from the Excel sheet? The volume is large, so I cannot keep the values in an IN clause either.

Lakhs of rows! You do realise that an Excel (.xls) spreadsheet is limited to 65,536 rows? I'm right in thinking 1 lakh = 100,000, aren't I?

  • Data load from R/3 (problem)

    Hello all,
    I have a query regarding data load.
The business scenario is:
1) I ran an init delta two months ago and started deltas at that time.
2) At some point I needed to delete all the data.
3) Present state of RSA7 in R/3: delta queue: 0 records; repeat delta: 53,000 records.
4) I reloaded all data to date in BW through a process chain (full load).
5) Now my questions:
Do I need to do an init delta again and then start deltas?
Will there be any missing data?
What will happen to the repeat delta queue data?
Please tell me the steps I should take.
    Thanks...

    hello,
One more thing:
I have to load fiscal-period-wise.
April 2007 is 001.2007, and 013.2006 is a special period.
I have now loaded up to March 2007 (012.2006).
So before starting deltas, is the following sequence for loading the data correct?
1) 013.2006 to 016.2006 (data for these periods may or may not exist)
2) 001.2007 to 002.2007 (current open month), as a full load
3) Then init delta, then delta?
Please help me; it's urgent.
    Thanks....

  • Query performance and data loading performance issues

What are the query performance issues we need to take care of? Please explain and let me know the transaction codes.
What are the data loading performance issues we need to take care of? Please explain and let me know the transaction codes.
Will reward full points.
Regards,
Guru

BW back end - some tips:
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
9)Build secondary indexes on the tables for the selection fields; this optimizes the tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations (see the sketch after this list). When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
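To make tip 11 concrete, here is a minimal sketch of the array-read pattern in a 3.x update rule start routine. The lookup table (MARA), the fields (MATERIAL, MATKL) and the DATA_PACKAGE layout are illustrative assumptions, not taken from this thread:
  " Sketch: one array read per data packet instead of one
  " SELECT SINGLE per record. All names are assumptions.
  TYPES: BEGIN OF ty_mara,
           matnr TYPE mara-matnr,
           matkl TYPE mara-matkl,
         END OF ty_mara.
  DATA: lt_mara TYPE STANDARD TABLE OF ty_mara,
        ls_mara TYPE ty_mara.

  IF NOT DATA_PACKAGE[] IS INITIAL.
    SELECT matnr matkl FROM mara
      INTO TABLE lt_mara
      FOR ALL ENTRIES IN DATA_PACKAGE
      WHERE matnr = DATA_PACKAGE-material.
    SORT lt_mara BY matnr.
  ENDIF.

  " Enrich each record from the buffered internal table.
  LOOP AT DATA_PACKAGE.
    READ TABLE lt_mara INTO ls_mara
         WITH KEY matnr = DATA_PACKAGE-material
         BINARY SEARCH.
    IF sy-subrc = 0.
      DATA_PACKAGE-matkl = ls_mara-matkl.
      MODIFY DATA_PACKAGE.
    ENDIF.
  ENDLOOP.
Compared with a SELECT SINGLE inside the loop, this touches the database once per packet, which is what tip 11 means by buffers and array operations.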
    Hope it Helps
    Chetan

  • Data loaded to Power Pivot via Power Query is not yet supported in SSAS Tabular Cube

Hello, I'm trying to create an SSAS Tabular cube from data loaded to Power Pivot via Power Query (SAP BOBJ connector), but it looks like this is not yet supported.
Has anyone tried this before? Is there any workaround that makes sense?
The final goal is to pull data from SAP BW and a BO universe (using Power Query) and to be able to create an SSAS Tabular cube.
    Thanks in advance
    Sebastian

    Sebastian, 
Depending on the size of the data from Analysis Services, one workaround could be to import the data into Excel, make an Excel table from it, and then use the Excel table as a data source.
    Reeves
    Denver, CO

  • Reference date of last data load in query

Is it possible to reference the date of the last data load in a query via a user exit / formula variable or some other way?
We have a requirement to display data in a query based on when the data was last loaded. If the data load has not occurred this month, then only records from the previous month should come into the result set.
    Has anyone ever done anything like this? Or can you think of a way using SY-DATUM in the update rules that would allow you to segment data on the front end?
    Thanks!
    Adam

There would definitely be a better way.
For the short term, you can use a date field in the cube which is filled in your update rule with sy-datum, as sketched below.
You can then filter on it when running the query.
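A minimal sketch of such a routine, assuming a custom date InfoObject (here called ZLOADDATE, an assumption) as the target of a 3.x update rule:
  " Hypothetical update rule routine for a load-date
  " characteristic (e.g. ZLOADDATE): stamp each record
  " with the date on which the load ran.
  RESULT     = sy-datum.  " system date at load time
  RETURNCODE = 0.         " record is OK
  ABORT      = 0.         " do not abort the package
In the query you can then restrict on ZLOADDATE, for example with a variable that derives the last loaded date.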
Alternatively, try using the InfoPackage load tables.
If you are on 2004s, try any of the RSDDSTAT* tables.
    Mey

  • Cannot load query "ZBM_M020_Q001" (data provider "DP_1": {2})

I'm getting this error message when I try to run any of my bookmarks in this new BI system.
My URL looks like:
    http://<myserver>:<myport>/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=REP_20090803204848&BOOKMARK=4EPI80Q37TF1VDU8EBTPT8RSY
    I get this message: Cannot load query "ZBM_M020_Q001" (data provider "DP_1": )
When I click on "Information" I see the message: "Document class parameters are incomplete".
    If I leave out the bookmark like:
    http://<myserver>:<myport>/irj/servlet/prt/portal/prtroot/pcd!3aportal_content!2fcom.sap.pct!2fplatform_add_ons!2fcom.sap.ip.bi!2fiViews!2fcom.sap.ip.bi.bex?QUERY=REP_20090803204848
the query executes successfully.
If I create a new bookmark using this query and then execute it, it runs fine.
This new BI system was created from production, and somehow these bookmarks were not activated.
I looked in RSWR_DATA and I can see that all of them are type 'B', bookmarks.
Does anyone know how this issue can be resolved so that these bookmarks are usable?
    Mike

No, it does not work. The bookmark is corrupted with DP_1.
I agree that the URL you have proposed should work without the query, though. In fact you can put anything you want for the query and it will still work, because the relevant information is stored in the bookmark; e.g. the following works too:
    ?query=xyz&BOOKMARK=4EPI80Q37TF1VDU8EBTPT8RSY
I include the query in the URL so I can remember which query the bookmark belongs to.
This PM we should have the fix transported to that system, and then I can adjust the bookmark.
    Thanks for your reply!
    Mike

  • Query execution during data loads (extraction)

I think BI 7.0 permits this, but I would still like to confirm with the gurus.
Can users continue to access data or execute queries while an extraction is going on? Can we load data during query execution?
What are the pros and cons of doing that?
    Always appreciative of your help.
    Suresh

    Hi,
Query execution will not really hamper data loading, or vice versa. But freshly loaded data will not be available for reporting until it is activated in the InfoProvider. Also, in the case of a cube, if a 'delete overlapping requests' step is to be performed, the report could show erroneous-looking data until that step runs - that is, between the time the new load has come in and the time the old request is deleted. That is why loads are best scheduled when users are not working on the system.

  • Query has to display completed quarters and data-loaded month

    Hi All,
I have two issues in BEx.
1. I need to display the data-loaded month (0CALMONTH):
Example: Jan 2007 data was loaded in May 2007; the query has to display 3 months: Jan 2007, Dec 2006, Nov 2006.
2. Completed quarters: if we are in the middle of quarter 3, the query has to display quarter 1 and quarter 2 (completed); nothing should be displayed for quarter 3.
    Thanks in advance for your inputs.

1. You mentioned the data-loaded month, but are you referring to the calendar month the data belongs to? Do you want the display based on the calendar month the data belongs to (Jan 2007), the loaded month (May 2007), or the last loaded month?
2. Create a user exit variable on the quarter or month InfoObject. In the user exit code, check which quarter the current month belongs to; if we are in the middle of the 3rd quarter, pass quarters 1 and 2 (or months 1 to 6). A sketch follows below.
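A rough sketch of such an exit, assuming it lives in the customer exit include ZXRSRU01 (EXIT_SAPLRRS0_001) and fills a hypothetical interval variable ZCOMPL_QTR on 0CALQUARTER, whose values are assumed to be in YYYYQ format:
  DATA: l_s_range LIKE LINE OF e_t_range,
        l_quarter TYPE i,
        l_q(1)    TYPE n.

  CASE i_vnam.
    WHEN 'ZCOMPL_QTR'.
      IF i_step = 2.
        " Completed quarters so far this year
        " (e.g. month 08 -> 2 completed quarters).
        l_quarter = ( sy-datum+4(2) - 1 ) DIV 3.
        IF l_quarter >= 1.
          l_s_range-sign = 'I'.
          l_s_range-opt  = 'BT'.
          l_q = 1.
          CONCATENATE sy-datum(4) l_q INTO l_s_range-low.
          l_q = l_quarter.
          CONCATENATE sy-datum(4) l_q INTO l_s_range-high.
          APPEND l_s_range TO e_t_range.
        ENDIF.
      ENDIF.
  ENDCASE.
In the middle of quarter 3 this passes the interval quarter 1 to quarter 2; in quarter 1 nothing is passed, so the query shows no completed quarter.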
Hope this helps.
    Nagesh Ganisetti.

  • Latest PowerQuery issues with data load to data models built with older version + issue when query is changed

We have a tool built in Excel + Power Query version 2.18.3874.242, 32-bit (no Power Pivot), using data load to the data model (not to the workbook). There are data filters linked to Excel cells, inserted into the OData query before the data is pulled.
The Excel tool uses organisational credentials to authenticate.
System config: Win 8.1, Office 2013 (32-bit).
The tool runs for all users as long as they do not upgrade to Power Query 2.20.3945.242 (32-bit).
Once upgraded, users can no longer get the data to load to the model. Data still loads to the workbook, but the model breaks down. Resetting the load to the data model erases all measures.
    Here are the exact errors users get:
    1. [DataSource.Error] Cannot parse OData response result. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
    2. The Data Model table could not be refreshed: There isn't enough memory to complete this action. Try using less data or closing other applications. To increase memory available, consider ......

    Hi Nitin,
    Is this still an issue? If so, can you kindly provide the details that Hadeel has asked for?
    Regards,
    Michael Amadi

  • Data load failure with datasource Infoset Query

    Dear Experts,
I had a data load failure today, where I am getting data from a datasource built on an InfoSet query.
We had a source system upgrade, and when I checked the InfoSet query in the development source system I got the message below:
"Differences in field EKES-XBLNR
     Output length in Dictionary: 035
     Output length in InfoSet: 020"
The message said to adjust the InfoSet. I don't have authorisation to create the transport in the source system, so I requested the responsible person to adjust the InfoSet, regenerate it, and move it to the production system.
I think this will solve my problem. Please correct me if I am wrong.
    Regards,
    Sunil Kumar.B

Hi Suman,
I am still facing the problem even after adjusting the InfoSet. We are getting a short dump due to the length mismatch.
When I checked the InfoSet, we take the field XBLNR from table EKES, and the data element for the field is XBLNR_LONG (CHAR 35); but when I checked the datasource in RSA2, the data element for XBLNR shows as BBXBL (CHAR 20).
I think this is causing the problem. In SQ02 we take the field directly from the table, so how can the data element differ?
Please help me to correct this.
    Regards,
    Sunil Kumar.B

  • How to create a report in BEx based on the last data loaded in a cube?

I have to create a query with a predefined filter based on the 'latest SAP date', i.e. the user only wants to see the very latest situation from the last load. The report should show only the latest inventory stock situation from the last load. As I'm new to BEx, I have not been able to find a way to achieve this. Is there a time characteristic which holds the last update date of a cube? Please help and suggest how to achieve this.
    Thanks in advance.

Hi Rajesh,
Thanks for your suggestion.
My requirement is a little different. I built the query on a MultiProvider, and I want to see the latest record in the report based only on the latest date (not the system date) on which data was last loaded to the cube. This date (when the cube was last loaded with data) is not populated from any data source. I guess I have to add the "0TCT_VC11" cube to my MultiProvider to fetch the date when my cube was last loaded with data. Please correct me if I'm wrong.
Thanks in advance.

  • Data load to DSO

    Hi guys...
Suppose I have two datasources mapped to an InfoSource, and this InfoSource is mapped to one DSO (all objects up to the DSO are emulated from 3.x in 7.x). When I load data, I assume I have to use two InfoPackages, and I get data into the DSO in two requests. I have a few questions about this, assuming I have only these two requests in my DSO:
1. When I tried to create a query directly on the DSO in Query Designer, I could not find the InfoObject 0REQUESTID. What can I do if I want to see data request by request rather than all together?
2. Suppose the DSO gets data as below:
Fields in DSO: X1, X2, Y1, Y2, Y3 [X1, X2 are characteristics and also keys; Y1, Y2, Y3 are key figures]
Data fed by datasource 1:  X1  X2  Y1
                           a   b   10
Data fed by datasource 2:  X1  X2  Y2  Y3
                           a   b   20  30
So when I load data, I will load it in two requests, and these are the only two requests in my DSO. How will the data look in the DSO? Does it get stored in two separate rows or in a single row? How is it shown in a query result?
If the keys do not match, how will the data be shown for key figures that are not loaded by that request?
3. I know that in a DSO we have two options, overwrite/addition. How will the data load behave in the following situation:
Datasource 1 feeds this in request 1:
X1  X2  Y1
a   b   10
Datasource 2 feeds this in request 2:
X1  X2  Y1  Y2  Y3
a   b   30  40  50
How will the result be shown for the two options, addition and overwrite? Will request 2 overwrite or add to the data in Y1?
    Thanks.

1. When I tried to create a query directly on the DSO in Query Designer, I could not find the InfoObject 0REQUESTID. What can I do if I want to see data request by request rather than all together?
The request ID is only part of the new data table; after activation of your data, the request information is lost. If you want to see what is happening, load your data request by request and activate the data after each request.
2. Suppose the DSO gets data as below:
Fields in DSO: X1, X2, Y1, Y2, Y3 [X1, X2 are characteristics and also keys; Y1, Y2, Y3 are key figures]
Data fed by datasource 1:  X1  X2  Y1
                           a   b   10
Data fed by datasource 2:  X1  X2  Y2  Y3
                           a   b   20  30
So when I load data, I will load it in two requests, and these are the only two requests in my DSO. How will the data look in the DSO? Does it get stored in two separate rows or in a single row? How is it shown in a query result?
If the keys are equal, you will have only one record (one row) in your DSO.
If the keys do not match, how will the data be shown for key figures that are not loaded by that request?
Then you will have two records in your DSO.
3. I know that in a DSO we have two options, overwrite/addition. How will the data load behave in the following situation:
Datasource 1 feeds this in request 1:
X1  X2  Y1
a   b   10
Datasource 2 feeds this in request 2:
X1  X2  Y1  Y2  Y3
a   b   30  40  50
How will the result be shown for the two options, addition and overwrite? Will request 2 overwrite or add to the data in Y1?
If you choose overwrite, Y1 will be 30; if you choose addition, it will be 40.
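To make the two modes concrete, here is a toy sketch (plain ABAP on an internal table, not SAP standard code) of what happens to the key (a, b):
  " Toy simulation of the two DSO update modes.
  TYPES: BEGIN OF ty_rec,
           x1(1) TYPE c,   " key
           x2(1) TYPE c,   " key
           y1    TYPE i,
           y2    TYPE i,
           y3    TYPE i,
         END OF ty_rec.
  DATA: lt_active TYPE STANDARD TABLE OF ty_rec,
        ls_rec    TYPE ty_rec.

  " Request 1: key (a, b), Y1 = 10.
  ls_rec-x1 = 'a'. ls_rec-x2 = 'b'. ls_rec-y1 = 10.
  COLLECT ls_rec INTO lt_active.

  " Request 2: key (a, b), Y1 = 30, Y2 = 40, Y3 = 50.
  CLEAR ls_rec.
  ls_rec-x1 = 'a'. ls_rec-x2 = 'b'.
  ls_rec-y1 = 30. ls_rec-y2 = 40. ls_rec-y3 = 50.

  " Addition behaves like COLLECT: numeric fields of the row
  " with the same character key are summed
  " -> Y1 = 40, Y2 = 40, Y3 = 50.
  COLLECT ls_rec INTO lt_active.

  " Overwrite behaves like MODIFY TABLE: the new record
  " replaces the old one for the same key
  " -> Y1 = 30, Y2 = 40, Y3 = 50.
  " MODIFY TABLE lt_active FROM ls_rec.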

  • Data loading after field enhancement.

    Dear all,
We are using BI 7.0, and in one of our datasources a new field has to be enabled. Our people are under the impression that, without downtime, the previous data available in the target and the PSA can get values for the new field as well.
I do not see how this is possible, so I need the experts' advice. Can you kindly answer the following questions:
1) Can the enhancement be done to the datasource without deleting the setup table?
2) Can the delta queue stay as it is, without stopping the delta pull process, i.e. the process chain and the background jobs?
3) If the field is enhanced, can values for the field be loaded for all the data previously loaded to the PSA and the target?
I request the experts to provide an apt solution so that the field enhancement can take place without disturbing any of the data loads.
I went through the forum posts and found something about export datasources and loop-back principles; these suggest that my requirement is possible.
I do not know the process, though. Can the experts provide a step-by-step suggestion for my query?
    Regards,
    M.M

    Hello Magesh,
1) The enhancement cannot be done if there are records in the setup tables.
2) When an enhancement is done, the delta queue also needs to be empty, so you will have to stop the collective update jobs, lock the system, and empty the delta queue by scheduling the delta InfoPackage twice. Only then will the transports to production be successful.
3) Until you fill the setup tables again and do historical loads, the old values for the newly added field will not appear.
If you just do an init without data transfer and schedule new delta loads, the newly added field will contain values from that day onward, and changes to them; data previously loaded to BW will remain as it is. To get values for the newly added field, you need to load the history through full repair loads after filling the setup tables first.
Follow these steps to load only new values for the added field:
1) Lock the system.
2) Schedule the collective update job through job control so that all records are in the delta queue and no records or LUWs are left in LBWQ for that datasource.
3) Schedule the delta InfoPackage twice so that even the repeat delta queue is empty.
4) Do the transports, then delete the old init and do a new init without data transfer.
5) Schedule the normal delta.
To have history for the added field:
1) Lock the system.
2) Delete the old init and clear the LUWs from LBWQ.
3) Do the transports.
4) Fill the setup tables and do an init without data transfer for the datasource.
5) Unlock the system.
6) Do the full repair loads to the BW data targets.
7) Schedule the delta loads.
    Thanks
    Ajeet
