Unexpected query results during large data loads from BCS into BI Cube

Gurus
We have had an issue occur twice in the last few months, and it is causing our business partners real pain. When they send a large load of data from BCS to the real-time BI cube, the queries show unexpected results. We have the queries enabled to report on yellow requests, and that works fine; the issue seems to occur while the system is closing the current request and opening the next one. Has anyone encountered this issue, and if so, how did you fix it?
Alex

Hi Alex,
There is not enough information to judge. BI queries in BCS may be built on different structures of real-time, basic, and virtual cubes and MultiProviders:
http://help.sap.com/erp2005_ehp_02/helpdata/en/0d/eac080c1ce4476974c6bb75eddc8d2/frameset.htm
In your case, the most likely cause is a flawed design of the reporting structures.

Similar Messages

  • Data loading from flat file to cube using BW 3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for data loading from a flat file to a cube using BW 3.5. Please.

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
    1. Select the application components in which you want to create the DataSource and choose Create DataSource.
    2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
    3. Go to the General tab page.
       a. Enter descriptions for the DataSource (short, medium, long).
       b. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
       c. Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed, it is not generated in a typed structure but with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
    4. Go to the Extraction tab page.
       a. Define the delta process for the DataSource.
       b. Specify whether you want the DataSource to support direct access to data.
       c. Note that real-time data acquisition is not supported for data transfer from files.
       d. Select the adapter for the data transfer. You can load text files or binary files from your local workstation or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
       f. Depending on the adapter and the file to be loaded, make further settings.
    For binary files:
    Specify the character record settings for the data that you want to transfer.
    For text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    For example, suppose you chose the ";" character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45". (See the sketch after this procedure for how a parser applies these rules.)
    In a text editor (for example, Notepad), check the data separator and the escape character currently used in the file. These depend on the country version of the file.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, they are displayed as hexadecimal code after the entries have been checked. A two-character entry for a data separator or an escape character is always interpreted as a hexadecimal entry.
       g. Make the settings for the number format (thousands separator and the character used to represent a decimal point), as required.
       h. Make the settings for currency conversion, as required.
       i. Make any further settings that depend on your selection, as required.
    5. Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
       a. Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
       b. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
    6. Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
       a. To define a field, choose Insert Row and specify a field name.
       b. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
       c. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
       d. Change the data type of the field if required.
       e. Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
       f. Specify whether lowercase is supported.
       g. Specify whether the source provides the data in the internal or external format.
       h. If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
       i. If required, specify a conversion routine that converts data from an external format into an internal format.
       j. Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
       k. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
       l. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
    7. Check, save and activate the DataSource.
    8. Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For more info: http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
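    To make the data separator and escape-character rules from step 4f concrete, here is a minimal sketch in plain Python (not SAP code); it assumes ";" as the data separator and the double quote as the escape character, as in the example above:
      # Minimal sketch (not SAP code): how a CSV parser applies the data
      # separator and escape character described in step 4f.
      import csv
      import io

      # ';' is the data separator, '"' is the escape character
      sample = io.StringIO('"12;45";100\n12"45;200\n')

      for row in csv.reader(sample, delimiter=';', quotechar='"'):
          print(row)

      # ['12;45', '100']  escape characters enclose the value, so ';' is kept
      # ['12"45', '200']  escape character inside the value stays a literal part of it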

  • Data load from BPC into BI 7 cubes for Bex reporting

    Hi,
    Can we pull BPC planning data into the BI cubes so that BEx reports can be created for combined data from BPC cubes and BI cubes?
    Also, can we create BEx reports on BPC cubes just like we create them for BI reporting cubes?
    Let me also give an example of the scenario we face.
    We have actuals data in a BI 7 basic cube (the Actuals cube). Planning is done in BPC, so I understand that the planned data gets stored automatically in a BPC cube in the BI 7 system. I want to load the data from this BPC cube into the BI Actuals cube and create a combined BEx report to show actuals and plan data.
    Please let me know.
    Thanks,
    Archana

    As of now, if you report data in the BPC7NW cubes through BEx, you may not get the same result as you get from BPC reports. The reason is that BEx won't do the same calculations (for example, sign reversals) that the BPC client would do. In addition, you won't be able to report the results of dimension member formulas when you report through BEx. If you have to get data out of BPC cubes into other cubes, you can go through the SQE (shared query engine); that way, your results will match what you get in BPC reports. Once you get data out through SQE into other cubes, you can use BEx to report on it if you wish.
    More functionality will be available in the near future to accomplish BEx reporting for BPC data.
    Regards
    Pravin

  • Need to help in initial data loading from ISU into CRM

    One of our clients has the following requirement:
    All applicable ISU data (BP, BA, the appropriate technical data, contracts, products, product configuration and their corresponding price keys and price amounts) should be loaded into CRM as part of the initial load.
    ECRM_GENERATE_EVERH exists in ISU, but its documentation is not available.
    Is there any provision in SAP for this, such as a report, RFC, or function module?
    I would appreciate a quick reply with a positive and appropriate solution. mailto [email protected]

    Got my answer. We can clear the data using MDX.

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    Use Oracle's free SQL Developer:
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
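    If SQL Developer is not an option and a scripted extract is acceptable, a batch-fetching approach keeps memory flat and is usually fast enough for a few million rows. A minimal sketch, assuming the python-oracledb driver and placeholder connection details and table name:
      # Minimal sketch: stream a large Oracle table into a local CSV in batches.
      # Connection details and the table name are placeholders.
      import csv
      import oracledb  # pip install oracledb

      conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb")
      cur = conn.cursor()
      cur.arraysize = 10_000  # large fetch batches cut round trips, the main speed lever

      cur.execute("SELECT * FROM scott.emp")  # substitute your table
      with open("extract.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(col[0] for col in cur.description)  # header row
          while True:
              rows = cur.fetchmany()
              if not rows:
                  break
              writer.writerows(rows)

      cur.close()
      conn.close()
    Raising arraysize is what makes this fast: the driver pulls thousands of rows per round trip instead of one.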

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement to increase the number of parallel processes during data loads from R/3 to BI. I want to modify this for a particular DataSource and check the result. Can experts provide helpful answers to the following questions?
    1) When a load is taking place or has taken place, where can we see how many parallel processes that particular load used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting experts' help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information provided. The following was my observation.
    From the posts in this forum, I was given to understand that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I did so and found that there was no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the point above, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
    Actually, we wanted to increase the package size, but we failed because I could not understand which values have to be maintained. Can you explain in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • How to find the data loaded from r/3 to bw

    Hi,
    How can I verify that the data loaded from R/3 to BW is correct? I am not able to find which field in the query is connected to which field in R/3, i.e., where I am getting the data from in R/3. Is there any process to find which field and table the data is coming from? Please help.
    Thanks in advance to you all.

    Hi Veda ... the mapping between R/3 fields and BW InfoObjects should take place in the Transfer Rules. Other transformations could take place in the Update Rules.
    So you could proceed this way: look at the InfoProvider data model and see if the query performs any calculation (even with virtual key figures / characteristics). Then go back to the Update Rules and search for other calculations / transformations. Finally, there are the Transfer Rules and possibly DataSource / extraction enhancements.
    As you can easily see, there are many points where you have to look ... it's quite complex work, but very useful.
    Once you have identified all mappings / transformations, see if the BW data matches R/3 (considering the calculations ...).
    Good job
    GFV

  • How to Improve large data loads ?

    Hello Gurus,
    Large data loads at my client take long hours. I have tried using the recommendations from various blogs and SAP sites for the control parameters for DTPs and InfoPackages. I need some viewpoints on which parameters can be checked in the Oracle and Unix systems. I would also need some insight on:
    1) How to clear log files
    2) How to clear any cached up memory in SAP BW.
    3) Control parameters in Oracle and Unix for any improvements.
    Thanks in advance.

    Hi
    I think that work should be performed by the Basis team.
    2) You can delete the cache memory using transaction RSRT: select the cache monitor and then delete.
    Thanks & Regards,
    RaviChandra

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MAXL could generate multiple error files during parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this: rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    I also faced a situation where, if I hard-code the error file name as above, it gives me the error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt", Essbase actually produces multiple error files and appends a number?
    Tim.
    Yes, it's appending the way I said.
    Out of interest, though - why do you want to do this? The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have 6-7 rules files, using which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error record.
    Regardless, the only way I can think of to have total control over the error file name is to use the 'manual' parallel load approach. Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer. Then commit them all together. This gives you most of the parallel load benefit, albeit with more complex scripting.
    Even I had the same thought of calling multiple instances of MaxL using a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Did anyone try it before?
    Thanks,
    DS
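    To elaborate on the 'manual' parallel load idea: the gist is one MaxL session per rules file, each naming its own error file, then a single commit of all buffers at the end. A minimal sketch (in Python rather than shell, with placeholder script names; essmsh is the MaxL shell):
      # Minimal sketch of the manual parallel load: one MaxL session per rules
      # file, each loading into its own buffer and writing its own error file
      # (e.g. 'on error write to "rule1.err"' inside rule1.msh). All script
      # names are placeholders.
      import subprocess

      scripts = ["rule1.msh", "rule2.msh", "rule3.msh"]

      # Start all loads in parallel, one essmsh process per script
      procs = [subprocess.Popen(["essmsh", s]) for s in scripts]

      # Wait for every load to finish before committing
      for p in procs:
          p.wait()

      # A final script commits all load buffers in a single transaction
      subprocess.run(["essmsh", "commit_buffers.msh"], check=True)
    The complexity mentioned above is mostly in the scripting: each .msh file must target its own buffer_id, and a failure in any one session has to be detected before the combined commit.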

  • Data loading from DSO to Cube

    Hi,
    I have a question,
    In the book TBW10 I read about the data load from DSO to InfoCube:
    "We feed the change log data to the InfoCube; 10, -10, and 30 add up to the correct value of 30."
    My question is: the cube already has the value 10, so if we are sending the values 10, -10 and 30 (the delta), shouldn't the total be 40 instead of 30?
    Please can someone explain this to me?
    Thanks

    No, it will not be 40.
    It will be 30 only.
    Since the cube already has 10, the before image will nullify it by sending -10, and then the correct value of 30 will be added by the after image.
    So it will be like this: 10 - 10 + 30 = 30.
    Thank you.
    Regards,
    Vinod
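    A toy illustration of Vinod's arithmetic, assuming nothing beyond the numbers in the thread:
      # The cube already holds 10 from the first load. The change log then
      # delivers a before image (-10) and an after image (+30); summation in
      # the cube yields the corrected value.
      cube_value = 10
      delta_from_change_log = [-10, 30]  # before image, after image
      cube_value += sum(delta_from_change_log)
      print(cube_value)  # 30, not 40: the -10 cancels the original 10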

  • Error in data loading from 3rd party source system with DBCONNECT

    Hi,
    We have just finished an upgrade of SAP BW 3.10 to SAP NW 7.0 EHP1.
    After the upgrade, we are facing a problem with data loads from a third party Oracle source system using DBConnect.
    The connection is working OK and we can see the tables in the source system. But we cannot load the data.
    The error in the monitor is as follows:
    'Error message from the source system
    Diagnosis
    An error occurred in the source system.
    System Response
    Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor .
    Refer to the error message.'
    But, unfortunately, the error message has no further information.
    If we look at the job log in sm37, the job finished with the following log -                                                                               
    27.10.2009 12:14:19 Job started                                                                                00           516          S 
    27.10.2009 12:14:19 Step 001 started (program RSBATCH1, variant &0000000000119, user ID RXSAHA)                    00           550          S 
    27.10.2009 12:14:23 Start InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG                                              RSM1          797          S 
    27.10.2009 12:14:24 Element NOAUTHORITYCHECK is not available in the container                                     OL           356          S 
    27.10.2009 12:14:24 InfoPackage ZPAK_4FMNJ2ZHNNXC6HT3A2TYAAFXG created request REQU_4FMXSQ6TLSK5CYLXPBOGKF31G     RSM1          796          S 
    27.10.2009 12:14:24 Job finished                                                                                00           517          S 
    In a BW 3.10 system, there is no message related to the element NOAUTHORITYCHECK, so I am wondering if this is something new in NW 7.0.
    Thanks in advance,
    Rajib

    There are a few things that can produce errors like this:
    1. The RFC connection failed.
    2. Check the source system.
    3. Check with the Oracle consultants whether they are filling up the loads; tell them to stop.
    4. Check IDoc processing.
    5. Memory issues.
    6. Change the DataSource, then activate it and run the load again.
    Also check the RFC connection in SM59. If it is OK, then
    check SAP Note 692195 for authorization.
    Santosh

  • How can we automate the data loading from BI to BPC

    Dear Gurus,
    Thanks for looking at this thread. My question is:
    How can we load data from BI 7.0 to BPC? My environment is SAP BI 7.0 with BPC 7.5 MS version on SQL Server 2008.
    How can we automate the data loading from BI to BPC in the MS version? Is a manual flat file load mandatory in the MS version?
    Thanks in advance,
    Srinivasan.

    Here are some options:
    1) Use standard packages and schedule them:
       A) Open Hub the master data into a flat file / the BPC application server, and schedule the package Import Master Data from a Data File and other relevant packages.
    2) Use custom tasks in custom packages (SSIS):
    Procedure
    From Microsoft SQL Server Business Intelligence Development Studio, open the Microsoft SSIS folder.
    Create a new package, or select an existing package to modify.
    Choose Task -> Register Custom Task.
    In the Task Location field, browse for the target .dll file.
    Note: by default, the .dll files are stored in BPC/Websrvr/bin.
    Enter a task description, select an appropriate icon, then click OK.
    Drag the icon to the designer window. Enter data as required.
    Save the package.

  • Data load from R3 is slow and gives rfc connection error

    Hi all, are there any debug capabilities available in BI 7.0 when it comes to debugging data loads from R/3 over the RFC connection (the aleremote and bwiremote users)?
    Any ideas where I can look for a solution?
    thanks.

    Hi,
    Check the connection between R/3 and BW,
    or it may be a network problem; contact the system admin or Basis people.
    Thanks,
    dru

  • Summing up key figure in Cube - data load from ODS

    Hello Gurus,
    I am doing a data load from an ODS to a cube.
    The records in the ODS are at line-item level, and all need to be summed up at header level. There is only one key figure. The ODS has header and line-item fields.
    I am loading only the header field data (and not the item-field data) from the ODS to the cube.
    I am expecting only one record in the cube (covering all the item-level records) with the key figure summed up, but I couldn't see that.
    Can anyone please explain how to achieve it?
    I promise to reward points.
    =====
    Example to elaborate my point.
    In ODS
    Header-field  item-field   quantity
    123                301          10
    123                302           20
    123                303           30
    Expected record in Cube
    Header-field       Quantity
       123                    60  
    ====================
    Regards,
    Pramod.

    Hello Oscar and Paolo,
    Thanks for the reply. I am using BW 7.0.
    Paolo suggested:
    >>If you don't add the item number to the cube and put the quantity as addition in the update rules from the ODS to the cube, it works.
    I did that; still I get 3 records in the cube.
    Oscar suggested:
    >>What kind of aggregation do you have for your key figure in the update rules (update or no change)?
    It is "summation", and it cannot be changed (or at least I do not know how to change it).
    There are other dimensions in the cube which correspond to field(s) in the ODS.
    But I just mentioned these two (i.e. header and item level) for simplicity.
    Can you please help?
    Thank you.
    Pramod.
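    For what it's worth, here is a toy sketch of the aggregation being discussed, using the thread's example numbers: the key figure only collapses to one record per header when every item-level characteristic is left out of the target.
      # Toy sketch: summation only yields one record per header when the
      # item-level characteristic is dropped from the target key.
      from collections import defaultdict

      ods_records = [  # (header, item, quantity), as in the example above
          ("123", "301", 10),
          ("123", "302", 20),
          ("123", "303", 30),
      ]

      cube = defaultdict(int)
      for header, item, quantity in ods_records:
          cube[header] += quantity  # item number intentionally not part of the key

      print(dict(cube))  # {'123': 60}
    So if three records still appear, some characteristic that differs per line item (the item number itself, or another item-level field among the "other dimensions" Pramod mentions) is most likely still in the cube.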

  • BPC:: Master data load from BI Process chain

    Hi,
    We are trying to automate the master data load from BI.
    Right now we are using a package with:
    PROMPT(INFILES,,"Import file:",)
    PROMPT(TRANSFORMATION,%TRANSFORMATION%,"Transformation file:",,,Import.xls)
    PROMPT(DIMENSIONNAME,%DIMNAME%,"Dimension name:",,,%DIMS%)
    PROMPT(RADIOBUTTON,%WRITEMODE%,"Write Mode",2,{"Overwrite","Update"},{"1","2"})
    INFO(%TEMPNO1%,%INCREASENO%)
    INFO(%TEMPNO2%,%INCREASENO%)
    TASK(/CPMB/MASTER_CONVERT,OUTPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_CONVERT,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_CONVERT,TRANSFORMATIONFILEPATH,%TRANSFORMATION%)
    TASK(/CPMB/MASTER_CONVERT,SUSER,%USER%)
    TASK(/CPMB/MASTER_CONVERT,SAPPSET,%APPSET%)
    TASK(/CPMB/MASTER_CONVERT,SAPP,%APP%)
    TASK(/CPMB/MASTER_CONVERT,FILE,%FILE%)
    TASK(/CPMB/MASTER_CONVERT,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,INPUTNO,%TEMPNO1%)
    TASK(/CPMB/MASTER_LOAD,FORMULA_FILE_NO,%TEMPNO2%)
    TASK(/CPMB/MASTER_LOAD,DIMNAME,%DIMNAME%)
    TASK(/CPMB/MASTER_LOAD,WRITEMODE,%WRITEMODE%)
    But we need to include these tasks in a BI process chain.
    How can we add the INFO statements to a process chain?
    And how can we declare the variables?
    Regards,
    EZ.

    Hi,
    I have followed your recommendation, but when I try to use the process /CPMB/MASTER_CONVERT with the parameter TRANSFORMATIONFILEPATH and the path of the transformation file as its value, I run into a new problem. The value can only hold 60 characters, and my path is longer:
    \ROOT\WEBFOLDERS\APPXX\PLANNING\DATAMANAGER\TRANSFORMATIONFILES\trans.xls
    How can we enter this path?
    Regards,
    EZ.
