Problem during Data Warehouse Loading (from staging table to Cube)

Hi All,
I have created a staging module in OWB to load my flat files into my staging tables. I have also created a warehouse module to load my staging tables into the dimensions and cube that I have created.
My scenario:
I have a temp_table_transaction into which my flat files have been loaded. This table contains 168,271,269 records loaded from the flat files.
I have created a mapping in OWB that joins temp_table_transaction with other tables, applies some expressions and conversion functions, and fills a new table called stg_tbl_transaction in my staging module. Running this mapping takes 3 hours and 45 minutes with this mapping configuration:
Default operating mode in the runtime parameters of the mapping configuration = Set based
My dimensions filled correctly, but I have two problems when I want to transfer my staging table to my cube:
Problem #1:
I have created a cube called transaction_cube with OWB, and it generated and deployed correctly.
I have created a map to fill my cube from the 168,271,268 records in the staging table stg_tbl_transaction and deployed it to the server (my cube map's operating mode is set based).
But the map had not completed after 9 hours of running, and I was forced to cancel it by killing its sessions. I want to know whether this much time is acceptable for loading this volume of data, or whether we should expect to spend even more. Please let me know if anybody has seen this issue.
Problem #2:
To test my map, I created a map configured as set based in the operating modes, selected stg_tbl_transaction (168,271,268 records) as the source, and created another table to load the data into. I wanted to test the time such a simple map should take, but after 5 hours my data had not loaded into the new table. I want to know where my problem is. Should I have set something in the map configuration, or something else? Please guide me on these problems.
CONFIGURATION OF MY SERVER:
I run OWB on a two-socket Xeon 5500-series server with 192 GB of RAM and disks in a RAID 10 array.
Regards,
Sahar
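
As a point of comparison, here is a minimal sketch of the kind of set-based, direct-path SQL you would want OWB to generate for a load of this size. This is an illustrative assumption, not OWB's actual generated code, and the fact/dimension table and column names are hypothetical:

    -- Enable parallel DML for the session first.
    ALTER SESSION ENABLE PARALLEL DML;

    -- Direct-path (APPEND) parallel insert: bypasses the buffer cache
    -- and minimizes undo, which matters at ~168 million rows.
    INSERT /*+ APPEND PARALLEL(t, 8) */ INTO transaction_cube_fact t
    SELECT /*+ PARALLEL(s, 8) */
           d.dimension_key,      -- surrogate key from the dimension lookup
           s.amount,
           s.transaction_date
    FROM   stg_tbl_transaction s
    JOIN   transaction_dim d
           ON d.natural_key = s.natural_key;

    COMMIT;

Checking whether the SQL that OWB generates for the cube map follows this pattern (look for the APPEND hint and a parallel degree) is usually the first step when a set-based map runs for many hours.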

For all of you:
It is possible to load from an InfoSet to a cube; we did it, and it was OK.
Data really are loaded from the InfoSet (cube + master data) to the cube.
When you create a transformation under a cube, the InfoSet is proposed, and it works fine...
Now the process is no longer operational, and I don't understand why...
Loading from an InfoSet to a cube is possible; I can send you screenshots if you want...
Christophe

Similar Messages

  • Load from Staging ODS to Cube

    We have a staging ODS which has been loading to a cube via a delta mechanism. By mistake, someone carried out a full load, and we have deleted the full request. Is the delta mechanism corrupt, and how can we collect the requests which were supposed to be updated (there are no selection options in the package)? Thanks

    I faced a similar issue recently. I would:
    1) Delete the full load request from cube
    2) Uncheck the 'data mart status' of that request in the ODS
    3) Delete that request in ODS
    4) Create a new infopackage on the ODS InfoSource (8ODSXXXX) with 'init without data' option
        - before scheduling the infopackage, delete the initialization in the scheduler by going to (Scheduler/Initialization Options for Source System)
    5) Execute the package
    Now every time a new load comes into the ODS you will be all set.
    Hope this helps,
    Justin

  • Delta records are not loading from DSO to info cube

    My query is about delta loading from a DSO to an InfoCube (a filter is used in the selection).
    Delta records are not loading from the DSO to the InfoCube. I have tried all the options available in the DTP, but no luck.
    I selected "Change Log" and "Get One Request Only" and ran the DTP, but 0 records were updated in the InfoCube.
    I selected "Change Log" and "Get All New Data Request by Request", but again 0 records were updated.
    I selected "Change Log" and "Only Get Delta Once"; in that case all the delta records loaded to the InfoCube as they were in the DSO, but the load gave the error message "Lock Table Overflow".
    When I run a full load using the same filter, data loads from the DSO to the InfoCube.
    Can anyone please help me get the delta records from the DSO to the InfoCube?
    Thanks,
    Shamma

    Data loads in the case of a full load with the same filter, so I don't think the filter is the issue.
    When I follow the sequence below, I get the lock table overflow error:
    1. Full load from the active table, with or without archive.
    2. Then, with the same settings, if I run an init, the final status remains yellow, and when I change the status to green manually, it gives the lock table overflow error.
    When I change the settings of the DTP to an init run:
    1. If I select change log and get only one request and run the init, it completes successfully with green status.
    2. But when I run the same DTP for delta records, it does not load any data.
    Please help me to resolve this issue.

  • Selective Deletion and Data Load from Setup Tables for LIS Extractor

    Hi All,
    We came across a situation where one of the deltas in the PSA was missed and not loaded into the DSO. This DSO updates another cube in the flow. Since many days passed before this miss came to our knowledge, we need to selectively delete the data for those documents from the DSO and cube, and then take a full load for those documents after filling the setup table.
    Now, what would be the right approach to load this data from the setup table > DSO > cube? The change log is present for those documents, and a few KPIs in the DSO are in summation mode.
    Regards
    Jitendra

    Thanks Ajeet!
    This is the Sales Order extractor; the data got loaded to the ODS just fine, but the data comes to the ODS from a different extractor. Everything is fine in the ODS, but not in the cube. Would a full repair request versus a full load make a difference when the data is going to the cube? I thought it would matter only if I were loading to the ODS.
    What do you mean by "Even if you do a full load without any selections you should do a full repair"?
    Thanks.
    W

  • Data loading from flat file to cube using BW 3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for data loading from a flat file to a cube using BW 3.5.
    ...Please

    Hi ,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is a part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    Example: you chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets the escape characters as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45, and 12"45" is transferred as 12"45". (The same separator and escape settings appear in the SQL sketch at the end of this procedure.)
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm
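
    The header-row, separator, and escape settings from step 4f are not BI-specific. As a point of comparison, here is a hedged Oracle SQL external-table sketch (the directory object, file, and column names are hypothetical) in which the same three choices appear as access parameters:

        CREATE TABLE costs_ext (
          cost_center  VARCHAR2(10),
          amount       NUMBER
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY data_dir
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            SKIP 1                          -- header rows to ignore
            FIELDS TERMINATED BY ';'        -- data separator
            OPTIONALLY ENCLOSED BY '"'      -- escape (enclosing) character
          )
          LOCATION ('Kosten97.csv')
        );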

  • API to update value sets daily from a staging table

    My requirement is to update 3 value sets daily, based on data coming into my staging table. What API is used for this, and how do I map the API to our staging table? I am totally new to Oracle and Apps. Please help. Thanks!

    Hi,
    You could use FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE to upload them from the staging table (I suppose you mean value set values...).
    You can find sample scripts if you Google around.
    What do you mean by "how to map any API to our staging table"?
    You should do at least the following mapping (decide which column(s) in the staging table will provide this information):
    - the names of the 3 value sets you're going to update/upload (I suppose these are existing value sets, or ones which have already been created)
    - the value set values and descriptions
    Try to start with something, and if there are any issues the community can then help... but for the time being, with the description of the problem you have provided, that's the best I can do...
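
    A minimal PL/SQL sketch of the pattern being suggested: loop over the staging table and call the loader API once per row. The staging table and its columns are hypothetical, and the parameter names shown are assumptions; check the FND_FLEX_LOADER_APIS package specification in your instance for the actual signature and any further mandatory parameters:

        -- Hypothetical staging table: xx_value_set_stg(value_set_name, flex_value, description)
        BEGIN
          FOR r IN (SELECT value_set_name, flex_value, description
                    FROM   xx_value_set_stg) LOOP
            -- Parameter names are assumptions; verify against the package spec.
            FND_FLEX_LOADER_APIS.UP_VALUE_SET_VALUE(
              upload_phase        => 'BEGIN',
              flex_value_set_name => r.value_set_name,
              flex_value          => r.flex_value,
              description         => r.description);
          END LOOP;
          COMMIT;
        END;
        /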

  • URGENT! JDev 10.1.2: Problem with data control generated from session bean

    I have a problem with a data control generated from a session bean which returns a collection of data transfer objects.
    The DTOs seem to be correct. The session bean loads the data correctly, and the objects are full of data. Displaying the DTO content on the console is OK.
    When generating a data control from this session bean and associating the DTO included in the collection, only the first object level and the one-to-one DTO objects are correctly set in the data control. Objects that represent collections inside the DTO (one-to-many foreign keys) are set as collections with an iterator, but the structure of those objects is not set. I don't know how to associate this second level of collection with the DTO bean class in order to obtain the attribute definitions.
    I created a test case with the HR schema, like the hrApp demo application in the tutorial, with the departments and employees tables. I get the same problem.
    Is it a bug?
    Does a workaround exist to force the data control to understand the collection data structure?
    Help is welcome! This is urgent!

    We found the solution: assign the child DTO bean class to the node representing the iterator in the XML file corresponding to the master DTO.

  • Problem when loading from ODS to the CUBE

    Hi Experts,
    I am facing an unusual problem when loading data from an ODS to a cube.
    I have a first-level ODS where the delta postings are updated. I have checked the active table of the ODS, and the data is accurate.
    I have deleted the entire data from the cube and am trying to do a full load from the ODS to the cube.
    I am sure that when I run a full load, the data comes from the active table of the ODS.
    After the full load, the key figure values are 0. I have tried testing by loading a couple of sales documents, and still the key figure values are 0.
    I would expect the full load to pick up the data exactly as it is in the active table of the ODS.
    I also don't have any fancy routines in the update rules.
    Please help me in this regard.
    Regards
    Raghu

    Hi,
    Check whether you followed the procedure exactly. Just follow these layman's steps and let me know your issue:
    o     First prepare a flat file in a Microsoft Excel sheet for the master data and the transaction data, save it in .CSV format, and close it.
    o     Select the InfoObjects option and create an InfoArea, then an InfoObject catalog, then the characteristics and key figures. Under characteristics create 'id' with 'name' as an attribute and activate it; under key figures create 'no' and activate it.
    o     Under InfoSources, create an application component, then create an InfoSource with direct update for the master data and flexible update for the transaction data. For the master data flat file, create an InfoPackage and execute it.
    o     For the transaction data, go to InfoProvider, right-click, and select Create ODS Object. A new screen opens; give the name of the ODS and create it. Another screen opens; drag the characteristics to the key fields.
    o     Activate it; the update rules will then give an error. Go to the communication structure, add 0RECORDMODE and activate it, then go back to the update rules and activate them.
    o     Then go to the InfoSource and create an InfoPackage; set the external data, processing, update, and data targets options, then the scheduling; click Start, then open the monitor.
    o     If you cannot see the records on the next screen, go back to the InfoProvider, right-click on it, and select Activate Data in ODS; a screen opens.
    o     Select the QM status where it is yellow; double-click it, and on the screen that opens select the green option, click Continue, return to the previous screen, select the request, click Save (just underneath), and close it.
    o     Go back to the main screen, to the InfoSource, to the InfoPackage; right-click the package and schedule it. Without changing any of the tabs, click the Monitor tab directly, and refresh until the records turn green.
    o     Once it is green, click the data target and check that the file was loaded.
    o     Now go to the InfoProvider, right-click on the file name, and choose Create InfoCube. A new screen opens; give the name and activate it. Drag the key figures to one side, and the time characteristics and the other characteristics to their sides.
    o     Create the dimension tables, assign them, and activate the cube. Then right-click the file name, select Create Update Rules, select the ODS option, give the file name, and activate.
    o     Go back to the main screen and refresh. When you see the file names starting with 8 (for example 8ODSXXXX), you will know that the data mart concept is in play.
    o     In the main screen, right-click the ODS file; in the pop-up that opens, select Update ODS Data Target. Another screen opens in which we can see two options, full update and initial update; select initial update.
    o     Another screen opens with the InfoPackage (external data, data targets, scheduler); select the Later in Background option, and on the screen that opens click Immediate, then Save (it closes automatically), then click Start.
    o     Then select the Monitor option, then the contents, then the field to be selected, then the file to be executed.
    regards
    ashwin

  • Copy from staging table to multiple tables

    We are using an SSIS package to fast-load our data source into a staging table for processing.
    The reason we are using a staging table is that we need to copy the data from staging into our actual DB tables (4 in total), and the insertion order matters because we have foreign key relationships.
    When copying the data from our staging table, should we enumerate through all the records and use an insert-select for each row, or is there a more efficient way to do this?

    Our raw data source is an .mdb file and we are using SSIS to fast-load it into SQL Server; we are looking to transform the data set into 3 tables (using a stored proc):
    Site (SiteID, Name)
    Gas (ID, Date, Time, GasField1, GasField2, ..., SiteID)
    GenSet (ID, Date, Time, GenSetField1, GenSetField2, ..., SiteID)
    Each record in our raw data source contains a Name field which identifies the site. We only need to add a new site to the Site table if it does not already exist. This is already coded and working using insert-select and NOT EXISTS.
    We now need to iterate over all records, extract a subset of the data for the Gas table and a subset for the GenSet table, and link each row with the associated SiteID using the Name field.
    The insertion order should be the Site table first, then the remaining tables.
    Are you saying it would be better to transform this data using SSIS and not to use a staging table and stored procedure?
    I would prefer the staging + stored procedure approach here, as it uses set-based logic and would be faster performance-wise; a sketch follows below.
    Visakh
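
    A hedged T-SQL sketch of that set-based approach, assuming a hypothetical staging table named Staging and an identity column for Site.SiteID; each statement handles all rows at once instead of row-by-row:

        -- 1. Insert any new sites (the part already working per the question).
        INSERT INTO Site (Name)
        SELECT DISTINCT s.Name
        FROM   Staging AS s
        WHERE  NOT EXISTS (SELECT 1 FROM Site WHERE Site.Name = s.Name);

        -- 2. Load the Gas table, resolving SiteID by joining on Name.
        INSERT INTO Gas ([Date], [Time], GasField1, GasField2, SiteID)
        SELECT s.[Date], s.[Time], s.GasField1, s.GasField2, si.SiteID
        FROM   Staging AS s
        JOIN   Site    AS si ON si.Name = s.Name;

        -- 3. Load the GenSet table the same way.
        INSERT INTO GenSet ([Date], [Time], GenSetField1, GenSetField2, SiteID)
        SELECT s.[Date], s.[Time], s.GenSetField1, s.GenSetField2, si.SiteID
        FROM   Staging AS s
        JOIN   Site    AS si ON si.Name = s.Name;

    Because the Site insert runs first, the SiteID lookups in steps 2 and 3 always resolve, which satisfies the foreign key ordering requirement.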

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (ALTER SEQUENCE s CACHE 10000).
    Drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already use direct load?
    Dim
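
    A minimal SQL sketch of the load pattern Dim describes, with hypothetical object names; the same ideas (cached sequence, indexes dropped during the load, triggers disabled, direct-path insert) apply on 8i and later:

        -- Cache sequence numbers in memory to avoid a dictionary hit per row.
        ALTER SEQUENCE fact_seq CACHE 10000;

        -- Drop non-essential indexes and disable triggers before the load.
        DROP INDEX fact_tbl_ix1;
        ALTER TABLE fact_tbl DISABLE ALL TRIGGERS;

        -- Direct-path insert: bypasses the buffer cache and reduces
        -- redo/undo overhead.
        INSERT /*+ APPEND */ INTO fact_tbl
        SELECT fact_seq.NEXTVAL, s.*
        FROM   stage_tbl s;
        COMMIT;

        -- Rebuild the index and re-enable the triggers afterwards.
        CREATE INDEX fact_tbl_ix1 ON fact_tbl (dim_key);
        ALTER TABLE fact_tbl ENABLE ALL TRIGGERS;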

  • Table data not picked from child table while configuring Database Adapter

    Hi,
    we are experiencing a problem picking multiple records from a child table while polling the database.
    The details: let's say we have one parent table, Student (Id, College), and one child table (Id, Book, Author).
    Id is the primary key in the Student table and a foreign key in the child table. Let's say the data in the tables is:
    Student table: 196263, "Y.M.C.A"
    Child table: 196263, "English", "Gill"
    196263, "Maths", "Nagpal"
    Now, while polling, 1 record should be picked from the Student table and 2 different records from the child table for the scenario above
    (I have used a 1:M relationship).
    But what is happening is that from the child table only the first record gets picked up, 2 times, while the record from the Student table is picked correctly.
    Can anybody help me resolve this issue?

    Hi Gurus,
    I am facing a similar kind of issue: after polling on the header table (logical delete), the header records are updated correctly.
    But the problem starts in the child table: from the child table, only the first record is picked up 2 times (if 2 rows exist for that particular header id).
    Kindly help.
    Thanks,
    Abhishek
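
    One common cause of this symptom (an assumption here, not a confirmed diagnosis) is that the mapping has no unique key for the child rows: if Id alone is treated as the child table's primary key, both child rows share the same identity and the first one is returned twice. A sketch of the table definitions, with hypothetical names, that gives each child row a distinct composite key:

        CREATE TABLE student (
          id      NUMBER PRIMARY KEY,
          college VARCHAR2(50)
        );

        CREATE TABLE student_book (
          id     NUMBER REFERENCES student (id),  -- FK to the parent
          book   VARCHAR2(50),
          author VARCHAR2(50),
          -- Id alone repeats, so it cannot identify a row by itself.
          CONSTRAINT student_book_pk PRIMARY KEY (id, book)
        );

    Mirroring this composite key in the adapter's mapping, rather than Id alone, lets the two child rows keep distinct identities.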

  • Data warehouse Loader did not write the data

    Hi,
    I need to know which products are the most searched. I know the tables responsible for storing this information
    are ARF_QUERY and ARF_QUESTION. I already have the Data Warehouse Loader module running. If anyone knows
    why the Data Warehouse Loader did not write the data to the database, I would be grateful.
    Thanks.

    I have configured the Data Warehouse Loader and its components, and I have enabled the logging mechanism.
    I can manually pass the log files into the queue and then populate the data into the Data Warehouse database through scheduling.
    The log file data is populated into this queue through JMS message processing, and this should be automated; I am unable to
    configure this.
    Which method is responsible for adding the log file data to the loader queue, and how do I automate this?

  • Can data be loaded from Cube to DSO

    Can data be loaded from a cube to a DSO?

    Hello,
    Yes, it can be loaded... all you need to do is follow the same source-and-target procedure as in the normal flow... there is nothing different about having a cube as the data source... you can even schedule a delta from the cube to the DSO.
    Thanks
    Ajeet

  • How to delete master data of materials from sap tables

    How can we delete master data of materials from the SAP tables? It is needed now.
    I know it is not recommended, but we still need to do this. Please give me the best possible approach.
    regards,
    suneetha

    Hi,
    I would suggest that you not write your own code to delete the entries.
    BAPI_MATERIAL_DELETE would mark all the selected materials for deletion, but the materials would still exist in SAP.
    Another solution: delete a material manually and, in another session, execute transaction SM12 (lock entries). This displays the tables that get locked during the operation; you can then write your own code to delete the material numbers from all the related tables.
    Regards
    Subramanian

  • The load from ODS 0FIAP_O03 to cube 0FIAP_C03 is duplicating records

    Hello Everyone
    I need some help/advice on the following issue.
    SAP BW version: 3.5
    The delta load for ODS 0FIAP_O03 works correctly.
    The load from ODS 0FIAP_O03 to cube 0FIAP_C03 is duplicating records.
    NB: I noticed one other forum user who has raised the same issue, but the question was not answered.
    My questions are:
    1. Is this a known problem?
    2. Is there a fix available from SAP?
    3. If anyone has had this issue and fixed it, could you please share how you achieved this?
    I have possible solutions, but need to know whether there is a standard solution.
    Thank you
    Pushpa

    Hello Pushpa,
    I assume that you are using a delta load to the initial ODS and then on to the cube as well.
    If the delta is in place in both stages, there should not be any issue while sending the data to the cube.
    If you are using a FULL load, the data will normally get aggregated in the cube, since the key figures are set to Addition mode in the cube; a full load repeated on top of existing requests therefore shows up as duplicated records, as illustrated below.
    Can you post the exact error that you are facing here, as this can also be a design issue.
    Murali
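
    A small SQL illustration of that addition-mode point, using a hypothetical fact table: cube reporting sums the key figures across requests, so loading the same document twice in full mode doubles its totals instead of overwriting them.

        -- Hypothetical fact table standing in for the cube's fact table.
        CREATE TABLE fact (request_id NUMBER, doc_no VARCHAR2(10), amount NUMBER);

        INSERT INTO fact VALUES (1, '4711', 100);  -- first full load
        INSERT INTO fact VALUES (2, '4711', 100);  -- repeated full load

        -- Cube-style reporting aggregates key figures across requests:
        SELECT doc_no, SUM(amount) AS amount
        FROM   fact
        GROUP  BY doc_no;
        -- returns: 4711 | 200 (doubled, rather than the expected 100)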
