CRUD operations on Data Mart

Our data sources are as follows:
- An mdb file (downloaded every hour)
- Multiple xls files (downloaded every week)
Our aim is to develop a BI solution using BISM, Data Mart, OLAP cubes etc.
From my understanding, we do not necessarily require an OLTP DB and we can import our data directly into our data mart using SSIS.
- However, with a data mart, will we be able to perform CRUD operations on all our data just like an OLTP DB?
Thanks.

Hi Darrenod,
Thank you for your question. 
I am trying to involve someone more familiar with this topic to take a further look at this issue. There may be some delay while the issue is transferred. Your patience is greatly appreciated.
Thank you for your understanding and support.
Regards,
Charlie Liao
TechNet Community Support

Similar Messages

  • Do we require an OLTP DB and Data Mart?

    Our data sources are as follows:
    - An mdb file (downloaded every hour)
    - Multiple xls files (downloaded every week)
    Our aim is to develop a BI solution using BISM, Data Mart, OLAP cubes etc.
    From my understanding, we do not necessarily require an OLTP DB and we can import our data directly into our data mart using SSIS.
    - However, with a data mart, will we be able to display all our data and perform CRUD operations on all of it at our presentation layer, just like an OLTP DB? For example, list historical data in table format, which can be updated if needed?
    Thanks.

    Hi DarrenOD,
    It is correct that you do not require an OLTP DB, only the extracts you need. The extracts are usually significantly smaller than the OLTP DB, since you will never do analysis on every field in the operational system, but only on a small portion of the source system.
    The traditional data warehouse (DWH) architecture is a staging DB, a DWH DB (plus data marts if needed) and an analytic layer (OLAP / Tabular). There are very specific and good reasons why. The DWH DB contains all history. Keep in mind that the DWH follows a dimensional model, whereas the OLTP follows a normalized (3NF) model with lots of indexes, foreign keys and table relationships.
    Data marts are created for specific reporting needs which cannot be derived directly from the DWH facts. The marts are created from the DWH tables.
    Hope this helps.

  • How to use tree tables with CRUD operations for beginners ADF 11g

    This is a Friday night call for help.
    There are only a few sample resources on the web for tree tables, and only one with CRUD operations.
    I used this one http://jobinesh.blogspot.com/2010/05/crud-operations-on-tree-table.html because it is the only one that addresses CRUD.
    And it is shaky. Deletion works fine, but insertion does not work very well. It works using the custom code provided below.
    Depending on the user's selection in the tree, the code inserts into either the master node or the child node.
    Are there any other options, because this is not working well?
    Also, where does Oracle describe how to use the Row, RowSet and RowIterator APIs? This is really hard to understand, almost as if we were not supposed to use it.
    If not, how can I insert into a tree with two node levels, inserting into the parent or the child depending on the user's selection?
    Lately I have been posting questions on this forum with no response. This hurts. I understand developers cannot spend all their time on this, but people from Oracle, please help. We pay licenses...
    public void createChildren(RowIterator ri, Key selectedNodeKey) {
        final String deptViewDefName = "model.DepartmentsView";
        final String empViewDefName = "model.EmployeesView";
        if (ri != null && selectedNodeKey != null) {
            Row last = ri.last();
            Key lastRowKey = last.getKey();
            // find the row the user selected in the tree
            Row[] found = ri.findByKey(selectedNodeKey, 1);
            if (found != null && found.length == 1) {
                Row foundRow = found[0];
                String nodeDefname = foundRow.getStructureDef().getDefFullName();
                if (nodeDefname.equals(deptViewDefName)) {
                    // a department (parent) node is selected: create a new employee row under it
                    RowSet children = (RowSet) foundRow.getAttribute("EmployeesView");
                    Row childrow = children.createRow();
                    children.insertRow(childrow);
                } else {
                    // an employee (child) node is selected: create another row and carry over its DepartmentId
                    RowSet children = (RowSet) foundRow.getAttribute("EmployeesView");
                    Row childrow = children.createRow();
                    childrow.setAttribute("DepartmentId", foundRow.getAttribute("DepartmentId"));
                    children.insertRow(childrow);
                }
            } else {
                System.out.println("Node not found for " + selectedNodeKey);
            }
        }
    }

    I am looking for a sample that describes how to design a JSF page with a tree table.
    So you have Departments and Employees. In the tree, Departments come first, and if you click a department node you see the employees assigned to that department.
    I need to be able to insert a new department or a new employee from the tree table by clicking an insert button in the panel collection toolbar, depending on the user's selection in the tree.
    I got part of it working, but not well enough.
    My problem is getting the insertion working.
    I have a createChildren method in my AM implementation that takes as input a RowIterator and the selected node key.
    The goal is to create new records depending on the user's selection; the input parameters are populated by the binding like this:
    #{backing_treeSampleBean.selectedNodeRowIterator} #{backing_TreeSampleBean.selectedNodeRowkey} via a method binding with parameters.
    Is this the right approach?
    First, to be able to insert a parent record, I select nothing in the tree, so ri and selectedNodeKey come in as null,
    and we run this code:
    ViewObjectImpl vo = getSchHolidaySchedExceptionsView1();
    //ViewObjectImpl vo = getDepartmentsView1();
    Row foundRow = vo.first();
    Row childrow = vo.createRow();
    vo.insertRow(childrow);
    A new blank entry appears in the parent node and we enter a value.
    Then the problem starts when we want to add a child to this parent.
    We select the newly created parent and press the insert button, and this code gets executed:
    if (nodeDefname.equals(deptViewDefName)) {
        // list the children of the parent and create a new row
        RowSet childRows = (RowSet) foundRow.getAttribute("SchHolidayExceptionDatesView");
        Row childrow = childRows.createRow();
        childRows.insertRow(childrow);
    }
    But the new entry does not appear; it is almost as if it were created under a different parent, because there is a mandatory field that is not filled in yet and the interface complains about a missing value. It is created somewhere, just not in the right place... That is my guess.
    Do you see something wrong with the code?
    The full code of my createChildren method is below.
    I am using JDeveloper 11.1.1.3.0; are there any tree table issues to know about in this version?
    Thanks for your help
    public void createChildren(RowIterator ri, Key selectedNodeKey) {
        final String deptViewDefName = "com.bcferries.app.pdfroutesched.model.SchHolidaySchedExceptionsView";
        final String empViewDefName = "com.bcferries.app.pdfroutesched.model.SchHolidayExceptionDatesView";
        if (ri != null && selectedNodeKey != null) {
            // last row
            Row last = ri.last();
            Key lastRowKey = last.getKey();
            // find the row the user selected in the tree
            Row[] found = ri.findByKey(selectedNodeKey, 1);
            if (found != null && found.length == 1) {
                // foundRow is the selected row; it can be the parent node or the child node
                Row foundRow = found[0];
                String nodeDefname = foundRow.getStructureDef().getDefFullName();
                if (nodeDefname.equals(deptViewDefName)) {
                    // parent row selected: list the children of the parent and create a new row
                    // (works, but we are still trying to resolve the creation of a parent)
                    RowSet childRows = (RowSet) foundRow.getAttribute("SchHolidayExceptionDatesView");
                    Row childrow = childRows.createRow();
                    //childrow.setAttribute("HolidayDate", new java.util.Date().getDate());
                    System.out.println("insert child row from master");
                    childRows.insertRow(childrow);
                } else {
                    // child row selected: create another row in the same row iterator
                    Row childrow = ri.createRow();
                    ri.insertRow(childrow);
                    System.out.println("insert child row from child");
                }
            } else {
                System.out.println("Node not found for " + selectedNodeKey);
            }
        } else {
            // nothing selected: create a new parent row directly on the view object
            System.out.println("params null, creating for first row: " + ri + " * " + selectedNodeKey);
            ViewObjectImpl vo = getSchHolidaySchedExceptionsView1();
            Row childrow = vo.createRow();
            vo.insertRow(childrow);
        }
    }

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart. What are the prerequisites for configuring it, and what are the steps to do it?
    Also, I need some input on the BAM database. What is the frequency of the data upload? Is data updated or inserted in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added steps below that provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of process load and execution times. This gives us an idea of how the business has developed since the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In this following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to the standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added because the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access to the BAM tables through the J2EE connection pool instead of through a dedicated JDBC connection. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created, the next step is to enable the BAM update, select the service endpoint to be used as BAM storage, and configure the update frequency and other settings. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the remote JDBC, as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured, the database and the data structure have to be created. This is performed on the “Manage Database” page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link on the “Process Data Mart” page, and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section, along with the parameters that are specific to the Data Mart, we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for typical BAM usage and, even if either metric might not be used in the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the Generation of O3 Cubes must be off if this service is not used, otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
    -     For Unix like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements will be a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse business variables. This is achieved by mapping similar Business Variables of different projects to a single real Variable (in the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG and a wrapper.java.additional.n (where n is the next integer after the ones already used) has to be set to -ea to enable asserts, since without this option no DEBUG message is going to be generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
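    To make caveat (a) concrete, here is a minimal, hypothetical excerpt of WarehouseService.conf with the log level raised to DEBUG. The property names are the ones listed in section 9; the additional-argument index and the file-size values are examples only:
    # raise both updater service log levels from the default WARNING to DEBUG
    wrapper.console.loglevel=DEBUG
    wrapper.logfile.loglevel=DEBUG
    # enable asserts; use the next unused index among the existing wrapper.java.additional.* entries
    wrapper.java.additional.4=-ea
    # optionally keep more/larger log files while DEBUG is active
    wrapper.logfile=dwupdater-service.log
    wrapper.logfile.maxsize=10m
    wrapper.logfile.maxfiles=14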
    Hope this helps,
    Dan

  • CRUD Operations using Servlets, SessionBeans and JPA

    Hi
    Please tell me what the right approach is for doing CRUD operations.
    I have a JSP page with which the user interacts.
    Please tell me the right approach here:
    Option 1 :
    JSP --->Servlet--->SessionBean--->JPA.
    The JSP page submits data to the servlet, the servlet looks up the SessionBean, and the SessionBean holds the container-managed EntityManager for persisting, deleting, and finding.
    Option 2 :
    JSP --->Servlet-->JPA.
    The JSP page submits to the servlet, and the servlet itself holds the container-managed EntityManager for persisting, deleting, and finding.

    It depends on the reusability of the query, really. If you have a single database query that is used in multiple places, then by all means put it in a session bean. But if you only do persist/delete operations, then just use the EntityManager directly. The session bean would just be an extra layer of complexity that you don't need.
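    As a minimal sketch of Option 1 (hypothetical entity and bean names, assuming a container-managed persistence unit called "appPU"), the session bean simply wraps the EntityManager:

    // Customer.java - hypothetical entity
    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Customer {
        @Id
        private Long id;
        private String name;
        // getters and setters omitted for brevity
    }

    // CustomerService.java - stateless session bean owning the container EntityManager
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CustomerService {

        @PersistenceContext(unitName = "appPU")
        private EntityManager em;

        public void create(Customer c) { em.persist(c); }

        public Customer find(Long id) { return em.find(Customer.class, id); }

        public Customer update(Customer c) { return em.merge(c); }

        public void delete(Long id) { em.remove(em.find(Customer.class, id)); }
    }

    The servlet would then obtain the bean via @EJB injection and call these methods from doGet/doPost. For Option 2 the same EntityManager calls sit directly in the servlet, which is simpler but, as noted above, harder to reuse once the same queries are needed in several places.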

  • Consuming LightSwitch OData service in C# WinForms to perform CRUD operations

    HI
    I have created a LightSwitch HTML application which has SQL as the data source.
    I want to consume the OData services that will be created from LightSwitch in C# WinForms to perform CRUD operations.
    Can anyone help me with how to handle CRUD operations?
    Thanks and regards
    Venkat

    I found:
    https://social.msdn.microsoft.com/Forums/en-US/71338428-faa6-4e2f-87fb-5ef86f02c69d/how-to-consume-odata-service-made-in-lightswitch-in-a-windows-forms-application?forum=LightSwitchDev11Beta
    https://msdn.microsoft.com/en-us/library/hh973174.aspx
    and a bunch more when I googled "windows forms consuming lightswitch odata"
    These are REST services
    http://blogs.msdn.com/b/bethmassi/archive/2012/03/09/creating-and-consuming-lightswitch-odata-services.aspx
    If you google "windows forms consuming rest services" there are a stack more hits.
    http://www.codeproject.com/Questions/648106/Consuming-REST-WCF-Service-in-Windows-Forms-Applic
    http://www.c-sharpcorner.com/UploadFile/rahul4_saxena/create-and-consume-wcf-restful-service/
    I would recommend you create a "proper" WCF data service rather than using a LightSwitch one.
    Hope that helps.
    Recent Technet articles: Property List Editing;
    Dynamic XAML

  • Swing CRUD Operations

    Hi All,
    I am working on CRUD operations on Swing text fields using JavaFX. When the user clicks the Edit button, the fields should be enabled for updating, and after the edit is finished the text fields should be updated with the new data. I also have a Reset button to reset the information in the text fields. Could someone help me by providing some examples?
    Regards,
    Sherlyn

    Maybe the [JavaFX CRUD|http://www.learntechnology.net/content/javafx/javafx.jsp] article might interest you.
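    For what it is worth, here is a minimal plain-Swing sketch (not JavaFX; the field and button names are made up) of the edit/save/reset behaviour described in the question:

    import javax.swing.*;
    import java.awt.FlowLayout;

    public class EditResetDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JTextField nameField = new JTextField("Initial value", 15);
                nameField.setEditable(false);                   // read-only until Edit is pressed
                final String[] saved = { nameField.getText() }; // last saved value, used by Reset

                JButton edit = new JButton("Edit");
                JButton save = new JButton("Save");
                JButton reset = new JButton("Reset");
                save.setEnabled(false);

                edit.addActionListener(e -> {
                    nameField.setEditable(true);
                    save.setEnabled(true);
                });
                save.addActionListener(e -> {
                    saved[0] = nameField.getText();             // keep the newly entered data
                    nameField.setEditable(false);               // lock the field again
                    save.setEnabled(false);
                });
                reset.addActionListener(e -> nameField.setText(saved[0]));

                JPanel panel = new JPanel(new FlowLayout());
                panel.add(nameField);
                panel.add(edit);
                panel.add(save);
                panel.add(reset);

                JFrame frame = new JFrame("Edit / Save / Reset sketch");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(panel);
                frame.pack();
                frame.setVisible(true);
            });
        }
    }

    The same pattern extends to several fields: remember the saved values on Save, restore them on Reset, and toggle setEditable on Edit.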

  • Data Mart Status of the request is not ticked

    Hello Everybody,
    This is my first time dealing with BW.
    After I loaded the data into the ODS, the request in the Manage data view was not ticked like the others, although the job completed successfully.
    Can anyone help?
    Many Thanks
    F-B-I

    Hi,
    Have you deleted the data mart status in the ODS?
    You need to follow these steps:
    1. If you changed the status to red, there is no need to delete the data mart status in the ODS; you can load the data from the ODS to the InfoCube.
    2. If the status is green, you need to delete the data mart symbol in the ODS.
    regards
    sivaraju

  • Unable to update the Data from Cube to Data Mart

    Hi,
    I have a problem with the data loading to a cube (data mart) in BW. When I checked in RSA3 it shows 0 records. The data flow is as follows: R/3 -> ODS (BW) -> Cube (BW) -> Cube (BW data mart for the APO cube) -> APO system. In BW, whenever data is loaded into the final data target, the previous request is deleted and a full load is done. But on checking this final cube (data mart to APO), records are available, whereas checking RSA3 for this final data target (data mart to APO) shows 0 records.
    Why? please help me.
    Regards,
    krishna

    Hi,
    I checked the data mart in RSA3. It is not a matter of full upload or delta upload.
    thanks
    Krishna

  • Data mart cube to cube copy records are not matching in target cube

    Hi experts,
    I need help on the questions below for a data mart cube-to-cube copy (8M*).
    It is a BW 3.5 system.
    We have two financial cubes.
    Cube A1 is sourced from the R/3 system (delta update) and Cube B1 is sourced from cube A1 (full update). These two cubes are connected through update rules with one-to-one mapping, without any routines. Basis did a copy of the back-end R/3 system from the Production to the Quality server; this happened approximately 2 months back.
    Cube A1, which extracts the delta load from R/3, is loading fine. But for the second cube (extraction from the previous cube A1) I am not getting the full volume of data; instead I am getting a meagre amount, yet the load shows a successful status in the monitor.
    We tried giving conditions in the InfoPackage (as was done in the previous year's loading), but it still fetches the same meagre volume of data.
    To check whether this happens only for that particular cube, we tried another cube which is sourced through the Myself system, and that also gets meagre data rather than the full data.
    For example: for an employee, if the data available is 1000 records, the system extracts randomly only some 200 records.
    Any quick reply will be most helpful. Thanks

    Hi Venkat,
    Did you do any selective deletions in cube A1?
    First reconcile the data of cube 1 and cube 2.
    Match the totals of cube 1 with cube 2.
    Thanks,
    Vijay.

  • Get back the Data mart status in ODS and activate the delta update.

    I got a problem when deleting requests in an ODS.
    There is a cube (1st level) that gets loaded from an ODS (2nd level), which in turn gets loaded from 3 ODSs (3rd level). We wanted to delete recent requests from all the data targets and reload from the PSA. But while deleting the request in the ODS (2nd level), a window was displayed showing the following:
    - The request 132185 has already been retrieved by the data target BP4CLT612.
    - The delta update in BP4CLT612 must be deactivated before deleting the request.
    - Do you want to deactivate the delta update in data target BP4CLT612?
    I clicked on execute changes in the window. It removed the data mart status for all the requests, including those I had not deleted.
    The same happened in the 3 ODSs (3rd level).
    I understand that if we load further data from the source system, it will load all the records from the beginning.
    So, to avoid this, can anybody help me with how to reset the data mart status and activate the delta update?

    Hi Satish,
    You have to make the requests RED in the cube and back them out of the cube before you can go for request deletions from the base targets (from which the cube gets its data).
    Then you have to reset the data mart status for the requests in your L2 ODS before you can delete requests from the ODS.
    I think you tried to delete without resetting the data mart status, which has upset the delta sequence.
    To correct this:
    For the L2 ODS, do an init without data transfer from the 3 ODSs below, after removing the init request from the scheduler menu in the init InfoPackage.
    Do the same from the L2 ODS to the cube.
    Then reconstruct the deleted request in the ODS. It will not show the tick mark in the ODS. Do a delta load from the ODS to the cube.
    See the thread below:
    Urgentt !!! Help on reloading the data from the ODS to the Cube.
    cheers,
    Vishvesh

  • Data mart status has been reset by deleting an in-between request in the source ODS

    Hi SDN,
    We have a situation in which there is a daily delta load going from an ODS to a cube, and we accidentally reset the data mart status by deleting a request which sat in between multiple requests in the source ODS. When we deleted the requests in the ODS, we got a pop-up asking 'do you want to delete the data in the cube?' and we went ahead with it. After that, in the Manage view of the source ODS, the data mart status is no longer shown for any of the requests that have been loaded to the target cube. We then reconstructed the data for the deleted request from the PSA.
    When a new load comes into the ODS the next day, will the ODS send the correct delta update to the target cube?
    If the correct delta is not updated to the cube, is there any method we can follow to maintain data consistency without deleting the data in the target cube?
    Thank you,
    Prasaad

    Hi,
    You deleted data in the cube and loaded it, but the data mart status is not appearing in the ODS. If you have all the data in the ODS, delete the data in the cube, then just delete the data mart layer in the ODS, right-click and choose Update data in Data Target; a fresh request will then update the cube. This is an init only, and deltas will go from the next day onwards.
    Thanks
    Reddy

  • Error while generating the Data Mart for the InfoCube

    Hi Gurus,
    I need to extract the APO InfoCube data into the BW InfoCube. For that I am trying to generate the data mart for the APO InfoCube,
    so that I can use that data mart as a data source to extract the APO InfoCube data into the BW InfoCube.
    While generating the data mart for the APO InfoCube, it gives errors like those below:
    Creation of InfoSource 8ZEXTFCST for target system BW 1.2 failed
    The InfoCube cannot be used as a data mart for a BW 1.2 target system.
    Failed to create InfoSource &v1& for target system BW 1.2.
    Please suggest what I should do about this error.
    Thanks a lot in advance.

    Hi,
    Point No 1:
    What is a Planning Area:
    http://help.sap.com/saphelp_scm41/helpdata/en/70/1b7539d6d1c93be10000000a114084/content.htm
    Point No 2:
    Creation steps for a Planning Area:
    http://www.sap-img.com/apo/creation-of-planning-area-in-apo.htm
    Note: We will not create the Planning Area. This will be done by the APO team.
    Point No 3: After opening the T-Code /n/SAPAPO/MSDP_ADMIN in APO you will be able to see all the planning areas.
    Point No 4: Select your planning area, go to the Extras menu and click on Generate DS.
    Point No 5: The system automatically generates the DataSource in APO (the naming convention starts with 9). Replicate the DataSource in BI, map it to your cube and load the data.
    Regards
    Ram.

  • Error Caller 09 contains error message - Data Marts loading(cube to ODS)

    Dear all,
    Please help me with this problem; it is very urgent.
    I have one process chain that loads data from BIW to BIW only, through data marts. In that process chain, one process loads data from one cube (created by us) into one ODS (also created by us). Data is loaded through a full update, for the selected period specified in the 'Calendar Day' field in the data selection.
    Previously I was able to load data for 2 months, but some days ago the extraction process suddenly got stuck in the background for a long time and showed the following error:
              Error message from the source system
              Diagnosis
             An error occurred in the source system.
              System Response
             Caller 09 contains an error message.
             Further analysis:
             The error occurred in Extractor . 
             Refer to the error message.
             Procedure
             How you remove the error depends on the error message.
             Note
             If the source system is a Client Workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory, that it is not being processed at the moment, and restart the request.
    Then we killed that process on the server and, after another attempt, it showed some calmonth...timestamp error. After reducing the data selection period, it loaded successfully; after that I was able to load data for 20 days. Again after some days the process got stuck; I followed the same procedure, reduced the period to 15 days and continued. Now I can't even load data for 5 days successfully in one attempt; I have to kill the process in the background and repeat it, and then sometimes it gets loaded.
    Please suggest some solutions as soon as possible. I am waiting for your reply. Points will be assigned.
             Thanks,
              Pankaj N. Kude
    Edited by: Pankaj Kude on Jul 23, 2008 8:33 AM

    Hi friends!
    I didn't find any short dump for that in ST22.
    Actually, what happens is that the request continues to run in the background indefinitely. At that time
    the Status tab in the Process Monitor shows these messages:
                        Request still running
                        Diagnosis
                        No errors found. The current process has probably not finished yet.
                         System Response
                         The ALE inbox of BI is identical to the ALE outbox of the source system
                           or
                         the maximum wait time for this request has not yet been exceeded
                           or
                        the background job has not yet finished in the source system.
                       Current status
                       in the source system
                        And Details Tab shows following Messages :
                        Overall Status : Missing Messages or warnings
                        Requests (Messages) : Everything OK
                                Data Request arranged
                                Confirmed with : OK
                         Extraction(Messages) ; missing messages
                                Data request received
                                Data selection scheduled
                                Missing message : Number of Sent Records
                                Missing message : selection completed
                        Transfer (IDOCS and TRFC) : Everything OK
                                Info Idoc1 : Application Document Posted
                                Info Idoc2 : Application Document Posted
                         Processing (data packet) : No data
    This process runs indefinitely, and then I have to kill it from the server; it then shows the Caller 09 error in the Status tab.
    Should I change the value of that memory parameter on the server or not? We are planning to try it today. Does it really relate to this problem? Will it be helpful? What are the risks?
    Please give your suggestions as early as possible; I am waiting for your reply.
      Thanks,
    Pankaj N. Kude

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward with other data marts, the tables will be updated on different schedules and we do not want to deny access to all the data marts. What is the best approach?
    (2) What is the best Methodology for table ownership of tables in different data marts that share tables across marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
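    For illustration, a minimal sketch of that shadow-table swap using plain JDBC against Oracle (the table names, connect string and credentials are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ShadowTableSwap {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/XE", "etl_user", "etl_password");
                 Statement st = con.createStatement()) {
                // SALES_SHADOW has already been loaded and verified by the ETL job at this point.
                // DDL commits implicitly in Oracle, so the two renames are not a single transaction;
                // the window in which SALES does not exist should be kept as short as possible.
                st.executeUpdate("ALTER TABLE sales RENAME TO sales_old");
                st.executeUpdate("ALTER TABLE sales_shadow RENAME TO sales");
                // drop the old copy once the swap has been verified
                st.executeUpdate("DROP TABLE sales_old PURGE");
            }
        }
    }

    This keeps the table unavailable only for the instant between the two renames, which is why it avoids having to revoke SELECT from BI_USER at all.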
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart, with perhaps one additional account for the truly generic tables, to own each data mart's objects. Those accounts would then grant different database privileges to different roles, and you would then grant those roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without granting her every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
