What is a data mart?

Hi all,
Can anyone tell me what a data mart is and what it is used for?

Hi Apeetha,
This is a very good question. A data mart is mostly a BW/BI concept (it is also part of DP): in essence, it links one InfoCube to another, so that a data target in BW can itself serve as a source for further data targets. You can find further notes and details at the link below...
[http://help.sap.com/saphelp_smehp1/helpdata/en/80/1a6110e07211d2acb80000e829fbfe/frameset.htm]
Cheers! ;)

Similar Messages

  • Under the Data Marts folder, what are the DataSources starting with 8?

    hello BW gurus,
    Run RSA1 -> Source Systems, then pick the BW system itself as the source system. Under the folders BW DataSources -> Business Information Warehouse -> Data Marts, we can see some DataSources or InfoSources starting with 8. How do these DataSources get created, and what is their purpose?
    We know that a DataSource starting with 7 is an export DataSource generated from the PSA, but we have no idea what the DataSources starting with 8 are for. Anyone's explanation is greatly appreciated!

    Hi Kevin,
    The DataSources starting with 8 are the data mart DataSources, or export DataSources. This means that they supply data from the BW system back to the BW system itself.
    You will notice that the name is 8DataTarget, where DataTarget is, for example, an ODS object or an InfoCube. This means that data from this ODS object or InfoCube can be updated to other data targets in the BW system using update rules.
    You can generate the export DataSource from the context menu of a data target.
    Hope this helps...

  • What is the difference between SSAS & Data Mart? Is SSAS required to create BI Dashboard in SharePoint 2013?

    Hello All,
    Greetings for the day!
    What is the main feature-wise difference between SQL Server Analysis Services and a Data Mart? Can we create a BI dashboard in SharePoint 2013 without creating SSAS cubes and measures?
    Thanks,

    Hi jdoshi65, 
    This is a question with a long answer :)
    A Data Mart is a subset of a Data Warehouse, specifically designed for a department, for example, or to support certain reports. The Data Warehouse is where ALL the data available for analysis in your company lives; it is a superset of the Data Mart.
    Then we have SQL Server Analysis Services, which is a service that ships with SQL Server. It builds, ON TOP of the Data Warehouse or the Data Mart (or both), an OLAP solution in order to improve and enrich the reports, analytics and insights coming from the data in the Data Warehouse / Data Mart.
    That's the big picture, but: 
    - Is it necessary to have a Data Warehouse or a Data Mart to use SQL Server Analysis Services? NO
    - Is it necessary to have SQL Server Analysis services to query and create reports from a Data Warehouse / Data Mart? NO
    So, answering your question: there is no point comparing SQL Server Analysis Services with a Data Mart, because they are different kinds of things.
    And yes, you can create a BI dashboard in SharePoint 2013 without an SSAS cube behind it. You can just use SQL Server Reporting Services or PerformancePoint Services to build dashboards in SharePoint 2013.
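    For example, a dashboard data set can come straight off the mart with plain SQL, no cube required. A minimal T-SQL sketch, where FactSales, DimDate and their columns are hypothetical names, not anything from your system:
        -- Hedged sketch: the tables and columns here are illustrative only.
        -- A report or dashboard can aggregate directly from the data mart.
        SELECT d.CalendarYear,
               SUM(f.SalesAmount) AS TotalSales
        FROM dbo.FactSales AS f
        JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
        GROUP BY d.CalendarYear
        ORDER BY d.CalendarYear;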
    Regards.

  • Error while generating the data mart for an InfoCube

    Hi Gurus,
    I need to extract APO InfoCube data into a BW InfoCube. For that I am trying to generate the data mart (export DataSource) for the APO InfoCube, so that I can use it as a DataSource for extracting the APO InfoCube data into the BW InfoCube.
    While generating the data mart for the APO InfoCube, I get errors like the ones below:
    Creation of InfoSource 8ZEXTFCST for target system BW 1.2 failed
    The InfoCube cannot be used as a data mart for a BW 1.2 target system.
    Failed to create InfoSource &v1& for target system BW 1.2.
    Please suggest what I can do about this error.
    Thanks a lot in advance.

    Hi,
    Point 1: What is a Planning Area?
    http://help.sap.com/saphelp_scm41/helpdata/en/70/1b7539d6d1c93be10000000a114084/content.htm
    Point 2: Creation steps for a Planning Area:
    http://www.sap-img.com/apo/creation-of-planning-area-in-apo.htm
    Note: We will not create the Planning Area; this will be done by the APO team.
    Point 3: After opening transaction /n/SAPAPO/MSDP_ADMIN in APO, you will be able to see all the planning areas.
    Point 4: Select your planning area, go to the Extras menu and click on Generate DS.
    Point 5: The system automatically generates the DataSource in APO (the naming convention starts with 9). Replicate the DataSource in BI, map it to your cube and load the data.
    Regards
    Ram.

  • Error "Caller 09 contains an error message" - data mart loading (cube to ODS)

    Dear all,
    Please help me with this problem; it is very urgent.
    I have one process chain that loads data from BW to BW only, through data marts. In that process chain, one process loads data from one cube (created by us) to one ODS (also created by us). Data is loaded through full update for the period specified in the 'Calendar Day' field of the data selection.
    Previously I was able to load data for 2 months, but some days ago the extraction process suddenly got stuck in the background for a long time and showed the following error:
              Error message from the source system
              Diagnosis
             An error occurred in the source system.
              System Response
             Caller 09 contains an error message.
    Further analysis:
    The error occurred in Extractor.
    Refer to the error message.
    Procedure
    How you remove the error depends on the error message.
    Note
    If the source system is a client workstation, then it is possible that the file that you wanted to load was being edited at the time of the data request. Make sure that the file is in the specified directory and that it is not being processed at the moment, and restart the request.
    Then we killed that process on the server and, after another attempt, it showed some CALMONTH/timestamp error. After reducing the data selection period, the load went through successfully, and after that I was able to load data for 20 days. After some days the process got stuck again; I followed the same procedure, reduced the period to 15 days and continued. Now I can't even load data for 5 days successfully in one attempt; I have to kill the process in the background and repeat it, and then sometimes it gets loaded.
    Please suggest some solutions as soon as possible. I am waiting for your reply. Points will be assigned.
             Thanks,
              Pankaj N. Kude
    Edited by: Pankaj Kude on Jul 23, 2008 8:33 AM

    Hi Friends !
    I didn't find any short dump for that in ST22.
    Actually, what happens is that the request continues to run in the background indefinitely. At that time the Status tab in the process monitor shows these messages:
        Request still running
        Diagnosis
        No errors found. The current process has probably not finished yet.
        System Response
        The ALE inbox of BI is identical to the ALE outbox of the source system
        or
        the maximum wait time for this request has not yet been exceeded
        or
        the background job has not yet finished in the source system.
        Current status
        in the source system
    And the Details tab shows the following messages:
        Overall status: Missing messages or warnings
        Requests (messages): Everything OK
            Data request arranged
            Confirmed with: OK
        Extraction (messages): Missing messages
            Data request received
            Data selection scheduled
            Missing message: Number of sent records
            Missing message: Selection completed
        Transfer (IDocs and TRFC): Everything OK
            Info IDoc 1: Application document posted
            Info IDoc 2: Application document posted
        Processing (data packet): No data
    This process runs indefinitely, then I have to kill it from the server, and then it shows the Caller 09 error in the Status tab.
    Should I change the value of that memory parameter on the server or not? We are planning to try it today. Does it really relate to this problem? Will it be helpful? What are the risks?
    Please give your suggestions as early as possible; I am waiting for your reply.
      Thanks,
    Pankaj N. Kude

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to the users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward the tables in the other data marts will be updated on different schedules, and we do not want to deny access to all the data marts at once. What is the best approach?
    (2) What is the best methodology for ownership of tables that are shared across data marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so that you never have to revoke access. One of the easier tricks to accomplish this is to load data into a shadow table and then swap the names of the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
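    A minimal sketch of that shadow-table swap in Oracle SQL; the table names here are hypothetical illustrations, not from the original post:
        -- Hedged sketch: sales_fact and sales_fact_shadow are hypothetical names.
        -- 1. Create an empty shadow copy of the real table.
        CREATE TABLE sales_fact_shadow AS
          SELECT * FROM sales_fact WHERE 1 = 0;
        -- 2. Point the ETL load at sales_fact_shadow and fill it here.
        -- 3. Swap the names; readers see only a momentary cutover.
        RENAME sales_fact TO sales_fact_old;
        RENAME sales_fact_shadow TO sales_fact;
        RENAME sales_fact_old TO sales_fact_shadow;
    Keep in mind that synonyms or views referencing the table by name need attention after a rename, which is one more reason the single-transaction move is nicer when it is feasible.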
    If you do have to revoke table access, you would generally want to revoke SELECT access on the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means the table is being refreshed, which is why the zero-downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad hoc privileges users need. I would then create one account per data mart (with perhaps one additional account for the truly generic tables) to own each data mart's objects. Those owner accounts would grant different database privileges to different roles, and you would then grant those roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without being granted every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
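    To make this concrete, here is a minimal Oracle SQL sketch of the role-based setup; every schema, role and user name below is a hypothetical illustration:
        -- Hedged sketch: finance_mart, hr_mart and sue are illustrative names.
        -- One role per kind of access per mart; users get roles, not direct grants.
        CREATE ROLE finance_mart_read;
        GRANT SELECT ON finance_mart.sales_fact TO finance_mart_read;
        CREATE ROLE hr_mart_update;
        GRANT UPDATE ON hr_mart.employee_dim TO hr_mart_update;
        -- Sue in accounting: read one mart, update another, nothing else.
        GRANT finance_mart_read TO sue;
        GRANT hr_mart_update TO sue;
        -- During a refresh, revoking from one role blocks only that mart's readers.
        REVOKE SELECT ON finance_mart.sales_fact FROM finance_mart_read;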
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to find the InfoSource for an export DataSource in the Data Marts node

    Hi
    I need to load data from an ODS to an InfoCube. I created the export DataSource for the ODS. I can see the export DataSource, but in the Data Marts node of the InfoSource tree I cannot find the InfoSource for the export DataSource I created. I replicated the DataSources in the BW source system. I also tried to use Insert Lost Nodes from the context menu of the InfoSource node, but nothing worked. Please let me know what I need to do to see the InfoSource in the Data Marts node.
    Thanks
    Padma

    In the InfoSource tab in RSA1, use Settings --> Display Generated Objects;
    you will then be able to see the data mart InfoSources.

  • Deletion of data mart request taking a long time

    Hi all,
    One of the data mart processes (ODS to InfoCube) has failed.
    I'm not able to delete the request in the InfoCube.
    When the delete option is executed it takes a long time; I am monitoring the job in SM37 and it does not complete.
    The details seen in the job log are shown below:
    Job started                                                                  
    Step 001 started (program RSDELPART1, variant &0000000006553, user ID SSRIN90)
    Delete is running: Data target BUS_CONT, from 376,447 to 376,447             
    Please let me know your suggestions.
    Thanks,
    Sowrabh

    Hi,
    How many records are there in that request? Deleting a request usually takes a while; the deletion time varies with the data volume in the request.
    Give it some more time and see if it finishes. To find out whether the job is actually doing anything, go to SM50/SM51 and look at what's happening.
    Cheers,
    Kedar

  • What Master Data to hold within a Data Warehouse

    Hi,
    We are developing a data warehouse which will incorporate master data (MD) entities. We have a pre-existing Master Data Management solution which will be the source system for MD within the DW and the associated data marts. We have decided to keep a copy of the MD within the DW. However, we are of two schools of thought on what data should be held.
    One school says that only those attributes that shape a query result should be held in the DW MD; the second says that all MD that may be used by a reporting system should be held within the DW, so that it is the single source of data for the reporting application. Let me give an example. Let's say we have a Customer MD entity with the following attributes:
    Customer
    ID
    NAME
    COUNTRY *
    EMAIL ADDR
    CITY *
    PHONE NUM
    GENDER *
    LAST LOGGED IN
    FAX ADDR
    DATE OF JOINING *
    LOGO (binary)
    Now, we will never query based on phone number, fax or email address, but the attributes flagged with a * will shape queries when coupled with our facts, such as "find all male customers with 4 or more transactions over $1000" or "find all customers registered from 2007 based in New York who have purchased an X". However, when showing the results we will always show the full customer profile, including name, email address, etc. (I realise the queries are very specific and not report queries as such, but they suffice for the question at hand.)
    The first school says only the query-shaping MD elements should form the MD within the DW, and that the reporting application should obtain the remainder directly from the group MD system as required. The second school says that the DW (or the DM) should furnish all the MD required by the application. My question is: which of the two approaches is considered best practice, and as importantly, why?
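    To illustrate the distinction, the first of those queries might look like this in SQL; the table and column names are hypothetical:
        -- Hedged sketch: Customer and FactTransaction are illustrative names.
        -- GENDER shapes the query; NAME and EMAIL_ADDR are display-only fields.
        SELECT c.ID, c.NAME, c.EMAIL_ADDR
        FROM Customer c
        JOIN FactTransaction t ON t.CUSTOMER_ID = c.ID
        WHERE c.GENDER = 'M'
          AND t.AMOUNT > 1000
        GROUP BY c.ID, c.NAME, c.EMAIL_ADDR
        HAVING COUNT(*) >= 4;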
    Cheers,
    Daryl

    Do you have an ODS? An NDS? Or is the data warehouse the only resource for covering reports and dashboards?
    If you have only the data warehouse, then you have to cover all the report requirements within the data warehouse, no matter whether a field is used for filtering/slicing and dicing or as a display-only field. So I would add fields such as names, email address and phone number as attributes in the data warehouse. But I would only apply indexing and data warehouse performance-tuning best practices to the attributes that participate in slicing, dicing and filtering (such as country, year, ...).
    On the other hand, if you use an SSAS multidimensional cube on top of your data warehouse, then you can set some attributes to be visible only (attributes for display only), and some of them to be visible and hierarchy-enabled (attributes that participate in slicing, dicing and filtering).
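    For example, an index that serves the slicing attributes while still carrying the display-only fields might look like this in T-SQL; all names are hypothetical:
        -- Hedged sketch: DimCustomer and its columns are illustrative names.
        -- Index only the query-shaping attributes; carry display-only fields
        -- as included columns so report queries avoid extra lookups.
        CREATE NONCLUSTERED INDEX IX_DimCustomer_Country_Joined
            ON dbo.DimCustomer (Country, DateOfJoining)
            INCLUDE (CustomerName, EmailAddr, PhoneNum);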
    Regards,
    Reza
    SQL Server MVP
    Blog: http://rad.pasfu.com

  • GRC AC DATA MART CONFIGURATION

    Hi All,
    We are on GRC AC 5.3 SP 11. The customer wants to use the Data Mart functionality with Crystal Reports 2008 for custom reporting purposes. To my knowledge this functionality has been available in AC since SP 9, but I don't know the exact procedure for configuring it. I have gone through a couple of documents available on SDN on this, but none of them is step by step. Can anyone please point me to a detailed configuration guide available in SMP/SDN or on the web?
    Thanks in Advance.
    Best Regards,
    Guru

    Hi Gangadhar,
    I have already gone through the AC config guide, the notes you mentioned and some other available docs as well, but nowhere is it detailed step by step. Is there any detailed, step-by-step document on this? The config guide gives the steps to be performed on the GRC AC side, but there are no details of the configuration needed to view the custom reports in Crystal Reports 2008. I did find some other docs on SDN, such as 'SAP BUSINESSOBJECTS ACCESS CONTROL 5.3 SP09 DATA MART – SAMPLE REPORTS', which covers the front-end configuration, but I am confused about where to start and how to get the custom reports after doing the necessary configuration. Any idea on this?
    Thanks,
    Guru

  • Problem in Data Mart

    I tried to transfer data from one InfoCube (ZE1) to another InfoCube (ZE2).
    Please find below the procedure I followed:
    INFOCUBE ZE1 SUCCESSFULLY POPULATED WITH 20 RECORDS FROM A FLAT FILE.
    INFOCUBE ZE2 CREATED BY COPYING FROM ZE1.
    GENERATED THE EXPORT DATASOURCE IN ZE1 AND REPLICATED THIS NEW DATASOURCE 8ZE1 IN THE B3TCLNT800 - BW CLIENT 800 SOURCE SYSTEM. THEN ACTIVATED THE TRANSFER RULES. THE INFOSOURCE IS IN THE DATA MART FOLDER.
    THE UPDATE RULES FOR ZE2 WERE CREATED USING INFOCUBE ZE1 (data mart concept) UNDER THE DATASOURCE TAB.
    SCHEDULED A LOAD PACKAGE ON THE INFOSOURCE 8ZE1 TO LOAD TO THE NEW INFOCUBE ZE2.
    Now, while running the scheduled load, a short dump is created:
    No request IDoc generated in BW.
    Exception condition "INHERITED_ERROR" raised.
    Could you tell me what could be the reason behind it?
    Thanks in advance.

    Hi Venkat,
    Posting the OSS Note:568768 for your ref:
    <u><b>Symptom</b></u>
    A shortdump with an SQL error occurs, or a message indicates that an SQL error occurred. You need to figure out more information about the failing statement and the reasons for the failure.
    A shortdump with exception condition "INHERITED_ERROR" occurred: a RAISE statement in the program "SAPLRSDRC" raised the exception condition "INHERITED_ERROR".
    <u><b>Other terms</b></u>
    Shortdump SQL Error Oracle DB6 DB2 MSSQL SAPDB Informix Database dev_w0 developer trace syslog UNCAUGHT_EXCEPTION CX_SY_SQL_ERROR DBIF_REPO_SQL_ERROR INHERITED_ERROR SAPLRSDRC RSDRC RSDRS SAPLRSDRS DBIF_RSQL_SQL_ERROR
    <u><b>Reason and Prerequisites</b></u>
    A shortdump with a SQL Error occurs, e.g. during BW Aggregate build, compression, BW queries, Datamart extraction, or a SQL Statement failed without a shortdump.
    The actions mentioned will already indicate the cause of the error. In particular, the database administrator should frequently be able to immediately recognize solutions. Nevertheless, you should create an OSS problem message and attach the SQL error message, including additional information (such as the SQL statement that failed or the error message text), to the problem message.
    This note combines the two OSS notes 495256 and 568768.
    <u><b>Solution</b></u>
    1. If a shortdump occurred, get the work process and application server where the dump occurred; otherwise continue with the next step.
               Inside the shortdump, scroll to "System environment". Either you find the work process number here, or you continue with the next step.
    2. Get the work process number from the syslog.
                Go into the system log (transaction SM21) and search for the short dump entry with the same timestamp in the syslog of the application server where the short dump occurred.
               The column "Nr" contains the work process number. There may already be syslog entries before this entry containing more information about the error.
               Example: The shortdump contains
    UNCAUGHT_EXCEPTION
    CX_SY_SQL_ERROR
    04.11.2002 at 14:47:32
               The syslog contains:
    Time     Ty. Nr Cl. User    Tcod MNo Text
    14:47:32 BTC 14 000 NAGELK  BY2 Database error -289 at EXE
    14:47:32 BTC 14 000 NAGELK  BY0 > dsql_db6_exec_immediate( SQL
    14:47:32 BTC 14 000 NAGELK  BY0 > Driver][DB2/LINUX] SQL0289N
    14:47:32 BTC 14 000 NAGELK  BY0 > table space "PSAPTEMP". SQLS
    14:47:32 BTC 14 000 NAGELK  BY2 Database error -289 at EXE
    14:47:32 BTC 14 000 NAGELK  R68 Perform rollback
    14:47:32 BTC 14 000 NAGELK  AB0 Run-time error "UNCAUGHT_EXCEP
    14:47:32 BTC 14 000 NAGELK  AB2 > Include RSDRS_DB6_ROUTINES l
    14:47:33 BTC 14 000 NAGELK  AB1 > Short dump "021104 144732
                So from the Syslog the information can be got, that the reason of the error is
                Database error -289 at EXE dsql_db6_exec_immediate( SQLExecDirect ): [IBM][CLI Driver][DB2/LINUX] SQL0289N Unable to allocate new pages in table space "PSAPTEMP". SQLSTATE=57011
                The database error code is -289, the error text is "Unable to allocate new pages in tablespace "PSAPTEMP" ", and the work process number is "14".
    3. Displaying the developer trace
                Go to transaction SM51 and select the correct application server (where the shortdump occurred); you are now in transaction SM50 for this application server. Select the work process with the number you got from the shortdump or syslog (in our example, number 14), and in the menu bar choose Process - Trace - Display File.
               In the developer trace, search for the timestamp. In our example we get the entry:
    C Mon Nov  4 14:47:32 2002
    C  *** ERROR in ExecuteDirect[dbdb6.c, 5617]
    C  &+  0|     
    dsql_db6_exec_immediate( SQLExec...
    C  &+  0
    able space "PSAPTEMP".  SQLSTATE=57011
    C  &+  0
    C  &+  0|     
    INSERT INTO "/BIC/E100015" ...
    C  &+  0
    |    |     ...
    4. For most of the important SQL statements generated by BW, when an error occurs, the SQL statement is saved as a text file called, for example, SQL00000959.sql (SQL<error code>.sql). If the statement is run in a dialog process, then the file is in the current directory of your SAP GUI on the front-end PC (for example, C:\Documents and Settings\schmitt\SAPworkdir); with batch tasks, it is stored in the DIR_TEMP directory (see transaction AL11) on the application server.
    5. For SAP Internal Support:
                Using the sql error codes determined under (1) as well as the relevant database platform, you can obtain a more detailed description of the error and possible error causes at http://dbi.wdf.sap.corp:1080/DBI/cgi/dberr.html.
    Hope this helps.
    Bye
    Dinesh

  • Data mart data load InfoPackage gets short dumps

    This is related to the solution Vijay provided in the link What is the functionality of 0FYTLFP [Cumulated to Last Fiscal Year/Period
    I encountered the problem again with a data mart load: I created different init InfoPackages with different data selections and ran them separately, and the initialization data packets got messed up; now, whenever I try to create a new InfoPackage, I always get short dumps. RSA7 on the BW system itself doesn't show a faulty entry.
    I tried the program RSSM_OLTP_INIT_DELTA_UPDATE you provided; it has three parameters:
    LOGSYS (required)
    DATASOUR (required)
    ALWAYS (not required)
    I filled in LOGSYS with the source system name of our BW system that's in the InfoPackage, and filled in DATASOUR with the DataSource name 80PUR_C01. But nothing happens when I click the execute button!
    Then I tried the other option you suggested, checking the entries in the following three tables:
    ROOSPRMS Control Parameters Per DataSource
    ROOSPRMSC Control Parameter Per DataSource Channel
    ROOSPRMSF Control Parameters Per DataSource
    I find there is no entry for DataSource 80PUR_C01 in the first table, but there are two entries in each of the 2nd and 3rd tables. Should I go ahead and delete these two entries from those two tables?
    Thanks

    Kevin,
    sorry, I didn't quite follow your problem/question, but be careful when you modify the content of these tables!!!
    There is a high risk of inconsistencies... (Why don't you ask for SAP support via an OSS message for this situation?)
    Hope it helps!
    Bye,
    Roberto

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart: what are the prerequisites for configuring it, and what are the steps to do it?
    Also, I need some input on the BAM database. What is the frequency of data upload? Is data updated or inserted in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below to provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote, called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of the current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of the process load and execution times, and this gives us an idea of how the business has developed since the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In this following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to a standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access over BAM tables thru the J2EE Connection Pool instead of thru a dedicated JDBC. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure update frequency and others. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the Remote JDBC as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, the detail level, snapshot time and the time of update should be configured; in addition to enabling the updater and choosing the storage configuration. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured the database and the data structure have to be created. This is performed at the “Manage Database” page for which the link has appeared at the bottom of the page (as with BAM). Even when this page is identical to the one shown above (for the BAM configuration) it has been opened from the link in the “Process Data Mart” page and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section -along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level
    of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the generation of O3 Cubes must be off if this service is not used; otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    The ALBPM distribution is different depending on the operating system. In the case of the Updater Service:
    -     For Unix-like operating systems, the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of the current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day), and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements are a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommended to reuse business variables. This is achieved by mapping similar Business Variables of different projects onto a single real Variable (in the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG, and a wrapper.java.additional.n (where n is an integer consecutive to the ones already in use) has to be set to -ea to enable assertions, since without this option no DEBUG messages are generated.
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
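    As an illustration, the DEBUG overrides from caveats (a) and (b) might look like this in WarehouseService.conf; the property names come from the list in section 9, while the index and values are assumptions for the example:
        # Hedged sketch: only the property names are from the documentation above.
        wrapper.console.loglevel=DEBUG
        wrapper.logfile.loglevel=DEBUG
        # n must be the next unused additional-argument index; 3 is an example.
        wrapper.java.additional.3=-ea
        # Keep more rotated files while DEBUG is on (default holds ~2 days).
        wrapper.logfile.maxfiles=14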
    Hope this helps,
    Dan

  • Database/data mart size and filegrowth

    We have an OLTP database with an initial size of 1 GB and a log size of 500 MB, with a filegrowth of 16 MB for both files.
    The data and log files are stored on different disks for dedicated read/write transactions.
    Is there a best practice for the log size (i.e. half or a quarter of the DB size)?
    We are also creating a data mart.
    Should the data mart be on a different disk as well?
    Is there a best practice for the data and log file sizes of a data mart?

    Is there a best practice for the log size (i.e. half or a quarter of the DB size)?
    I can't speak for the data mart; I am just talking about the log file size for the OLTP database. There is no specific recommendation: it should be set according to the autogrowth happening on the log file.
    16 MB autogrowth seems low to me for both the data and the log file. I don't know what the exact value should be, but I would point you to an article which will help you in setting the correct value. It's long, but worth reading, and it has a couple of queries which will tell you the correct value.
    Please don't create multiple log files, as there would be no performance gain.
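    For instance, once you have worked out a sensible fixed growth increment, you can set it with ALTER DATABASE. A minimal T-SQL sketch, where the database and logical file names are hypothetical and the sizes are illustrative, not a recommendation:
        -- Hedged sketch: SalesMart and its logical file names are made up here.
        ALTER DATABASE SalesMart
            MODIFY FILE (NAME = SalesMart_data, FILEGROWTH = 512MB);
        ALTER DATABASE SalesMart
            MODIFY FILE (NAME = SalesMart_log, FILEGROWTH = 256MB);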

  • Oracle Fusion Intelligence for E1 and the old EPM Data Marts

    I just received the latest edition of the "JD Edwards EnterpriseOne: The Complete Reference" book. Chapter 5 deals with Data Warehousing and OBI. This chapter mentions that Oracle has plans to make the old JDE EPM data marts compatible with OBIEE. The new offering will be called Oracle Fusion Intelligence for EnterpriseOne.
    I was wondering if anyone has more information about what this means as far as joining E1 to OBIEE. Does anyone have experience with the EPM data marts from JDE? I've never used them and was just wondering if I could get some specifics on the entire E1/OBIEE concept.
    I've also heard that Oracle is creating some adapters for E1 into OBIEE, so I'm assuming that those are along these same lines.
    Thanks,

    I am now hearing about finance analytics, and I am even more confused.
