Do we require an OLTP DB and Data Mart?

Our data sources are as follows:
- An mdb file (downloaded every hour)
- Multiple xls files (downloaded every week)
Our aim is to develop a BI solution using BISM, Data Mart, OLAP cubes etc.
From my understanding, we do not necessarily require an OLTP DB and we can import our data directly into our data mart using SSIS.
- However, with a data mart, will we be able to display all our data and perform CRUD operations on all of it at the presentation layer, just as with an OLTP DB? For example, list historical data in table format, which can be updated if needed?
Thanks.

Hi DarrenOD,
It is correct that you do not require an OLTP DB, only the extracts you need. The extracts are usually significantly smaller than the OLTP DB, since you will never do analysis on every field in the operational system, but only on a small portion of the source system.
The traditional data warehouse (DWH) architecture is a staging DB, a DWH DB (plus data marts if needed) and an analytic layer (OLAP \ Tabular). There are very specific and good reasons why. The DWH DB contains all history. Keep in mind that the DWH follows a dimensional model, whereas the OLTP follows a normalized (3NF) model with lots of indexes, foreign keys and table relationships.
Data marts are created for specific reporting needs that cannot be served directly from the DWH facts. The marts are built from the DWH tables.
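As a rough illustration only (every table and column name below is invented for the example, not taken from your solution), a small dimensional mart that SSIS could load from the mdb/xls extracts might look like this:

    CREATE TABLE DimCustomer (
        CustomerKey   INT IDENTITY(1,1) PRIMARY KEY,   -- surrogate key referenced by the facts
        CustomerID    NVARCHAR(20)  NOT NULL,          -- business key coming from the mdb source
        CustomerName  NVARCHAR(100) NOT NULL,
        ValidFrom     DATE NOT NULL,                   -- history is kept as additional rows
        ValidTo       DATE NULL
    );

    CREATE TABLE FactSales (
        DateKey       INT NOT NULL,                    -- points at a date dimension
        CustomerKey   INT NOT NULL REFERENCES DimCustomer (CustomerKey),
        Quantity      INT NOT NULL,
        Amount        DECIMAL(18,2) NOT NULL
    );

Because the mart keeps the full history as rows in the fact and dimension tables, the presentation layer can list historical data in table format; it is simply not organised for row-by-row transactional updates the way a normalized OLTP schema is.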
Hope this helps.

Similar Messages

  • Oracle BPM Process Data mart

    I am required to create audit reports on BPM workflows.
    I am new to this and need some guidance on configuring the BPM Process Data Mart. What are the prerequisites for configuring it, and what are the steps to do it?
    Also, I need some input on the BAM database. What is the frequency of data upload? Is it a data update or an insert in BAM?

    Hi,
    You might want to check out the Administration and Configuration Guides on http://download.oracle.com/docs/cd/E13154_01/bpm/docs65/index.html.
    I suspect you might find the BAM and Data Mart portions of this documentation a bit terse, so I've added the steps below that provide more detail. I wrote this for ALBPM 6.0, but believe it will still work for Oracle BPM 10g. It was created from an earlier ALBPM 5.7 document Support wrote called "ALBPM 5_7 Configuring and Troubleshooting the BAM and DataMart Updater.pdf".
    You can define how often you want the contents in both databases updated (actually inserted) and how long you want to persist the contents of the BAM database during the configuration.
    Here's the contents of the document:
    1. Introduction
    The use of BAM (Business Activity Monitoring) and Data Mart (or Warehouse) information is becoming more and more widespread in today’s BPM project implementations for the obvious benefits they bring to the management and tuning of processes.
    BAM is basically composed of a collection of measurements of current process load and execution times. This gives us an idea of how the business is doing at this moment (in a pseudo real-time fashion).
    Data Mart, on the other hand, is a historical view of the process load and execution times. This gives us an idea of how the business has developed since the moment the projects were put in place.
    In this document we are not going to describe exhaustively all configuration aspects of the BAM and Data Mart Updater, but rather we will quickly move from one configuration step to another paying more attention to subjects that have presented some difficulties in real-life projects.
    2. Creating the Service Endpoints
    The databases for BAM and for Data Mart first have to be defined in the External Resources section of the BPM Process Administrator.
    In the following example the service endpoint ‘BAMJ2EEWL’ is being defined. This definition is going to be used later as BAM storage. At this point nothing is created.
    Add an External Resource with the name ‘BAMJ2EEWL’ and, as we use Oracle, select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMBAM’. This user, and its database, will be created later
    ·     the password for the user
    Scroll down to the bottom of the page and click <Save>.
    In addition to the standard JDBC connection that is going to be used by the Updater Service, a remote JDBC configuration needs to be added, as the Engine runs in a WebLogic J2EE container. This Data Source is needed to grant the Engine access to the BAM tables through the J2EE connection pool instead of through a dedicated JDBC connection. The following is an example of how to set this up.
    Add an External Resource with the name ‘BAMremote’ and select the Oracle driver, then click <Next>
    On the following screen, specify:
    ·     the Lookup Name that will be used subsequently in WebLogic - here I have given it the name ‘XAbamDS’
    Then click <Save>.
    In the next example the definition ‘DWHJ2EEWL’ is created to be used later as Data Mart storage. If you are not going to use a Data Mart storage you can skip this step.
    Add an External Resource with the name ‘DWHJ2EEWL’ and select the Oracle driver, then click <Next>:
    On the following screen, specify:
    ·     the hostname – here I have used ‘localhost’ as I am just setting this up to work on my laptop
    ·     the port for the Oracle service
    ·     the SID – here I have used Oracle Express so the SID is ‘XE’
    ·     the new user to create / use in Oracle for this database – here I have specified ‘BPMDWH’. This user, and its database, will be created later
    ·     the password for the user
    3. Configuring BAM Updater Service
    Once the service endpoint has been created, the next step is to enable the BAM update, select the service endpoint to be used as BAM storage and configure the update frequency and other settings. Here the “Updater Database Configuration” is the standard JDBC we configured earlier and the “Runtime Database Configuration” is the remote JDBC, as we are using the J2EE Engine.
    So, here’s the example of how to set up the BAM Updater service….
    Go into ‘Process Monitoring’ and select the ‘BAM’ tab and enter the relevant information (using the names created earlier – use the drop down list to select):
    Note that here, to allow me to quickly test BAM reporting, I have set the update frequency to 1 minute. This would not be the production setting.
    Once the data is input, click <Save>.
    We now have to create the schema and related tables. For this we will open the “Manage Database” page that has appeared at the bottom of the BAM screen (you may have to re-select that Tab) and select to create the database and the data structure. The user required to perform this operation is the DB system administrator:
    Text showing the successful creation of the database and data structures should appear.
    Once we are done with the schema creation, we can move to the Process Data Mart configuration screen to set up the Common Updater Service parameters. Notice that the service has not been started yet… We will get to that point later.
    4. Configuring Process Data Mart Updater Service
    In the case that Data Mart information is not going to be used, the “Enable Automatic Update” checkbox must be left off and the “Runtime Database Configuration” empty for this service. Additionally, the rest of this section can be skipped.
    In the case it is going to be used, then in addition to enabling the updater and choosing the storage configuration, the detail level, snapshot time and time of update should be configured. An example is shown below:
    Still in ‘Process Monitoring’, select the ‘Process Data Mart’ tab and enter the name created earlier (use the drop down list to select).
    Also, un-tick the Generate O3 Cubes (see later notes):
    Then click <Save>.
    Once those properties have been configured, the database and the data structure have to be created. This is performed at the “Manage Database” page, for which the link has appeared at the bottom of the page (as with BAM). Even though this page is identical to the one shown above (for the BAM configuration), it has been opened from the link in the “Process Data Mart” page, and this makes it different.
    Text showing the successful creation of the database and data structures should appear.
    5. Configuring Common Updater Service Parameters
    In the “Process Data Mart” tab of the Process Monitoring section - along with the parameters that are specific to the Data Mart - we will find some parameters that are common to all services. These parameters are:
    • Log directory: location of the log file
    • Messages logged from Data Store Updater: severity level of the Updater logs
    • Language
    • Generate Performance Metrics: enables performance metrics generation
    • Generate Workload Metrics: enables workload metrics generation
    • Generate O3 Cubes: enables O3 Cubes generation
    In this document we are not going to describe in detail each parameter. But we will mention a few caveats:
    a. the Log directory must be specified in order for the logs to be generated
    b. the Messages logged from Data Store Updater, which indicates the level of the logs, should be DEBUG for troubleshooting and WARNING otherwise
    c. Performance and Workload Metrics need to be on for the typical BAM usage and, even when either metric might not be used on the initial project releases, it is recommended to leave them on in case they turn out to be useful in the future
    d. the Generation of O3 Cubes must be off if this service is not used, otherwise the Data Mart Updater service might not work properly.
    The only change required on this screen was to de-select ‘Generate O3 Cubes’, as shown in the last section.
    6. Set up the WebLogic configuration
    We need to set up the JDBC data source specified above, so go to Services / JDBC / Data Sources.
    Click on <Lock and Edit> and then <New> to add a New data source.
    Specify:
    ·     the Name – use the name you set up in the Process Administrator
    ·     the JNDI Name – again use the name you set up in the Process Administrator
    ·     the Database Type – Oracle
    ·     use the default Oracle Database Driver
    Then click <Next>
    On the next screen, click <Next>
    On the next screen specify:
    ·     the Database Name – this is the SID – for me that is XE
    ·     the Host Name – as I am running on my laptop, I’ve just specified ‘localhost’
    ·     the Database User Name and Password – this is the BAM database user specified in the Process Administrator
    Then click <Next>
    On the next screen, you can test the configuration to make sure you have got it right, then click <Next>
    On the next screen, select your server as the target server and click <Finish>:
    Finally, click <Activate Changes>.
    7. The Last Step: Starting Up and Shutting Down the Updater Service
    ALBPM distribution is different depending on the Operating System. In the case of the Updater Service:
    -     For Unix-like Operating Systems the service is started or stopped with the albpmwarehouse.sh shell script. The command in this case is going to look like this:
    $ALBPM_HOME/bin$ ./albpmwarehouse.sh start
    -     For Windows Operating Systems the service is installed or uninstalled as a Windows Service with the albpmwarehouse.bat batch file. The command will look like:
    %ALBPM_HOME%\bin> albpmwarehouse.bat install
    After installing the service, it has to be started or stopped from the Microsoft Management Console. Note also that Windows will automatically start the installed service when the computer starts. In either case the location of the script is ALBPM_HOME/bin, where ALBPM_HOME is the ALBPM installation directory. An example would be:
    C:\bea\albpm6.0\j2eewl\bin\albpmwarehouse.bat
    8. Finally: Running BAM dashboards to show it is Working
    Now we have finally got the BAM service running, we can run dashboards from within Workspace and see the results:
    9. General BAM and Data Mart Caveats
    a. The basic difference between these two collections of measurements is that BAM keeps track of current process load and execution times while Data Mart contains a historical view of those same measurements. This is why BAM information is collected frequently (every minute) and cleared out every several hours (or every day) and why Data Mart is updated infrequently (once a day) and grows indefinitely. Moreover, BAM measurements can be thought of as a minute-by-minute sequence of Engine Events snapshots, while Data Mart measurements will be a daily sequence of Engine Events snapshots.
    b. BAM and Data Mart table schemas are very similar but they are not the same. Thus, it is important not to use a schema created with the Manage Database for BAM as Data Mart storage or vice-versa. If these schemas are exchanged by mistake, the updater service will run anyway but no data will be added to the tables and there will be errors in the log indicating that the schema is incorrect or that some tables could not be found.
    c. BAM and Data Mart Information and Services are independent from one another. Any of them can be configured and running without the other one. The information is extracted directly from the Engine Database (PPROCINSTEVENT table is the main source of info) for both of them.
    d. So far there has not been a mention of engines, projects or processes in any of the BAM or Data Mart configurations. This is because the metrics of all projects published under the current Process Administrator (or, more precisely, FDI Directory) are going to be collected.
    e. It is also important to note that only activities for which events are generated are going to be measured (and therefore, shown in the metrics). The project default is to generate events only for Interactive activities. This can be changed for any particular activity and for the whole process (where the activity setting, when specified, overrides the process setting). Unfortunately, there is no project setting for events generation so far; thus, remember to edit the level of event generation for every new process that is added to the project.
    f. BAM and Data Mart metrics are usually enriched with Business Variables. These variables are a special type of External Variables. An External Variable is a process variable with the scope of an Instance and whose value is stored on a separate column in the Engine Instances table. This allows the creation of views and filters based on this variable. A Business Variable, then, shares all the properties of an External Variable plus the fact that its value is collected in all BAM and Data Mart measurements (in some cases the value is shown as it is for a particular instance and in others the value is aggregated).
    The caveat here is that there is a maximum number of 256 Business Variables per FDI. Therefore, when publishing several projects into a single FDI directory it is recommendable to reuse business variables. This is achieved by mapping similar Business Variables of different projects with a unique real Variable (on the variable mapping performed at publish time).
    g. Configuring the Updater Service Log
    In section 5. Configuring Common Updater Service Parameters we have seen that there are two common Updater properties related to logging. These properties are “Log directory” and “Messages logged from Data Store Updater”, and they specify the location and level of these two files:
    - dwupdater.log: which is the log for the Data Mart updater service
    - bam-dwupdater.log: which is the log for the BAM updater service
    In addition to these two properties, there is a configuration file called ‘WarehouseService.conf’ that allows us to modify these other properties:
    - wrapper.console.loglevel: level for the updater service log
    - wrapper.logfile.loglevel: level for the updater service log
    - wrapper.java.additional.n: additional argument to the service JVM
    - wrapper.logfile.maxsize: maximum size of the updater service log files
    - wrapper.logfile.maxfiles: maximum number of updater service log files
    - wrapper.logfile: updater service log file name (the default value is dwupdater-service.log)
    9.1. Updater Service Log Configuration Caveats
    a. The first three parameters listed above have to be modified when increasing the log level to DEBUG (since the default is WARNING). The loglevel parameters have to be set to DEBUG and a java.additional.n (where n is the next unused integer) has to be set to -ea to enable asserts, since without this option no DEBUG message is going to be generated (see the sketch after this list).
    b. Of the other arguments, maxfiles might need to be increased to hold a few more days of data when the log level is set to DEBUG (with the default value up to two days are stored).
    c. The updater service has to be stopped, uninstalled, installed and then started for any of these changes to take effect.
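    For example, the relevant lines in WarehouseService.conf for DEBUG logging could look like the sketch below (illustrative values only; the additional-argument index must be the next unused number in your own file):
    wrapper.console.loglevel=DEBUG
    wrapper.logfile.loglevel=DEBUG
    wrapper.java.additional.4=-ea
    wrapper.logfile.maxfiles=10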
    Hope this helps,
    Dan

  • What is the difference between copy requirement and data transfer routines

    Hi Experts,
    I am writing a copy control routine for a maintenance quote created from a sales quote. Where do I have to write it in VOFM? Is it in copying requirements or data transfer?
    Actually, according to the business, all the line items from the sales quote are copied into the maintenance quote, additional line items are added to the maintenance quote, and the line items copied from the source document are linked to the two newly added line items.
    Please advise.
    And is it possible to add the 2 new line items of the new document into VBAP at runtime?
    Regards,
    Chitrasen

    First of all, you have to have a link between the sales documents. You can check it in Tcode VTAA; if not, talk to your functional people. Try to set this up the same way as quote-to-order, but make it quote-to-quote instead.
    You need to put a breakpoint in the copying requirement as well as in the data transfer routines and check where your requirement fits.
    The code you want to put in will be in the requirement.
    I am not sure if you can add 2 line items to the quote at runtime; this has to go through different checks and I don't think it is possible.
    Good luck

  • Require name and date before printing form

    I have a 13-page PDF that a customer or our staff prints from our website. Often the customer or staff does not name and date each page of the form. Not having a name and date on each page causes problems when the customer returns the form to the office and the form pack gets divided for scanning into different areas of the customer file. 1) Is there an Adobe product that changes a PDF to require the downloader of the form to place a first and last name in the name field before printing? 2) Also, I would need a date stamp placed on the form when it prints. Is there a product that could help me program this PDF to do these 2 things?  This post is the first time I can recall posting in this community. Any other suggestions are welcome. Thank you.

    >> 1) Is there an Adobe product that changes a PDF to require the downloader of the form to place a first and last name in the name field before printing.
    Many fields have a "Required" property. This is used when a form is submitted to a scripted web page for processing to throw an error message if any required fields are not completed.
    One can use this property within an action for a form to check whether the required fields are completed.
    >> 2) Also, I would need a date stamp placed on the form when it prints.
    I would add a form field that is filled in with the print date and time by some JavaScript in the Will Print action.

  • To read COMTRADE file in Labview there is an example provided. Can somebody provide the sample .cfg and .dat files required for its working?

    To read COMTRADE file in Labview there is an example provided. Can somebody provide the sample .cfg and .dat files required for its working?

    Thanks for the reply.
    But this library doesn't contain any sample .cfg and .dat files which can be read and understood. Can you please provide the same?
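    For what it is worth, a minimal hand-made pair roughly following the IEEE C37.111-1999 COMTRADE layout is sketched below. All values are invented for illustration, so check the field order against the standard (or the library's documentation) before relying on it.
    sample.cfg:
    DEMO STATION,DEV1,1999
    2,1A,1D
    1,VA,A,,V,0.1,0,0,-32768,32767,1,1,P
    1,BRKR,,,0
    60
    1
    1200,3
    01/01/2011,00:00:00.000000
    01/01/2011,00:00:00.000000
    ASCII
    1
    sample.dat (ASCII form: sample number, time in microseconds, analog values, then digital values):
    1,0,1205,0
    2,833,1198,0
    3,1667,1210,0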

  • MRP is conflicting with Sales order requirement Qty and date

    Hi experts,
    We have below scenario:
    Material has the stock 100. It has reorder point 3 and MRP type V2 (Automatic reorder point with external requirements) and planned delivery time 10 days.
    MRP run creates purchase requisition whenever the stock falls below reorder level or/and based on customer orders requirements.
    We created first sales order for Qty 80 and delivery date 03/01/2012.  Qty 80 is confirmed against this sales order.
    We created second sales order for Qty 50 and delivery date 01/31/2012. Now remaining Qty 20 is confirmed against this order. So we have requirements of 30 Qty for this second order.
    When we run MRP, it is supposed to create a purchase requisition for Qty 33 (30 + 3), including the reorder point (3 Qty). As the delivery date for this order is 01/31/2012, the requirement Qty is 30, and the planned delivery time is 10 days, the MRP run should create a purchase requisition. But it is not creating one.
    If I change the planned delivery time to 30 days, it creates the purchase requisition. It is considering the delivery date of the first sales order (03/01/2012), but not that of the second order (01/31/2012).
    So MRP is conflicting with Sales order requirement Qty and date.
    Appreciate your help in this regard to resolve the issue
    Thanks,
    Srini

    Hi,
    We are using manual reorder point planning with external requirements (V1)
    Thanks,
    Srini

  • Do iOS apps that are ONLINE translators (e.g. require Wifi or internet, such as Google Translate) store every translation you type in the Documents and Data files?

    I have several apps that are translators. Many of them are for offline use (especially the ones that cost money), but they keep eating away at my memory with every translation I type. What about the ONLINE translators, like GOOGLE TRANSLATE? Do they store the translations in the app's Documents and Data, or not at all? Please help me! I'm dying to know! Also, does Google Translate charge you money for unlimited translations, or is it entirely free?

    No no no, thank you. However, I learned just now that "flash storage" does not equal "Documents and Data", because I noticed that the Documents and Data for Google Translate increases very slightly with every translation (faster if speaking the translation). But this doesn't bother me at all. Why? Because the app size is oh so small (2.3 MB) and every translator (all 50+ of them) is included for free (how dope is that?) without having to download any additional memory for each language. So when the time comes to delete and redownload it due to a memory capacity shortage, it still requires only 2 MB to redownload. I bet that won't harm my device at all. Hey, I can live with that!
    Thanks for your support for my question, Deggie, kudos!!!

  • How can we get Dynamic columns and data with RTF Templates in BI Publisher

    How can we get dynamic columns and data with RTF templates?
    My requirement is :
    create table xxinv_item_pei_taginfo(item_id number,
    Organization_id number,
    item varchar2(4000),
    record_type varchar2(4000),
    record_value CLOB,
    State varchar2(4000));
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'USES','fever','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'HOW TO USE','one tablet daily','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'SIDE EFFECTS','XYZ','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'DRUG INTERACTION','ABC','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'OVERDOSE','Go and see doctor','TX');
    insert into xxinv_item_pei_taginfo values( 493991 ,224, '1265-D30', 'NOTES','Take after meal','TX');
    select * from xxinv_item_pei_taginfo;
    Item id Org Id Item Record_type Record_value State
    493991     224     1265-D30     USES     fever     TX
    493991     224     1265-D30     HOW TO USE     one tablet daily     TX
    493991     224     1265-D30     SIDE EFFECTS     XYZ     TX
    493991     224     1265-D30     DRUG INTERACTION     ABC     TX
    493991     224     1265-D30     OVERDOSE      Go and see doctor     TX
    493991     224     1265-D30     NOTES     Take after meal     TX
    Above is my data
    I have to fetch the record_type from a lookup which can contain any of the record types, sometimes USES, HOW TO USE, SIDE EFFECTS and sometimes some other set of record types.
    In my report I have to get these record types as field names dynamically, whichever are available in that lookup, with the record values against them.
    its a BI Publisher report.
    please suggest

    If you have the data in the DB, then you can create XML with the needed structure, and from that you can create the BIP report.
    Do you have errors, or .... ?
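    For illustration only, one way to shape the data before it reaches the RTF template is to pivot it in the database, sketched here against the table from your post (the hard-coded IN list is just an example; for a truly dynamic column set you would generate that list, or the XML element names, from your lookup):
    SELECT *
    FROM   (SELECT item_id, organization_id, item, state, record_type,
                   DBMS_LOB.SUBSTR(record_value, 4000, 1) AS record_value   -- CLOB to VARCHAR2 so it can be aggregated
            FROM   xxinv_item_pei_taginfo)
    PIVOT  (MAX(record_value)
            FOR record_type IN ('USES'             AS uses,
                                'HOW TO USE'       AS how_to_use,
                                'SIDE EFFECTS'     AS side_effects,
                                'DRUG INTERACTION' AS drug_interaction,
                                'OVERDOSE'         AS overdose,
                                'NOTES'            AS notes));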

  • Infoset Join condition on Key fields and data fields

    Hi Guys,
    I have a requirement to build the InfoSet with a join condition on two DSOs. The InfoObjects which I am using in the join condition are defined as data fields in one DSO and as key fields in the other DSO. Is it possible to define a join condition on key fields and data fields?
    The two info objects are                
                           0AC_DOC_NO
                           0ITEM_NUM
    These two InfoObjects are defined as  data fields in DSO :   0LIV_DS1   Invoice verification
                                                            key fields in DSO:    0FIAP_0o3 FI AP Line Item
    Please suggest whether it is possible to define the join condition on the data fields and key fields.
    Thanks
    Best regards
    SG

    Hi
    Yes, you can create the join; you should not face any issue at the reporting level.
    Example: say I want to create an InfoSet on 0MATERIAL and a Sales DSO.
    In the 0MATERIAL InfoObject it is a key field, but in my DSO 0MATERIAL is a data field. Still, we can create the join.
    Creation of the join depends on the fields common to your source objects.
    check out the below document
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2f5aa43f-0c01-0010-a990-9641d3d4eef7?QuickLink=index&overridelayout=true
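    Conceptually, the InfoSet produces an inner join on the two common InfoObjects regardless of whether they are key fields or data fields in the underlying DSOs; in SQL terms it behaves roughly like the sketch below (the table and measure names are made up for the illustration):
    SELECT ap.AC_DOC_NO,
           ap.ITEM_NUM,
           ap.FI_AMOUNT,                 -- from 0FIAP_0o3, where the two InfoObjects are key fields
           iv.IV_AMOUNT                  -- from 0LIV_DS1, where they are only data fields
    FROM   FIAP_ACTIVE_TABLE AS ap
           JOIN LIV_ACTIVE_TABLE AS iv
             ON  ap.AC_DOC_NO = iv.AC_DOC_NO
             AND ap.ITEM_NUM  = iv.ITEM_NUM;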
    Regards,
    Venkatesh
    Edited by: Venkateswarlu Nandimandalam on Sep 27, 2011 2:26 AM

  • Qty and Date Variance using Purchasing InfoCube

    Using Purchasing Cube, I want to find out the Qty Variance and Date variance,
    I'm calculating these using GR Posting Date and Requested Delivery Date, and GR Qty and Qty in UoM.
    I have the following doubts:
    1.     In order to calculate the variance, do we need to filter the data pertaining to any particular BW Transaction Key (0PROCESSKEY), since we don't want Goods Issues (which can be Purchase Returns or intra-company Stock Transfers)? If so, what are the values of the Process Key to be considered for the filter?
    2.     Though the Qty key figure matches the Requested Qty, the PO status is 'Open'. What could be the possible reasons?
    3.     The qty and days variance for the POs already received can be calculated. But what should be done in case of POs for which goods are not yet received? Again in that scenario, we can have two cases: Delivery Past Due and Delivery Date in the Future. How do we tackle these 2 cases? As per our requirement, we have to calculate the difference between the system date and the Requested Delivery Date in case of a past delivery date, so that we have the variance as of the system date. For example, if the delivery date is 25th Nov and we run the query on the 28th, it should show 3 days late, and if run on the 29th, it has to be 4 days if still undelivered. Please give me the code to be written in the transformations.
    We have not yet decided what has to be done in the second case i.e. Future Delivery Date. Please guide me what is the best option in that case.
    Thanks,
    Aryaman

    Hi,
    1. In order to calculate the variance, do we need to filter the data pertaining to any particular BW Transaction Key (0PROCESSKEY), since we don't want Goods Issues (which can be Purchase Returns or intra-company Stock Transfers)? If so, what are the values of the Process Key to be considered for the filter?
    You need to add Z key figures and then write routines based on Process Keys in the update rules/transformations, filtering on the required Process Keys. You can find the required Process Keys in ECC: go to SPRO --> Reference IMG and then search for BW Transaction Key; discuss it with the MM functional consultant and identify them.
    2. Though the Qty key figure matches the Requested Qty, the PO status is 'Open'. What could be the possible reasons?
    I faced this problem with the 2LIS_02_* datasources, so I created a new generic data source and used it. Just take one record and track it in ECC. You must take the MM consultants' help; they will guide you.
    3. The qty and days variance for the POs already received can be calculated. But what should be done in case of POs for which goods are not yet received? Again in that scenario, we can have two cases: Delivery Past Due and Delivery Date in the Future. How do we tackle these 2 cases? As per our requirement, we have to calculate the difference between the system date and the Requested Delivery Date in case of a past delivery date, so that we have the variance as of the system date. For example, if the delivery date is 25th Nov and we run the query on the 28th, it should show 3 days late, and if run on the 29th, it has to be 4 days if still undelivered. Please give me the code to be written in the transformations.
    We have not yet decided what has to be done in the second case, i.e. a future delivery date. Please guide me on the best option in that case.
    Once you have the Qty and dates from the transactional data, it is easy to calculate the date variance and qty variance. See the following URL; it has an article, How to Calculate Material Aging, which shows how to calculate the difference.
    http://wiki.sdn.sap.com/wiki/display/profile/Surendra+Reddy
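    Just to pin down the date arithmetic described above, here is a sketch in SQL with invented column names (in BW itself this would be written as an ABAP routine in the transformation):
    SELECT PO_NUMBER,
           GR_QTY - REQUESTED_QTY AS QTY_VARIANCE,
           CASE
             WHEN GR_POSTING_DATE IS NOT NULL                          -- goods already received
               THEN DATEDIFF(DAY, REQUESTED_DELIVERY_DATE, GR_POSTING_DATE)
             WHEN REQUESTED_DELIVERY_DATE < CAST(GETDATE() AS DATE)    -- past due and still open: variance as of today
               THEN DATEDIFF(DAY, REQUESTED_DELIVERY_DATE, GETDATE())
             ELSE NULL                                                 -- delivery date in the future: still to be decided
           END AS DAYS_VARIANCE
    FROM   PURCHASING_FACTS;
    With a requested delivery date of 25th Nov, running this on the 28th gives 3 days and on the 29th gives 4 days, matching the requirement.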
    Thanks
    Reddy

  • Difference b/w DATA TYPE and DATA OBJECT & differences b/w TYPE and LIKE

    Hi,
    can anyone explain the differences between a data type and a data object,
    and also the differences between TYPE and LIKE?
    thanks
    Gani

    hi,
    _Data Types and Data Objects_
          Programs work with local program data – that is, with byte sequences in the working memory. Byte sequences that belong together are called fields and are characterized by a length, an identity (name), and – as a further attribute – by a data type. All programming languages have a concept that describes how the contents of a field are interpreted according to the data type.
          In the ABAP type concept, fields are called data objects. Each data object is thus an instance of an abstract data type. There are separate name spaces for data objects and data types. This means that a name can be the name of a data object as well as the name of a data type simultaneously.
    Data Types
       As well as occurring as attributes of a data object, data types can also be defined independently. You can then use them later on in conjunction with a data object. The definition of a user-defined data type is based on a set of predefined elementary data types. You can define data types either locally in the declaration part of a program using the TYPES statement, or globally in the ABAP Dictionary. You can use your own data types to declare data objects or to check the types of parameters in generic operations.
         All programming languages distinguish between various types of data with various uses, such as ….. type data for storing or displaying values and numerical data for calculations. The attributes in question are described using data types. You can define, for example, how data is stored in the repository, and how the ABAP statements work with the data.
    Data types can be divided into elementary, reference, and complex types.
    a. Elementary Types
    These are data types of fixed or variable length that are not made up of other types.
    The difference between variable length data types and fixed length data types is that the length and the memory space required by data objects of variable length data types can change dynamically during runtime, and that these data types cannot be defined irreversibly while the data object is being declared.
    Predefined and User-Defined Elementary Data Types
    You can also define your own elementary data types in ABAP using the TYPES statement. You base these on the predefined data types. This determines all of the technical attributes of the new data type. For example, you could define a data type P_2 with two decimal places, based on the predefined data type P. You could then use this new type in your data declarations.
    b.  Reference Types
    Reference types are deep data types that describe reference variables, that is, data objects that contain references. A reference variable can be defined as a component of a complex data object such as a structure or internal table as well as a single field.
    c. Complex Data Types
    Complex data types are made up of other data types. A distinction is made here between structured types and table types.
    Data Objects
          Data objects are the physical units with which ABAP statements work at runtime. The contents of a data object occupy memory space in the program. ABAP statements access these contents by addressing the name of the data object and interpret them according to the data type.. For example, statements can write the contents of data objects in lists or in the database, they can pass them to and receive them from routines, they can change them by assigning new values, and they can compare them in logical expressions.
           Each ABAP data object has a set of technical attributes, which are fully defined at all times when an ABAP program is running (field length, number of decimal places, and data type). You declare data objects either statically in the declaration part of an ABAP program (the most important statement for this is DATA), or dynamically at runtime (for example, when you call procedures). As well as fields in the memory area of the program, the program also treats literals like data objects.
            A data object is a part of the repository whose content can be addressed and interpreted by the program. All data objects must be declared in the ABAP program and are not persistent, meaning that they only exist while the program is being executed. Before you can process persistent data (such as data from a database table or from a sequential file), you must read it into data objects first. Conversely, if you want to retain the contents of a data object beyond the end of the program, you must save it in a persistent form.
    Declaring Data Objects
          Apart from the interface parameters of procedures, you declare all of the data objects in an ABAP program or procedure in its declaration part. These declarative statements establish the data type of the object, along with any missing technical attributes. This takes place before the program is actually executed. The technical attributes can then be queried while the program is running.
         The interface parameters of procedures are generated as local data objects, but only when the procedure is actually called. You can define the technical attributes of the interface parameters in the procedure itself. If you do not, they adopt the attributes of the parameters from which they receive their values.
    ABAP contains the following kinds of data objects:
    a.  Literals
    Literals are not created by declarative statements. Instead, they exist in the program source code. Like all data objects, they have fixed technical attributes (field length, number of decimal places, data type), but no name. They are therefore referred to as unnamed data objects.
    b.  Named Data Objects
    Data objects that have a name that you can use to address the ABAP program are known as named objects. These can be objects of various types, including text symbols, variables and constants.
    Text symbols are pointers to texts in the text pool of the ABAP program. When the program starts, the corresponding data objects are generated from the texts stored in the text pool. They can be addressed using the name of the text symbol.
    Variables are data objects whose contents can be changed using ABAP statements. You declare variables using the DATA, CLASS-DATA, STATICS, PARAMETERS, SELECT-OPTIONS, and RANGES statements.
    Constants are data objects whose contents cannot be changed. You declare constants using the CONSTANTS statement.
    c.  Anonymous Data  Objects
    Data objects that cannot be addressed using a name are known as anonymous data objects. They are created using the CREATE DATA statement and can be addressed using reference variables.
    d.  System-Defined Data Objects
    System-defined data objects do not have to be declared explicitly - they are always available at runtime.
    e.  Interface Work Areas
    Interface work areas are special variables that serve as interfaces between programs, screens, and logical databases. You declare interface work areas using the TABLES and NODES statements.
    What is the difference between Type and Like?
    Answer1:
    With TYPE, you assign a data type directly to the data object while declaring it.
    With LIKE, you assign the data type of another object to the data object being declared; the data type is referenced indirectly.
    Answer2:
    Type is a keyword used to refer to a data type whereas Like is a keyword used to copy the existing properties of already existing data object.
    Answer3:
    TYPE refers to an existing data type,
    LIKE refers to an existing data object.
    reward if useful
    thanks and regards
    suma sailaja pvn

  • Report, Form, and Data Component portlets go to error page

    I am able to add Report, Form, and Data Component portlets to my page, but when I click on the Define button it takes me to my error page.
    Have I missed configuring something?

    Hello friends,
    thanks for the reply.
    I am using a command node after the table node in the main window, but I want to display the data below the main window. It is displaying a new page as required, but the data below the main window is missing.
    To be clearer, I want to print all the windows on the first page and only then go to the next page. Is there any condition or parameter in the main window to go to the next page only after the first page is complete?
    Don't worry about points.

  • Woes of the Z10, and now I want to use it as a backup phone for my contacts and data. Can I?

    Ok, this has been a real nightmare. I am an extensive user of BB since OS4. Wanted to try the new Z10, so I got myself a spanking new one. With my prior experience in switching devices, it would be a cinch. Transfer device over via blackberry desktop software, change the emails via BIS, and I'm all sorted out. 
    OMG dear blackberry. You made me jump through so many hoops I felt like a circus dog.
    BlackBerry Desktop doesn't work with the Z10. No one told me. I had to find out the hard way after not being able to detect the phone. Wasted half an hour of my life. Downloaded BlackBerry Link. Transferred my contacts and everything over, couldn't be happier.
    Hurrah, thought the nightmare was done. It was just starting.
    Micro sim. Not everyone has a microsim cutter lying around. Great. Troop off to the shop. Figured after I get the sim cut, I could also transfer the emails via BIS server. So that's what I did. (Oh, by the way, having to buy a micro SD card was also a pain in the A but still acceptable). The BIS server detected the change in device so it disabled my emails. No problem. Until the technician told me at the store: sorry, BIS DOESN'T WORK WITH Z10. Whaaaaat? Come on blackberry! Now it's just normal emails, similar to that of iphone and samsung. Emails were your best feature. Ok, this I can stomach. What I cannot stomach is... BlackBerry, you just killed your frequent traveller who needs international roaming. BIS emails and BBM were wonderful. I could get by travelling from country to country on 50mb a month unlimited. Apparently not anymore. I need a full-fledged international roaming plan that will cost me an arm and a leg. This is insane.
    Finally I had the Z10 working. I manually entered all my emails. ( i have 9 emails. Thank god they are all gmail based, especially for work, if not I would have ripped all my hair out). For the whole 2 hours, I looked at it and hated it. It was hard to use, I had to keep swiping up and to the right, sometimes it wouldn't swipe well, sometimes I couldn't tell if it was swiping or not. Neither did it help that Whatsapp hasn't even been sorted out on the Z10. Whatsapp is my life. My family's on it. My friends are on it. My suppliers and my customers are on it. Oh wait. Z10-well, you can download it from the appworld, but it doesn't install. Thinking it was a problem with the downloads, I kept at it..only to have the technician tell me, "sorry, Whatsapp not working on Z10, too bad"...anyway I made up my mind to go back to the Torch. So thinking it would be as simple as changing the hub ID on the Z10 and then reactivating the Torch. All will be well. Wrong. 
    I had to run to another tel shop to beg for a spare sim shell so I could stick the micro sim into it and have it back as a normal sim. Another 2 hours of my life later, I realise-changing the hub ID on the Z10 doesn't change anything. It doesn't mean you can just use the original hub Id and all systems are go on the first phone. Changing the hub ID means changing the original one. Argh. This was so frustrating especially when the BIS server refused to switch back to the old device. Ever clicked on detect device 200 times? Well that would be me. Eventually I figured it out. Change back to the old hub ID, security wipe the horrible thing, log in with the original hub ID and reset the device via BIS server. Finally, order is restored.
    Which leads me to my question. I am thinking of porting over my contact list and data to the Z10, and using it as another phone in another country. However I want to use a different Id from my Torch since it is considered a different device but with the same data. Will blackberry link allow me to do this, or will it only switch the data provided the hub ID is the same?
    Blackberry, hand over heart, I'll use the Torch till it dies ( I just changed out the nav pad cause it was completely worn out and would act wonky in sunlight- I call it the blackberry vampire version) and after that I'm off to another smart phone platform. You've made my life miserable. It was great while it lasted, but I'm going to be migrating to another platform eventually. Thanks for the memories.

    Hello,
    BlackBerry 10 has always been advertised as a revolutionary change from prior...not an evolution. And you have both now learned the hard way just how detailed that change is. As with so many things, the assumptions made from past experience with BB are not useful for this very significant change. Rather, research done prior to purchase is the most useful activity one can undertake.
    For BB10, BIS is not required. Indeed, on some carriers, having BIS on the account actually impedes some services for BB10. BB10 uses a full on-device ActiveSync client for email now. BIS-handled email is gone...as are "hosted" email accounts (though you can get 1-year of forwarding).
    Migrating of data from old to new involves many choices before beginning. For example, if you are not afraid of placing your Calendar and Contact data "into the cloud", the Z10 can excellently synchronize that against many different providers (many more than BIS ever could). Done correctly, one can attain full end-to-end OTA synchronization of Calendar/Contact data between Outlook desktop client (and perhaps others, though I've not tested), a Z10, a PlayBook, and more (for me, I've got 2 Outlook client instances, one Z10, one PB, and one Droid...all working beautifully OTA).
    Traveling is no more challenging than in the past...your specific device must be able to function on the networks upon which you wish to roam...this is no different from before (e.g., a GSM-only device simply cannot roam on a CDMA network). To compare specifics, I've found this useful:
    http://www.bbin.in/en/2013/03/03/4-types-blackberry-z10-models-fits-better/?utm_campaign=4-types-of-...
    There are many other differences...but please try to let go of what was before, for pretty much none of it applies anymore. BB10 is revolutionarily different from before.
    Good luck!
    Occam's Razor nearly always applies when troubleshooting technology issues!
    If anyone has been helpful to you, please show your appreciation by clicking the button inside of their post. Please click here and read, along with the threads to which it links, for helpful information to guide you as you proceed. I always recommend that you treat your BlackBerry like any other computing device, including using a regular backup schedule...click here for an article with instructions.

  • Need MDX query to find something like date diff and Date Range for last 10 days

    Hi,
    I need two queries. The first query is for the below:
    I have the below data in a table:
    Cat      StartDate    EndDate     
    A    2000-01-01     2000-01-15   
    B    2000-01-02     2000-01-30    
    C    2000-01-01     2000-01-31    
    D    2000-02-01     2000-02-28    
    A    2000-01-10     2000-01-31    
    I need that, if the StartDate and EndDate cover a whole month, the status is set to 1, else 0, using an MDX query,
    like this:
    Cat      StartDate    EndDate       Status
    A    2000-01-01     2000-01-15    1
    B    2000-01-02     2000-01-30    0
    C    2000-01-01     2000-01-31    1
    D    2000-02-01     2000-02-28    1
    A    2000-01-10     2000-01-31    1
    In the second query I need the last 10 days from the current day, like:
    Now = 8/20/2014
    output will be ;
    8/20/2014
    8/19/2014
    8/18/2014
    8/17/2014
    8/16/2014
    8/15/2014
    8/14/2014
    8/13/2014
    8/12/2014
    8/11/2014
    8/10/2014
    Please help me.
    Thanks

    Hi Prajapati,
    In your scenario, you can use the Properties and DateDiff functions to achieve your requirement. Since we do not know the structure of your cube, we cannot give you the exact query.
    I have tested it on the AdventureWorks cube; the query below is for your reference.
    WITH MEMBER [Measures].[StartDate]
    AS
    [Employee].[Employee Department].CURRENTMEMBER.PROPERTIES('Start Date')
    MEMBER [Measures].[WorkYear]
    AS
    DATEDIFF('yyyy',[Measures].[StartDate],NOW())
    MEMBER [Measures].[Status]
    AS
    IIF(DATEDIFF('yyyy',[Measures].[StartDate],NOW())>10,1,0)
    SELECT {[Measures].[StartDate],[Measures].[WorkYear],[Measures].[Status]} ON 0,
    [Employee].[Employee Department].[Employee].MEMBERS ON 1
    FROM [Adventure Works]
    Results
    Reference
    http://msdn.microsoft.com/en-us/library/ms144821.aspx
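    For the second requirement (the last 10 days counted back from the current day), if the same dates are also reachable relationally, a plain T-SQL sketch would be the following (illustrative only; the whole-month status flag is not sketched because it depends on whether rows of the same Cat should be combined):
    SELECT DATEADD(DAY, -d.n, CAST(GETDATE() AS DATE)) AS DayValue
    FROM  (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS d(n)
    ORDER BY DayValue DESC;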
    Regards,
    Charlie Liao
    TechNet Community Support
