How to implement a ruler?

Hi, I am trying to figure out how to implement a ruler like the one in the online TLF demo, http://labs.adobe.com/technologies/textlayout/demos/.
Could you let me know what would be the best way to do this? And also, is it possible to get the source of this demo?
Thanks in advance.
Luke.

We have a SimpleEditor example project, which is useful if you are starting on a user interface, but it doesn't include a ruler. Unfortunately the code for the online editor UI is something we can't release at present. I can say that it is possible to do it all in Flex. I think one of the tricky parts is likely to be tracking the tab when it gets dragged; a rough sketch of that pattern follows.
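To make the drag-tracking part concrete, here is a minimal sketch of the usual pattern (press near the marker, update on drag, commit on release). It is written as a small Java Swing component purely for illustration, since the Flex sources are not available; the class name and the 5-pixel hit zone are invented for the example, and the same listener pattern maps onto Flex mouse events.

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

// Minimal ruler sketch: tick marks plus one draggable tab-stop marker.
public class RulerSketch extends JComponent {
    private int tabX = 80;            // current tab-stop position, in pixels
    private boolean dragging = false;

    public RulerSketch() {
        MouseAdapter handler = new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                dragging = Math.abs(e.getX() - tabX) < 5;     // grab only near the marker
            }
            @Override public void mouseDragged(MouseEvent e) {
                if (dragging) { tabX = e.getX(); repaint(); } // track the drag
            }
            @Override public void mouseReleased(MouseEvent e) {
                if (dragging) { dragging = false; /* commit the new tab stop here */ }
            }
        };
        addMouseListener(handler);
        addMouseMotionListener(handler);
    }

    @Override protected void paintComponent(Graphics g) {
        for (int x = 0; x < getWidth(); x += 10)              // tick marks
            g.drawLine(x, 0, x, x % 50 == 0 ? 10 : 5);
        g.fillPolygon(new int[]{tabX - 4, tabX + 4, tabX},    // triangular tab marker
                      new int[]{12, 12, 20}, 3);
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Ruler sketch");
        frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
        frame.add(new RulerSketch());
        frame.setSize(400, 80);
        frame.setVisible(true);
    }
}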
Sorry!
- robin

Similar Messages

  • How to implement business rules using Drools in OSB

    Hi,
    I am new to Drools. Can anybody tell me how to implement Drools in OSB 11, and provide any useful links or blogs?
    Thanks in advance,
    Mani

    Mani,
    I have implemented Drools by exposing the rules over a web service call through an OSB Business Service.
    As you are using a Java Callout, be sure to set a proper return type for the Java method being called; it is better to use XmlObjects as the return type. A sketch of that callout shape is included below the links.
    http://www.xenta.nl/blog/2011/08/29/oracle-service-bus-java-callouts-with-xmlobjects/
    http://mazanatti.info/index.php?/archives/63-Oracle-Service-Bus-generating-XML-Objects-from-Java-Callouts.html
    http://blog.xebia.com/2009/10/11/java-callout-on-the-alsb/
    http://itnewscast.com/middleware/oracle-service-bus-java-callouts-xmlobjects
    See also this thread: How to retrieve the java object in a proxy service in osb
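    As a rough sketch of that Java Callout shape (the class, method name and result payload below are invented for illustration, and the Drools session wiring is elided):

    import org.apache.xmlbeans.XmlException;
    import org.apache.xmlbeans.XmlObject;

    // OSB Java Callouts are invoked as public static methods.
    public class DroolsCallout {

        // Receives the request payload as an XmlObject and returns the rule
        // outcome as an XmlObject, which OSB can bind directly in the flow.
        public static XmlObject evaluateRules(XmlObject request) throws XmlException {
            // ...extract facts from 'request', insert them into a Drools
            // session, fire the rules, then serialize the outcome...
            String resultXml = "<result><status>APPROVED</status></result>"; // placeholder
            return XmlObject.Factory.parse(resultXml);
        }
    }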
    Hope it helps !!
    Abhinav

  • ADF BC: how to implement business rules?

    Hi.
    I am new to Oracle technology. Please help me understand how to implement business rules.
    For example: I have two tables, operation(id, code) and operation_item(id, operation_id, code). I created entity objects for them (OperationEntity, OperationItemEntity) and an association (OperationItemToOperation). Then I created an entity-based view object, OperationById, which has a bind parameter id for finding an operation by id. I also created an entity-based view object OperationItem and a view link OperationItemToOperationLink.
    My ApplicationModule contains OperationById and OperationItem via OperationItemToOperationLink.
    I want to implement a "clone" method on the OperationById view for the current row. This method should copy the selected row from the operation table and copy all rows from the operation_item table where operation_item.operation_id = operation.id.
    I added a method createCurrentRowClone to OperationById:
    public void createCurrentRowClone() {
        OperationByIdRowImpl currentRow = (OperationByIdRowImpl) getCurrentRow();
        OperationByIdRowImpl newRow = (OperationByIdRowImpl) createRow();
        insertRow(newRow);
        newRow.setAttributes(currentRow);
        // TODO: create copies of all rows from OperationItem
        // ??? how do I get the OperationItem instance here?
    }
    and I added a createClone method to OperationItem:
    public void createClone() {
        // for every row from OperationItem, create a clone
    }
    but I don't understand how to get the OperationItem instance in OperationById scope. I have a method getOperationItems() in the OperationByIdRowImpl class which returns a RowIterator, and it gives me instances of the OperationItemRowImpl class, not the OperationItem view instance with the rows where operation_item.operation_id = operation.id.
    Thanks in advance

    To realize a "clone" method which copies the current row from the OPERATION table and all rows from the OPERATION_ITEM table where operation_item.operation_id = operation.id, I moved the business logic from the view object to the entity layer.
    For this it is enough to have:
    1) entity objects: OperationEntity, OperationItemEntity;
    2) an association for these entities: OperationItemToOperation;
    3) an entity-based view object: OperationById, which has a bind parameter "id" to find an operation by ID;
    4) an application module: ApplicationModule.
    I added a method createCurrentRowClone to the OperationByIdImpl class. It just calls createClone on the current row:
    public void createCurrentRowClone() {
        OperationByIdRowImpl currentRow = (OperationByIdRowImpl) getCurrentRow(); // note: cast to the row impl, not the view object
        currentRow.createClone();
    }
    The createClone method in the OperationByIdRowImpl class calls the createClone method in the entity layer:
    public void createClone() {
        getOperationEntity().createClone();
    }
    The createClone method in the OperationEntityImpl class creates a new row and calls createClone for every row from OperationItemEntity:
    public void createClone() {
        OperationEntityImpl newRow = (OperationEntityImpl) getDefinitionObject().createInstance2(getDBTransaction(), null);
        newRow.setCode(getCode());
        // create a copy of every operation_item row
        RowIterator itemIterator = getOperationItemEntities();
        while (itemIterator.hasNext()) {
            OperationItemEntityImpl itemRow = (OperationItemEntityImpl) itemIterator.next();
            itemRow.createClone(newRow);
        }
    }
    and finally the createClone(OperationEntityImpl operation) method in the OperationItemEntityImpl class creates the new row in the OPERATION_ITEM table:
    public void createClone(OperationEntityImpl operation) {
        OperationItemEntityImpl newItem = (OperationItemEntityImpl) getDefinitionObject().createInstance2(getDBTransaction(), operation);
        newItem.setCode(getCode());
    }
    If you have another solution, please post it here.

  • How to implement a conversion rule in KC7R (External Data Transfer tool)

    Hi,
    I am familiar with programming conversion rules in ABAP in the LSMW and the IS-U Migration Workbench.
    Unfortunately we have to work with EDT (transaction KCLJ).
    I need to implement a simple conversion rule: the migration file has the external BP number BPEXT, and I must retrieve the internal one (PARTNER) for the migration program.
    So I must implement a short piece of ABAP (select single * from BUT000 ...).
    Now I see that in KC7R you can use constants and conversion rules. There are also "General Rules" and "conversion routines".
    Can someone explain how to implement this simple ABAP conversion rule in KC7R, giving a short example?
    That would be great.
    EDT is the worst of all the standard SAP migration tools I have seen...
    Kind regards,
    Thomas

    Thomas,
    Could you please share your findings, as I am now in a situation similar to your original post?
    That is, I too would like to know how to code conversions for use within the transfer rules used by KCLJ.
    Kind regards,
    Hiten Mistry.

  • How to implement simple CEV rules?

    Hi,
    I have some doubts about how to implement very simple CEV rules (e.g. when enddate is filled, then determinationdate is sysdate).
    When I follow the CDM guidelines, I have to create a custom routine and make a derivation expression for the column.
    When I use common sense, I do an update in the BR implementation, which saves a lot of work.
    What is the opinion of the Headstart team?
    Regards, Jan-Derk

    Hi Kathyaini,
    To use an XML connection as a data source, configure the following elements as described below:
    • <JavaDir>
    • <Classpath>
    • <JDBCURL>
    • <JDBCUserName>
    • <JDBCClassName>
    You will be prompted for the class name and the connection URL; both depend on the database you use.
    They are different for each database (e.g. Oracle, SQL Server, DB2), so follow the configuration process for your particular database.
    JDBCURL - The JDBCURL parameter value is the default JDBC connection URL that will be displayed in Crystal Reports when you create a new JDBC data connection. The exact format of the connection URL is specific to the database driver and is provided by the database driver vendor.
    For example, the connection URL for the Oracle JDBC driver is:
    jdbc:oracle:thin:@<hostname>:<port>:<sid>
    Sample for SQL Server:
    jdbc:microsoft:sqlserver://SERVERNAME:1433
    Sample for DB2:
    jdbc:db2:XTREME
    JDBCUserName - The default username for the database.
    JDBCClassName - The default full class name of the JDBC driver that will be displayed in Crystal Reports when creating a new JDBC data connection. For example:
    Oracle: oracle.jdbc.driver.OracleDriver
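    To sanity-check a class name and URL before putting them into the configuration, a small stand-alone test such as the following can help (host, port, SID and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // Same value as <JDBCClassName>
            Class.forName("oracle.jdbc.driver.OracleDriver");
            // Same value as <JDBCURL>, plus the username and password
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password")) {
                System.out.println("Connected: " + !con.isClosed());
            }
        }
    }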
    Let me know if you have any questions.
    Regards,
    Naveen.

  • How to implement a Dynamic Modification Rule for a material and multiple-vendor combination

    Hi Team,
    In the dynamic modification rule for the material & vendor combination, I have maintained material & vendor in the inspection plan under Material Assignments.
    1. If I purchase the same material from different vendors, how do I implement this?

    Hi Balaji,
    I am not sure I have followed your requirement.
    What do you want to do with vendor Z? Do you want to inspect each lot from Z?
    If yes:
    Then create 2 inspection plans with the same operations and the same MICs. In Material Assignment, mention vendor Z, so that the 1st inspection plan applies only to vendor Z. Do not enter a DMR in the header.
    In the 2nd inspection plan, enter the DMR and do not mention any vendor, so that this plan applies to all vendors except Z.
    If no:
    Then create a QM info record for vendor Z with "No inspection" set in QI06.
    Amol.

  • TREX Error while implementing a Rule Modeler in ERMS

    Hi Experts,
    I am getting the error below when implementing a rule in ERMS. Please suggest how to rectify it.
    Exception in method execute(CL_CRM_ERMS_SERVICE_MANAGER) : Service CL_CRM_ERMS_ADD2FB_CA failed. TREX error:Error in the RFC communication: Error when opening an RFC connection (IMSDEFAULT)[error code =] at: CL_CRM_ERMS_ADD2FB_CA=========CP : CL_CRM_ERMS_
    Processing policy ZTEST_RULE1
    Exception in method execute(CL_CRM_ERMS_SERVICE_MANAGER) : Service CL_CRM_ERMS_RULE_EXEC failed. Policy not found in repository:ERMS:ZTEST_RULE1 at: CL_CRM_ERMS_RULE_REPOSITORY===CP : CL_CRM_ERMS_RULE_REPOSITORY===CM005 : 44
    BOR: returned from Service Manager.
    Regards,
    Niloufer

    Hi Niloufer,
    Have you regenerated your repository after making changes?
    If not, go to SA38 and run the program CRM_ERMS_REGEN_RULES for the ERMS context for your policy (check the relevant checkboxes).
    Now re-run your scenario. Hopefully this helps.
    Thanks,
    Andrew G.

  • How to create a rule with an action to remove events from the event log of the IPS Manager Express console?

    How do I create a rule with an action to remove events from the event log of the IPS Manager Express console? Does anyone know of a guide?
    Thank you.
    Sent from Cisco Technical Support iPad App

    Hi,
    http://www.cisco.com/en/US/products/sw/secursw/ps2113/products_tech_note09186a0080bc7910.shtml
    HTH
    Luis Silva
    "If you need PDI (Planning, Design, Implement) assistance feel free to reach us"
    http://www.cisco.com/web/partners/tools/pdihd.html

  • How to implement FI-CA module in BI

    Hi Experts,
    I am doing a SAP BI project on the FI-CA module. I have no idea about this module.
    Please guide me on how to implement this module.
    Thanks in advance... you will surely get points.
    regards
    ABHI

    Abhishek,
    Forum rules clearly state that basic searches should be performed before posting a question, and then post only questions about specific issues.  These forums are unfortunately ill-suited for teaching a novice how to implement a complex SAP module.
    Standard SAP Help FI-CA
    http://help.sap.com/erp2005_ehp_06/helpdata/EN/73/834f3e58717937e10000000a114084/frameset.htm
    Available training
    https://training3.sap.com/us/en/course/ac240-contract-accounts-receivable-and-payable-classroom-095-us-en/
    The BPE General forum does not deal with the topic you mention.  You will have better luck posting to an FI or BW related forum. 
    Basic FI Forum
    SAP ERP Financials
    Basic BW Forum
    SAP Business Warehouse
    Best Regards & Good Luck,
    DB49

  • How to set Legislation rule for Enhanced Retro

    How do I set the legislation rule for Enhanced Retro?
    How can I set up Enhanced Retro for Oracle HR International?
    I am implementing HR/Payroll on the latest 12i, using HR International.
    When I execute the following query, there are no records:
    select * from pay_legislation_rules
    where rule_type like '%ADVANCED%RETRO%'
    and legislation_code = 'IQ'
    I have created events and event groups, enabled dynamic triggers for my business group and legislation (IQ), and entered the corresponding event group and retro element on the Element screen (retro component).
    I then updated one allowance in a back period, and found that one row was inserted into the pay_process_events table. But I am not able to run the Enhanced Retro Notification report, and I cannot find any Enhanced Retro concurrent request.
    Does anyone have any idea?
    Thanks,

    There are no specific concurrent programs for Enhanced/Advanced RetroPay. The old ones still run, and derive the retropay method under the hood.
    I do not recall by heart the setting that activates enhanced retro (but it is a legislation rule); however, it should already show in the element type screen. If you have the Retro tab in that screen, and you first have to specify the method (Adjustment, ...) before specifying the retro element type (and time span), then you know that enhanced retro is configured correctly.

  • How do I "Apply Rules" to a list of messages via AppleScript?

    I am attempting to write an AppleScript. It needs to enable a particular rule (I have that figured out), then re-apply all rules (all of them) to all of the messages in the Inbox and the Sent mailbox. I figured out how to get the list of messages into a variable, but I do not see anything in the Mail dictionary for applying the rules (or evaluating them, or anything similar).
    I suppose I could use the GUI-style "System Events" form of scripting, but that is always really hard for me to figure out.

    Amann says about this script:
    Note: this script is somewhat ugly and slower than Mail's internal apply rules command - things would be so much easier if Mail would allow rules to be applied to sent messages or at least supply an AppleScript command "apply rules to selection".
    If you examine it, it does pretty much what I suggested regarding grabbing the conditions for each rule & testing for them within the script (in the "matchConditions" loop). As his notes make clear, certain conditions are excluded & the script only performs "move" actions, so you would have to do something similar for any other rule conditions & actions you want the script to perform, as well as targeting the mailbox you want.
    I suspect it would be faster & easier in a lot of cases just to implement the rules as AppleScripts in the first place & call them with a simple "Every Message" Mail rule.

  • How to implement OSS note 207260

    Hi All,
    Can anybody please let me know how to implement OSS note 207260?
    I need to implement note 207260, which concerns settlement rules with an incorrect last-used period. The note says to run ZMIGCOBR to repair the incorrect settlement rules for the affected objects, but no such program is supplied in the note.
    Please let me know how to implement OSS notes in general, since I have never done this before.
    Thanks in advance.
    Regards,
    Vishal

    Check the document below:
    SAP R/3 Document: Using Transaction SNOTE
    Implementing OSS Notes Using the Transaction Code SNOTE
    In this document we will see how to implement an OSS note using the SNOTE transaction code.
    The transaction code SNOTE is used to implement OSS notes. With SNOTE it is no longer necessary to register ABAP objects such as reports and function modules manually. However, data dictionary objects such as screens and tables still need to be modified manually by registering them in the OSS system.
    In order to use the transaction code SNOTE, the relevant transport which implements this transaction needs to be imported into the system. Please refer to the SNOTE guide available at http://service.sap.com
    Execute the transaction code SNOTE.
    One of the first steps is to upload the note into your system.
    Follow Goto → SAP Note Download.
    You can also use SAP Note Upload if the note has been saved on your local machine.
    In the dialog box that appears, enter the note number; we will take note 388732 as an example. Click on the Execute icon, and the note number will be displayed in the list of OSS notes.
    In the above screen, we have two notes listed.
    By double-clicking on the note number you can display the OSS note.
    You can also check the status of the note by clicking on the Check SAP Note icon.
    Select the note and click on the Check SAP Note icon. In this case we have selected note number 388732.
    If the note has not already been implemented as part of some support package at the support package level of your system, you will get the following pop-up box.
    Select the note number that you want to implement and go to Edit → Select/Deselect Node.
    To implement the OSS note, click on the Implement OSS Note icon.
    You will get a confirmation box like the following.
    Click on Yes.
    Click on the Continue icon.
    You will be prompted to enter a change request number.
    Click on the Create Request icon.
    Enter a brief description for the note and click on the Continue icon.
    A change request number is created for the change that you are making.
    Click on the Continue icon.
    The system will display the objects that are going to be modified during the note application.
    In our case, the reports LTXW0F10 and RTXWCHK1 are going to be modified.
    Click on the Continue icon.
    For a while the system will show the note in the In Process status.
    After a while, select the note and click on the Check SAP Note icon on the application toolbar.
    The status of the note is still displayed under the In Process category.
    You can also check the status of the note by checking the meaning of the icon next to the note number.
    To check the legend, follow Utilities → Color Legend.
    As you can see from the legend box, the icon shown means Implemented Correctly.
    Since the note is implemented, you can manually change the status of the note from In Process to Completed.
    Select the note and click on the Set Processing Status icon.
    In the pop-up box that appears, select the Completed radio button and click on the Continue icon.
    The note number is removed from the list.
    • You can also check the logs of the activities performed during the OSS note implementation by clicking on the Logs icon on the application toolbar. The log information will contain all the steps that were performed, from the point of downloading the note till its implementation is completed.
    • If there are any prerequisite notes for the note that is being applied, the system will prompt you to load those notes into the system as well. Depending on their applicability to your system, the system will prompt you accordingly to apply the prerequisite notes.
    • You can register manually implemented SAP notes by executing the report SCWN_REGISTER_NOTES.
    • SNOTE cannot change or modify data dictionary objects. If there is a note which requires changes to be made to a structure or a screen, then SNOTE will not help. Such objects have to be registered and modified manually.
    http://www.sappoint.com/basis/snote.pdf
    Regards,
    Prakash.

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales, customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against the appropriate data quality business rules/standards required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected records to be corrected in the operational system, if this is not possible then the start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate the necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) approach is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems, or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and the conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope, as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: DB2/Oracle operational sources flowing through ETL and a staging area into development, test and production warehouses and on to data marts (SAS data sets), which users access through BI tools.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources. Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier, which is currently authenticated using Microsoft Active Directory. After a successful authentication, the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or SAS® Information Delivery Portal (ability to view only). Previously access to data residing in DB-2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent and can cross databases and they hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8GHz processors and 2GB of RAM, running Windows 2000 – Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5GHz Itanium2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 x 10/100/1Gb Cu Ethernet card
    - Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3GHz Xeon server
    - 4 GB RAM
    - 1 x 10/100/1Gb Cu Ethernet card
    - Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to the integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. The software can perform causal analyses among various measures within and across areas, providing useful information on the inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results, that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • How to Implement BW in IT Service Desk/IT Help Desk /IT Complain Surveillance Dept/IT Customer Support Dept?

    Hi
    If an organization has 200 to 300 daily complaints about its IT equipment/software/network etc.,
    how do we implement BW in an IT Service Desk / IT Help Desk / IT Complaint Surveillance Dept / IT Customer Support Dept?
    Are there any standard DataSources/InfoObjects/DSOs/InfoCubes etc. available in SAP BI Content?

    Imran,
    The point I think was to ensure that you knew exactly what was required. A customer service desk can have many interpretations from a BI perspective.
    You could have :
    1. Operational reports - calls attended per shift , Average number of calls per person , Seasonality in the calls coming in etc
    2. Analytic views - Utilization of resources , Average call time and trending , customer satisfaction , average wait time
    3. Strategic - Call volumes corresponding to campaigns etc , Employee churn and related call times
    Based on these you would then have to construct your models which would be populated by data from the MySQL instance for you to report.
    Alternatively, if you have BWA you could do data discovery instead; if you have HANA you could do even more; and if you have a HANA sidecar you technically don't need BW at all. The possibilities are virtually endless - it depends on how you want to drive it and how the end user (client) sees value in it.

  • How to implement implicit and explicit enhancement points

    Hi,
    Can anybody please provide some technical details on enhancement spots? I have gone through several SAP sites and the help portal but have not found much technical material (how to implement them, or the related transaction codes). Please do not just provide links to theory.
    Rgds
    sudhanshu

    Hi,
    Refer to this link...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/5f/103a4280da9923e10000000a155106/content.htm
