How to implement Process Train in Jdev 10.1.3

Hi,
Does somebody know where I can find a tutorial about how to implement a process train in JDeveloper 10.1.3?
Thanks

http://download-west.oracle.com/docs/html/B25947_01/web_complex005.htm#CEGIGJID
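
That chapter binds an af:processTrain to a ProcessMenuModel managed bean. As a minimal Java-side sketch of the idea (the TrainStop bean, labels and view ids below are illustrative assumptions, and the constructor signature should be checked against the 10.1.3 Javadoc):

import java.beans.IntrospectionException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import oracle.adf.view.faces.model.ProcessMenuModel;

public class TrainModel implements Serializable {
    // One stop per step page; "viewId" is the property the model navigates by.
    public static class TrainStop implements Serializable {
        private final String viewId;
        private final String label;
        public TrainStop(String viewId, String label) { this.viewId = viewId; this.label = label; }
        public String getViewId() { return viewId; }
        public String getLabel()  { return label; }
    }

    public ProcessMenuModel getModel() throws IntrospectionException {
        List<TrainStop> stops = new ArrayList<TrainStop>();
        stops.add(new TrainStop("/step1.jspx", "Basic details"));
        stops.add(new TrainStop("/step2.jspx", "Addresses"));
        stops.add(new TrainStop("/step3.jspx", "Review"));
        // "viewId" names the bean property holding each stop's view id.
        return new ProcessMenuModel(stops, "viewId");
    }
}

The page then stamps the train from #{trainModel.model}.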

Similar Messages

  • How to Implement Process Train In ADF

    I am using JDeveloper 10.1.3.1.0.
    Our requirement is to use a process train in our project. Could anyone who knows how to implement it please tell me?
    On a single page I have many tabs, each tab has one detail page, and at the end it submits all the details.
    Thanks In Advance
    Ramkumar

    Hi!
    http://www.oracle.com/webapps/online-help/jdeveloper/10.1.3?topic=af_maintrain_html should give you all the information you require.
    BB

  • How to create process train model in jheadstart

    Hello
    I'm trying to create a process train model through JHeadstart but I can't.
    If anyone has any ideas about this, please help me.

    Hello,
    In JHeadstart you can create a wizard that contains a process train model. The JHeadstart Developers Guide that comes with your download of JHeadstart contains the documentation you need to create such a wizard.
    Hope this helps,
    Evert-Jan de Bruin

  • How to set the active Step in a process Train using JSF

    I have a process train that I am pointing to a list of steps. I want to be able to set the current step, but cannot find the property to set.
    Does anyone know how to accomplish this? This process train is for read-only purposes only; no navigation is needed.
    Thanks
    Kelly

    Right, I think I've been able to do what you're trying to do.
    I've defined a subclass of ProcessMenuModel, which simply has a currentViewId property and a constructor with an extra argument for it:

    public class SettableProcessMenuModel extends ProcessMenuModel {
      private String _currentViewId = null;

      public SettableProcessMenuModel(Object instance, String viewIdProperty, Object maxPathKey, String currentViewId) throws IntrospectionException {
        super(instance, viewIdProperty, maxPathKey);
        _currentViewId = currentViewId;
      }

      public void setCurrentViewId(String currentViewId) {
        this._currentViewId = currentViewId;
      }

      public String getCurrentViewId() {
        return _currentViewId;
      }
    }

    Then in the model adapter I use this class instead of ProcessMenuModel, plus a currentViewId property:
    public class TrainModelAdapter implements Serializable {
        private String _propertyName = null;
        private Object _instance = null;
        protected transient MenuModel _model = null;
        private Object _maxPathKey = null;
        private String _currentViewId = null;

        public MenuModel getModel() throws IntrospectionException {
            if (_model == null)
              _model = new SettableProcessMenuModel(getInstance(),  // SettableProcessMenuModel instead of ProcessMenuModel
                                            getViewIdProperty(),
                                            getMaxPathKey(),
                                            getCurrentViewId()); // extra argument
            return _model;
        }

        public void setCurrentViewId(String currentViewId) {
            _currentViewId = currentViewId;
            _model = null; // drop the cached model so it is rebuilt with the new view id
        }

        public String getCurrentViewId() {
          return _currentViewId;
        }
    ...

    Then when you want to set the current step, you set the currentViewId on the TrainModelAdapter to the viewId you need.
    This basically makes the value you pass in override the currentViewId value in the menu model.
    I hope you are able to use this in your implementation.
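
    As a minimal usage sketch (assuming the adapter is registered as a managed bean named "trainModel" - the bean name is an assumption), a JSF 1.x backing bean could switch the step like this:

    // javax.faces.context.FacesContext; resolve the adapter bean and
    // point the train at the step that should be marked current.
    FacesContext context = FacesContext.getCurrentInstance();
    TrainModelAdapter adapter = (TrainModelAdapter) context.getApplication()
        .getVariableResolver().resolveVariable(context, "trainModel");
    adapter.setCurrentViewId("/step2.jspx"); // the next getModel() call rebuilds the menu model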

  • How to Implement HTTP Request Status Code Processing

    I actually have two questions. First, I'm wondering how to add multiple status code processing to an HTTP request. Secondly, I was wondering how to go about using alternate HTTP requests to different servers in case the primary server is down. What kind of parameter would the program use to determine that the server is unavailable and switch to another server?
    Currently, the program I've written calls an RDF server (http://www.rdfabout.com/sparql) using a SPARQL query;
    the server returns an XML string, and the program parses it and calculates numbers
    from the string. The program works, but the problem is that the server is down occasionally.
    When the server is down, we need to add calls to another server to
    increase reliability. So, the next task is to call this server:
    http://www.melissadata.com/lookups/ZipDemo2000.asp
    I need to do exactly the same things I did with the RDF server. The
    difference will be constructing the request and slightly different parsing of
    the response.
    The current SPARQL query is defined as follows:
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    PREFIX census: <http://www.rdfabout.com/rdf/schema/census/>
    PREFIX census1: <tag:govshare.info,2005:rdf/census/details/100pct/>
    DESCRIBE ?table WHERE {
    <http://www.rdfabout.com/rdf/usgov/geo/census/zcta/90292> census:details
    ?details .
    ?details census1:totalPopulation ?table .
    ?table dc:title "SEX BY AGE (P012001)" .
    }

    The current HTTP request is defined as follows:
    import java.net.URL;
    import java.net.URLConnection;
    import java.io.DataOutputStream;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.Scanner;

    public class MyConnection
    {
        static Scanner sc = new Scanner(System.in); // allows user to input zipcode

        public static void main(String[] args) throws Exception
        {
            int zip = sc.nextInt(); // zipcode is read from user input
            // proceed to put the SPARQL query into a string, which is then used to call the server
            String requestPart1 =
            "query=PREFIX+dc%3A++%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Felements%2F1.1%2F%3E+%0D%0APREFIX+census%3A+%3Chttp%3A%2F%2Fwww.rdfabout.com%2Frdf%2Fschema%2Fcensus%2F%3E+%0D%0APREFIX+census1%3A+%3Ctag%3Agovshare.info%2C2005%3Ardf%2Fcensus%2Fdetails%2F100pct%2F%3E+%0D%0A%0D%0ADESCRIBE+%3Ftable+WHERE+%7B+%0D%0A+%3Chttp%3A%2F%2Fwww.rdfabout.com%2Frdf%2Fusgov%2Fgeo%2Fcensus%2Fzcta%2F";
            String requestPart2 = "" + zip; // zipcode is transformed from int to String and plugged into the SPARQL query here
            String requestPart3 =
            "%3E+census%3Adetails+%3Fdetails+.+%0D%0A+%3Fdetails+census1%3AtotalPopulation+%3Ftable+.+%0D%0A+%3Ftable+dc%3Atitle+%22SEX+BY+AGE+%28P012001%29%22+.+%0D%0A%7D%0D%0A&outputMimeType=text%2Fxml";
            String response = "";

            URL url = new URL("http://www.rdfabout.com/sparql"); // designates server to connect to
            URLConnection conn = url.openConnection(); // opens connection to server

            // Set connection parameters.
            conn.setDoInput(true);
            conn.setDoOutput(true);
            conn.setUseCaches(false);

            // Make the server believe we are form data...
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

            DataOutputStream out = new DataOutputStream(conn.getOutputStream());
            // Write out the bytes of the content string to the stream.
            out.writeBytes(requestPart1 + requestPart2 + requestPart3);
            out.flush();
            out.close();

            // Read response from the input stream.
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String temp;
            while ((temp = in.readLine()) != null)
                response += temp + "\n";
            in.close();

            // parsing stuff is taken care of after here
        }
    }

    What remains now is to:
    1) add status code processing: notify if the server is not available, etc.
    2) add the ability to connect to an additional server if the primary server is down.
    I'm thinking an if/else statement, which I've tried a few different ways,
    but I don't quite know how to implement that... I'm also trying to add the
    status code processing/error handling, but I'm not sure how to do that
    for multiple/different errors, such as 404, 503, 504, etc. Try/catch statements?
    So yeah, I've just been scratching my head trying to figure out how to make this work.
    If you can help me out on this, I'd really appreciate it; I've been going nuts trying to figure this out.
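
    One way to avoid maintaining that percent-encoded query string by hand is java.net.URLEncoder; a minimal side sketch, with the query text abbreviated:

    import java.net.URLEncoder;

    public class QueryEncoder {
        public static void main(String[] args) throws Exception {
            String sparql = "PREFIX dc: <http://purl.org/dc/elements/1.1/> ..."; // full query elided
            // Percent-encode the query and assemble the form body the endpoint expects.
            String body = "query=" + URLEncoder.encode(sparql, "UTF-8")
                        + "&outputMimeType=" + URLEncoder.encode("text/xml", "UTF-8");
            System.out.println(body);
        }
    }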

    I think your issue comes from the fact that you are not casting URLConnection to HttpURLConnection.
    Doing the cast would allow you to use getResponseCode() - among other methods - and test for a response different than 200.
    Read: http://mindprod.com/jgloss/urlconnection.html
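
    A minimal sketch of that approach: cast to HttpURLConnection, check the status code, and move to the next server on failure. The helper name and fallback order are assumptions:

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FailoverCheck {
        // Returns the first server that answers HTTP 200, trying each in order.
        static String firstAvailable(String... servers) throws IOException {
            for (String target : servers) {
                HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                try {
                    int code = conn.getResponseCode(); // e.g. 200, 404, 503, 504
                    if (code == HttpURLConnection.HTTP_OK)
                        return target; // this server is up; run the real request against it
                    System.err.println(target + " returned HTTP " + code + ", trying next server");
                } catch (IOException e) {
                    // Connection refused or timed out: server unreachable, try the next one.
                    System.err.println(target + " unreachable: " + e.getMessage());
                } finally {
                    conn.disconnect();
                }
            }
            throw new IOException("No server available");
        }

        public static void main(String[] args) throws IOException {
            String server = firstAvailable("http://www.rdfabout.com/sparql",
                                           "http://www.melissadata.com/lookups/ZipDemo2000.asp");
            System.out.println("Using " + server);
        }
    }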

  • How to do Process Audit... after a implementation..

    Hi,
    We are currently in the final configuration stage. Our client is insisting that we do a
    final audit to know whether we have met the BPML.
    Kindly suggest:
    How does an audit generally happen after an implementation?
    Are there any SAP tools?
    What are all the important objects that need to be covered?
    It's a little urgent.
    Laxmanan

    Hi Lakshmanan
    To my knowledge there are no standard SAP tools for audit. But as you may be aware, for implementing SAP there are 6 stages of the implementation process, which are called roadmaps.
    The stage you are asking about is the third stage, viz. the Realization Phase. In this stage you do Unit Testing and Integration Testing thoroughly. If there is any error in your configuration, you can find it through Integration Testing and rectify it before implementation.
    Of course, issues will come up after implementation and you have to support them, which is part of the 5th phase in the Roadmap - Go Live and Support.
    Thanks
    G. Lakshmipathi

  • Webinar: How to implement secure scenarios with SAP NW PI 7.1

    SAP Intelligence Platform & NetWeaver RIG APJ Expert Call
    Dear valued SAP Experts,
    Next SAP Intelligence Platform & NetWeaver RIG Expert Call Session will take place on Tuesday, August 18.
    The SAP Intelligence Platform & NetWeaver RIG Expert Call Sessions are designed to support consultants, partners and customers during their implementation projects. The sessions cover all the different aspects of SAP NetWeaver and
    thus provide knowledge which is not available via standard training courses. The session duration is typically 60 minutes and includes questions and answers.
    Tuesday, August 18, 2009:
    How to implement secure scenarios with SAP NetWeaver Process Integration 7.1
    Time: 2.00 - 3.00 p.m. Singapore Time (UTC +8)
    This event will feature Makoto Sugishita with the SAP Intelligence Platform & NetWeaver Regional Implementation Group.
    Makoto provides the following abstract:
    In this session you will learn more about the core security concepts that are provided with the service-oriented architecture (SOA)
    management capabilities in SAP NetWeaver Process Integration (SAP NetWeaver PI). This session will cover main use cases and
    supported scenarios of secure SAP NetWeaver PI deployments. 
    SAP Connect Link: https://sap.emea.pgiconnect.com/I016095
    (no passcode needed)
    Dial in:
    For dial in details please register here http://www.surveymonkey.com/s.aspx?sm=EFeuZl9PxrwKOW5i5W556g_3d_3d
    Kind regards,
    Sarma Sishta
    SAP Intelligence Platform & NetWeaver RIG APJ

    Hi,
    I'm making this a sticky thread till August 18 so it will have better visibility.
    Regards,
    Michal Krawczyk

  • How to implement the Siebel response in ADF?

    Hi All,
    JDev version: 11.1.1.5
    I have integrated Fusion Middleware with Siebel using the 'REST' services (the URI-based service).
    I followed this link to integrate: http://siebel-essentials.blogspot.com/2011/02/first-encounter-with-sai-ofm.html
    I want to know how to use the response and how I can implement the URI response in my Java/ADF code.
    How do I send the request, and how do I receive the response?
    If anyone has worked on this, please share the implementation process with me.
    Regards,
    Gopinath

    Hi,
    JDeveloper 11.1.1.5 supports REST GET requests from the URL Data Control (if you want to use ADF). Note however that this release expects an XML or CSV payload. Alternatively, you can build a REST client using the Jersey libraries, in which case the client is a Java object that you then use to provide the data for display.
    Frank
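
    A minimal sketch of the Jersey 1.x client option Frank mentions (the endpoint URL is a placeholder):

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.WebResource;

    public class SiebelRestClient {
        public static void main(String[] args) {
            Client client = Client.create();
            WebResource resource = client.resource("http://siebel-host/rest/service"); // placeholder URL
            // Issue a GET and read the XML payload as a string for further parsing.
            String xml = resource.accept("application/xml").get(String.class);
            System.out.println(xml);
        }
    }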

  • SAP Implementation Process !!!

    Hi Folks,
    I need brief information about the SAP implementation process. As per the ASAP Methodology we have different phases involved in the entire implementation process from Project preparation,Blue print analysis,Realization,Final Preparation and Go Live Support.
    I did not involve myself into implementation. Being a Basis consultant I would like to know what is the involvement of basis consultant in each phase in detail and what are the activities being carry out by a basis team, what would be the avg team size and what would be the duration of the project approximately. I know it depends on the project and modules being used,architecture and all. i just wanted to know the rough details for information.
    I would really appreciate for any useful information.
    Regards,
    Vinod

    Hi Vinod,
    Please find below elaborated details for an SAP implementation.
    The ASAP (Accelerated SAP) Roadmap provides the methodology for implementing and continuously optimizing your SAP System. It divides the implementation process into five major phases and offers detailed Project Plans to assist you (Microsoft Project format). The documentation stored at each level of the Roadmap tree structure contains recommendations on implementing your SAP System and links to helpful tools and accelerators. We use SAP Solution Manager for the same and define the steps there, and this information can be transformed into an MPP.
    The implementation of your SAP System covers the following major phases:
    Project Preparation
    In this phase you plan your project and lay the foundations for successful implementation. It is at this stage that you make the strategic decisions crucial to your project:
    Define your project goals and objectives
    Clarify the scope of your implementation
    Define your project schedule, budget plan, and implementation sequence
    Establish the project organization and relevant committees and assign resources
    Sizing for the SAP Servers (SAPS) Important for Basis.
    Business Blueprint
    In this phase you create a blueprint using the Question & Answer database (Q&Adb) in older versions (it has now been replaced by new value-adds from SAP in Solution Manager), which documents your enterprise's requirements and establishes how your business processes and organizational structure are to be represented in the SAP System. You also refine the original project goals and objectives and revise the overall project schedule in this phase. In short, it is "As Is" >> "To Be", coming from the functional consultant (mapping of business processes). Basis will be involved in how and when backups will be taken, what 3rd party/vendor will be used, and how users will be handled (CUA or normal procedure).
    Realization
    In this phase, you configure the requirements contained in the Business Blueprint. Baseline configuration (major scope) is followed by final configuration (remaining scope), which can consist of up to four cycles. Other key focal areas of this phase are conducting integration tests and drawing up end user documentation. A major part will be the creation of transport requests and transportation to the QAS system, plus configuration of SAProuter and Solution Manager and their various settings.
    Final Preparation
    In this phase you complete your preparations, including testing, end user training, system management, and cutover activities. You also need to resolve all open issues in this phase. At this stage you need to ensure that all the prerequisites for your system to go live have been fulfilled (the Final Check).
    Go Live & Support
    In this phase you move from a pre-production environment to the live system. The most important elements include setting up production support, monitoring system transactions, and optimizing overall system performance.
    You can also download an HTML version of the ASAP Core Roadmap 7.1 on [SAP Services Marketplace - ASAP Roadmaps and Run SAP Roadmaps|http://service.sap.com/asap] (SMP login required).
    All the best !

  • What is BI ? How we implement & what is the cost to implement ?

    What is BI? How do we implement it, and what is the cost to implement it?
    Thanks,
    Sumit.

    Hi Sumit,
                        Below is a description according to your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. The process is outlined below.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    •     probability theory - e.g. classification, clustering and Bayesian networks;
    •     statistical methods - e.g. regression;
    •     operations research - e.g. queuing and scheduling;
    •     artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.). Situation awareness is the grasp of the context in which to understand and make decisions. Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and a logical inventory were conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: development, test and production warehouses fed by ETL from DB2 and Oracle operational sources (and other sources) through a staging area of SAS data sets, with further ETL into data marts accessed through BI tools; users have no direct access to the warehouses.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster, aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio, which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases, and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform, which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 – Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to...

  • How to implement multiple Value Helps within the same Application ??

    Dear Experts,
    I want to implement multiple value helps in the same view. For that I have declared exporting parameters of type 'wdy_key_value_table' within the component controller for each of the value helps. When I activate and test the application I get the following error:
    The following error text was processed in the system HE6 : A row with the same key already exists.
    The error occurred on the application server hsdnt24s11_HE6_00 and in the work process 4 .
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_PO_VIEW1~VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: WDDOINIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_WDR_VIEW_DELEGATE~WD_DO_INIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: DO_INIT of program CL_WDR_DELEGATING_VIEW========CP
    Method: INIT_CONTROLLER of program CL_WDR_CONTROLLER=============CP
    Method: INIT_CONTROLLER of program CL_WDR_VIEW===================CP
    Method: INIT of program CL_WDR_CONTROLLER=============CP
    Method: GET_VIEW of program CL_WDR_VIEW_MANAGER===========CP
    Method: BIND_ROOT of program CL_WDR_VIEW_MANAGER===========CP
    I don't know how to implement multiple value helps. I need your help on this.
    Regards,
    Mamai.

    Hi
    The hint is: "A row with the same key already exists." It means the same key is being assigned to more than one row, and since you are calling the code in WDDOINIT, the error occurs at initialization time.
    A better way is to do the coding in some event in the view, or if that is not possible, execute only the first value help in WDDOINIT and clear all the values before getting the other value help. Code it in WDDOMODIFYVIEW to see its runtime behaviour.
    BR
    Satish Kumar

  • How to implement SEARCH HELP for input field in WDA

    Hi All,
    I am building a tool for my team. In this tool there are 4 input fields representing different processes. I implemented the 4 input fields and also bound the respective tables to these fields successfully. Now I want to filter the data between 2 fields, meaning the data in the 2nd input field should be based on the 1st input field. For example, when I select 'CHI' (the code for CHINA) in the 1st input field, the 2nd field, meant for country name, should show all country names with the value 'CHINA'. The problem is that the values for each field are in different tables with no foreign key relationship, and some combinations are in a single table.
    I have tried the import and export parameter method for search help, but no luck so far.
    Can anyone please suggest how to implement this scenario?
    Thanks in advance...
    sekhar

    Hi sekhar,
    Your requirement can be implemented with OVS (Object Value Selector). This is a custom search help, so you can define on what basis the values are to be fetched.
    Look at the below link
    http://wiki.sdn.sap.com/wiki/display/WDABAP/ABAPWDObjectValueSelector(OVS)
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/606288d6-04c6-2c10-b5ae-a240304c88ea?quicklink=index&overridelayout=true
    You need to use the component WDR_OVS.
    It has 3 phases:
    In phase 1
         you fetch the value from the input field into the F4 help dialog box.
    In phase 2
         you generate the search results; in this phase, look at the input value given in field 2 and accordingly generate the search results for field 3.
    In phase 3
         the selected value is returned to the original screen.
    Regards
    Karthiheyan M

  • How to implement this function in JSP/Servlet env?

    Hi all,
    I am working on a project that provides functionality to upload a file using JSP/Servlet. On the first JSP page, there is a file location field and a submit button. After the user selects a file to upload and clicks the submit button, a message like "sending file to XXXX" should be shown on the screen. Once uploading and validation are done on the server side, a success/error message will be shown to the user.
    Here I have a question about the "sending..." message and the success/error message: should they be put in one JSP page or in two separate pages? How do I implement them?
    Thanks for any help!
    Tranquil

    For the sending message... Well, the thing is, when you click submit, the browser sends the file to the server and the server processes it, and this is all done before the "complete" page is sent back to the browser. So one would need to use some Javascript on the page before the actual submit happens to show some message. This is done on eBay when you put something up for sale: you can upload an image, and there is a little popup message telling you that it's uploading, which is removed when the process is done. Now, I'm not sure of the exact details of how this works, but my educated guess is this:
    1) The onsubmit function of the form checks that the file upload fields have a value (no need to pop up a message if there is no file upload, since that's what usually takes the time, although it could just be assumed there is a file). If a file is to be uploaded, or you just want to show the message anyway, a new popup window is opened with the window.open method and the "sending" message is shown (either written via Javascript or just by loading a small web page into the window).
    2) The popup window, since you can't transfer the window object from the form page to the next page, has to check window.opener for some value that the success/error page would have to set. The success/error page could use its body onload function to set a variable in its own window object to denote that the page is loaded. The popup window can use a looping check using setTimeout or setInterval in Javascript to check for window.opener.isLoadedVariable to be present, and if so, close itself.
    I've never done that, but I see no reason why it wouldn't work.

  • How to implement this typical sign workflow? Urgent

    Hi Adobe Experts,
    I am new to LC (two months only). Recently the company has been doing some evaluation of LC 8.2 and my boss assigned the following task to me (I have simplified it):
    Form: (Enclosed)
    * Two text fields: CommentTextField1 and CommentTextField2
    * Two signature fields: SignatureField1 and SignatureField2
    I've associated the SignatureField1 to lock CommentTextField1 and associated the SignatureField2 to lock CommentTextField2.
    The process to implement is:
    Step 1) Ray Woodard opens the form in the workspace, types in CommentTextField1, signs SignatureField1, and submits.
    Step 2) Alex Pink opens the form in the workspace, types in CommentTextField2, signs SignatureField2, and submits.
    Any suggestions on how to implement this process? Please help! Thanks a lot!
    (What I have been able to accomplish so far is step 1, but I am not confident about it. Please point me to the right solution. Should I use xfaForm or Document Form, render/submit as default or PDF, submit as XDP or PDF, etc.? I am so confused and frustrated now!)

    Steve, thank you so much!!!
    Your example helped me a lot. I am able to run the process, but I still have three doubts:
    1) When I set "Submit As" to "PDF" rather than the default "XDP", I am not able to submit the form in the workspace client with Adobe Reader installed, but I am able to submit it in the workspace client with Adobe Acrobat installed. Is that normal?
    2) What does "Form Bridge" do in this whole process?
    3) If I don't want to hard-code the submit URL in the form, do I have another option, like using other renders?
    Thanks again!
    Wayne

  • How to implement Self Registration in EP 7.0

    Hello All,
    I am working in Enterprise Portal 7.0.
    I want to customize the self registration page in my portal in SAP NetWeaver 7.0.
    Please tell me how to implement self registration in Portal 7.0.
    Thankx.

    Hi,
    I am able to run that component, but it's giving another error:
    Java iView Runtime
    Version : 7.00.200702010738
    An exception occured while processing your request.
    com.sapportals.portal.prt.dispatcher.DispatcherException: Could not find connection portla
    com.sapportals.portal.prt.dispatcher.DispatcherException: Could not find connection portla
         at com.sapportals.portal.prt.dispatcher.Dispatcher$doService.run(Dispatcher.java:528)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sapportals.portal.prt.dispatcher.Dispatcher.service(Dispatcher.java:405)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.servlet.InvokerServlet.service(InvokerServlet.java:156)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java:401)
         at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java:266)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:387)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java:365)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java:944)
         at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java:266)
         at com.sap.engine.services.httpserver.server.Client.handle(Client.java:95)
         at com.sap.engine.services.httpserver.server.Processor.request(Processor.java:160)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:100)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:170)
    If this situation persists, please contact your system administrator. 
    Please let me know your comment about this Error.
    Thank you so much in Advance.
    Thanks,
    Pallavi
