Account claiming/activation process: how to implement it?

Hi, we have a customer requirement to do the following:
1. A student enrolls at a university. The student's info makes it into PeopleSoft.
2. IdM pulls the student info from the PeopleSoft source and creates an IdM user account.
3. IdM also creates a random activation code, which is paper-mailed to the user.
4. The user receives the letter, logs in at foo.edu/activation, and enters the activation code.
He is then given his account ID and prompted to reset his password as well as
his self-service questions and answers.
Forget about the part that does the paper mail. I am more focused on how a user with just an
activation code can claim his account in IdM for the first time.

We are not using SIS+ but another SIS, specifically SunGard/SCT Banner, which is an Oracle-based application.
We create an Oracle view with an entry for each user and then use the IdM database table resource adapter to connect to it.
The field defined as the password in this view contains the user's activation code, and when a new password is sent to the view, INSTEAD OF triggers use that value to set the SIS self-service password to the value supplied by IdM.
I created a login module group with a login module for the database table and a requirement of "sufficient".
When a user attempts to log in, IdM runs a SELECT statement against the database looking for a match on the credentials provided; if it finds one, it logs the user in as the user who is assigned that resource account.
As the accountId for that account is the student ID number and the password is the activation code, users are allowed into IdM with only those two pieces of information.
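Roughly speaking, the check the login module performs is equivalent to the JDBC sketch below. The view name (ACTIVATION_VIEW), the column names, and the connection details are made up for illustration; IdM generates the real SELECT from the resource adapter configuration, and the INSTEAD OF trigger that pushes the new password back to Banner lives on the Oracle side.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ActivationCheck {

    /**
     * Returns true if the student ID / activation code pair matches a row in
     * the Oracle view backing the IdM database table resource. The view and
     * column names are hypothetical.
     */
    public static boolean credentialsMatch(Connection conn, String studentId,
                                           String activationCode) throws Exception {
        String sql = "SELECT 1 FROM ACTIVATION_VIEW "
                   + "WHERE STUDENT_ID = ? AND ACTIVATION_CODE = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, studentId);
            ps.setString(2, activationCode);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // any matching row means the claim succeeds
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires the Oracle JDBC driver.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:SIS", "idm_user", "secret")) {
            System.out.println(credentialsMatch(conn, "12345678", "X7Q2-99KL"));
        }
    }
}

Once a row matches, IdM logs the requester in as the owner of that resource account, and the first-login workflow can then force the password reset and the self-service question enrollment.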
Hope that helps,
Jason

Similar Messages

  • How to fix this error "this iPad is not able to complete the activation process. Please press Home and start over. If the issue persists, please visit your nearest Apple Store or Authorized service provider for more information or replacement"?

    When I plugged in my iPad, this popped up!

    Hi csreddy, 
    If you are receiving a message to contact an Apple Retail Store or Authorized Service Provider for help updating from iOS 3, click on the link below to initiate that support:
    Update the iOS software on your iPhone, iPad, and iPod touch - Apple Support
    http://support.apple.com/en-us/HT204204
    Update your device using iTunes
    If you can’t update wirelessly, or if you want to update with iTunes, follow these steps:
    1. Install the latest version of iTunes on your computer.
    2. Plug your device into your computer.
    3. In iTunes, select your device.
    4. In the Summary pane, click Check for Update.
    5. Click Download and Update.
    If you don't have enough free space to update using iTunes, you'll need to delete content manually from your device.
    Find out what to do if you get other error messages while updating your device.
    Last Modified: Jan 12, 2015
    Apple - Find Locations
    https://locate.apple.com
    Contact Apple for support and service - Apple Support
    http://support.apple.com/en-us/HT201232
    Regards,
    - Judy

  • How to Implement Process Train In ADF

    I am using Jdeveloper 10.1.3.1.0
    Our requirement is to use a process train in our project, so if anyone knows how to implement it, please tell me.
    In a single page I have many tabs, each tab has one detail page, and at the end it submits all the details.
    Thanks In Advance
    Ramkumar
    Message was edited by:
    user636100

    Hi!
    http://www.oracle.com/webapps/online-help/jdeveloper/10.1.3?topic=af_maintrain_html should give you all the information you require.
    BB

  • How to implement Process Train in Jdev 10.1.3

    Hi..
    Does somebody know where I can find a tutorial about how to implement a process train in JDeveloper 10.1.3?
    thanks

    http://download-west.oracle.com/docs/html/B25947_01/web_complex005.htm#CEGIGJID

  • How to Implement HTTP Request Status Code Processing

    I actually have two questions. First, I'm wondering how to add multiple status code processing to an HTTP request. Second, I'm wondering how to use alternate HTTP requests to different servers in case the primary server is down. What kind of parameter would the program use to determine that the server is unavailable and switch to another server?
    Currently, the program I've written calls an RDF server (http://www.rdfabout.com/sparql) using a SPARQL query;
    the server returns an XML string, and the program parses it and calculates numbers
    from the string. The program works, but the problem is that the server is down occasionally.
    When the server is down, we need to add calls to another server to
    increase reliability. So the next task is to call this server:
    http://www.melissadata.com/lookups/ZipDemo2000.asp
    I need to do exactly the same things I did with the RDF server. The
    difference will be constructing the request and slightly different parsing of
    the response.
    The current SPARQL query is defined as follows:
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    PREFIX census: <http://www.rdfabout.com/rdf/schema/census/>
    PREFIX census1: <tag:govshare.info,2005:rdf/census/details/100pct/>
    DESCRIBE ?table WHERE {
    <http://www.rdfabout.com/rdf/usgov/geo/census/zcta/90292> census:details
    ?details .
    ?details census1:totalPopulation ?table .
    ?table dc:title "SEX BY AGE (P012001)" .
    }
    The current HTTP request is defined as follows:
    import java.io.BufferedReader;
    import java.io.DataOutputStream;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.Scanner;

    public class MyConnection {

        static Scanner sc = new Scanner(System.in); // allows user to input zipcode

        public static void main(String[] args) throws Exception {
            int zip = sc.nextInt(); // user defines zip through input
            // Put the SPARQL query into a string, which is then used to call the server.
            String requestPart1 =
                "query=PREFIX+dc%3A++%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Felements%2F1.1%2F%3E+%0D%0APREFIX+census%3A+%3Chttp%3A%2F%2Fwww.rdfabout.com%2Frdf%2Fschema%2Fcensus%2F%3E+%0D%0APREFIX+census1%3A+%3Ctag%3Agovshare.info%2C2005%3Ardf%2Fcensus%2Fdetails%2F100pct%2F%3E+%0D%0A%0D%0ADESCRIBE+%3Ftable+WHERE+%7B+%0D%0A+%3Chttp%3A%2F%2Fwww.rdfabout.com%2Frdf%2Fusgov%2Fgeo%2Fcensus%2Fzcta%2F";
            String requestPart2 = "" + zip; // zipcode is transformed from int to String and plugged into the SPARQL query here
            String requestPart3 =
                "%3E+census%3Adetails+%3Fdetails+.+%0D%0A+%3Fdetails+census1%3AtotalPopulation+%3Ftable+.+%0D%0A+%3Ftable+dc%3Atitle+%22SEX+BY+AGE+%28P012001%29%22+.+%0D%0A%7D%0D%0A&outputMimeType=text%2Fxml";
            String response = "";
            URL url = new URL("http://www.rdfabout.com/sparql"); // designates the server to connect to
            URLConnection conn = url.openConnection(); // opens a connection to the server
            // Set connection parameters.
            conn.setDoInput(true);
            conn.setDoOutput(true);
            conn.setUseCaches(false);
            // Make the server believe we are form data.
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            DataOutputStream out = new DataOutputStream(conn.getOutputStream());
            // Write out the bytes of the content string to the stream.
            out.writeBytes(requestPart1 + requestPart2 + requestPart3);
            out.flush();
            out.close();
            // Read the response from the input stream.
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String temp;
            while ((temp = in.readLine()) != null) {
                response += temp + "\n";
            }
            in.close();
            // parsing stuff is taken care of after here
        }
    }
    What remains now is to:
    1) add status code processing: notify if the server is not available, etc.
    2) add the ability to connect to an additional server if the primary server is down.
    I'm thinking an if/else statement, which I've tried a few different ways,
    but I don't quite know how to implement that. I'm also trying to add the
    status code processing/error handling, but I'm not sure how to do that
    for multiple/different errors, such as 404, 503, 504, etc. Try/catch statements?
    So yeah, I've just been scratching my head trying to figure out how to make this work.
    If you can help me out with this, I'd be grateful; I've been going nuts trying to figure it out.

    I think your issue comes from the fact that you are not casting URLConnection to HttpURLConnection.
    Doing the cast would allow you to use getResponseCode() - among other methods - and test for a response different than 200.
    Read: http://mindprod.com/jgloss/urlconnection.html
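    As a rough sketch of that suggestion (the fallback URL, the timeout, and the retry policy here are illustrative assumptions, not part of the original program), casting gives you getResponseCode(), and a simple loop over a server list handles failover:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class FailoverRequest {

        // Servers to try in order; the second URL is a hypothetical fallback.
        private static final String[] SERVERS = {
            "http://www.rdfabout.com/sparql",
            "http://backup.example.com/sparql"
        };

        public static String post(String body) {
            for (String server : SERVERS) {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(server).openConnection();
                    conn.setDoOutput(true);
                    conn.setConnectTimeout(10000);
                    conn.setRequestProperty("Content-Type",
                            "application/x-www-form-urlencoded");
                    try (OutputStream out = conn.getOutputStream()) {
                        out.write(body.getBytes(StandardCharsets.UTF_8));
                    }
                    int code = conn.getResponseCode(); // available thanks to the cast
                    if (code == HttpURLConnection.HTTP_OK) {
                        StringBuilder response = new StringBuilder();
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(conn.getInputStream(),
                                                      StandardCharsets.UTF_8))) {
                            String line;
                            while ((line = in.readLine()) != null) {
                                response.append(line).append('\n');
                            }
                        }
                        return response.toString();
                    }
                    // 404, 503, 504, etc.: report it and try the next server.
                    System.err.println(server + " returned HTTP " + code);
                } catch (Exception e) {
                    // Connection refused, timeout, etc.: try the next server.
                    System.err.println(server + " failed: " + e.getMessage());
                }
            }
            throw new RuntimeException("All servers failed");
        }
    }

    The catch block covers network-level failures (server unreachable), while the response code check covers HTTP-level errors such as 404, 503 and 504, so both failure modes fall through to the next server in the list.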

  • Please help me remove a previous Apple ID that is stuck in the activation process; I don't know its password or ID

    Please help me remove a previous Apple ID that is stuck in the activation process; I don't know its password or ID.

    Read this: http://support.apple.com/kb/PH13695
    If it is locked with your ID, then reset the password at https://iforgot.apple.com

  • Activity Monitor: How to know what processes to turn off?

    I can't seem to find something like a glossary for the processes that appear in Activity Monitor.
    I get the idea of using Activity Monitor: to find out what processes are running and how much memory they are using. However, I just don't know which processes are "suspicious", unnecessary, good, bad, or whatever. Does anyone know of a resource to help me decide what I should turn off?
    Many thanks!

    Ah, in that case, watch your CPU usage.
    When Activity Monitor starts, select either +All Processes+ or +Active Processes+ in the toolbar, and sort the list by the +% CPU+ column, so the highest numbers are on top, and select the CPU tab towards the bottom.
    For this purpose, also select +View > Update Frequency > Less Often+ from the Menubar.
    Various processes will "spike" up and down normally as they do things, then are idle, so you need to watch it for a while and get a feel for the processes that are using the most CPU, and the +% Idle+ at the bottom.
    When the slowdown occurs, see if something changes significantly, especially if the +% idle+ goes to near zero. If that happens, see what process(es) are using the most CPU.
    If the process(es) involved aren't familiar to you, post their names here (and/or check this out: http://triviaware.com/macprocess/all)
    You may also want to select the +System Memory+ tab towards the bottom. Keep an eye on the total of Free and Inactive memory (if they're near zero, you may need more), and the +Page outs+ (if that's high, it's another indication you may need more memory).

  • How to monitor the Activation process of a request in DSO

    Hi all,
    I have loaded 7 million records into a DSO as a new request and clicked the activation button.
    Can anyone tell me where we can check the status of the activation of this request?
    When I click the monitor or logs button, it shows every status as green, but I am not able to figure out where to check the status of the activation process (the data is still not activated).
    any help would be great.
    Thanks in advance.
    Regds,
    Maddy

    Hi,
    There are a couple of places where you can see the activation progress.
    1) To start with, check the SM37 job log (search for BI_PROCESS_ODSACTIVAT or BI_ODSA*);
    it should give you the details of the activation job. If it is active, make sure that the job log is being updated at frequent intervals.
    2) For a very large number of records, the SM37 job log may not be updated frequently. In that case you have a workaround:
    go to Contents > Active Data table > No. of entries; this count will keep increasing at frequent intervals.
    3) In SM66, get the job details (server name, PID, etc. from SM37) and see whether the job is running or not, and whether it is accessing/updating some tables or not doing anything at all.
    4) Go to the Activate button in the target > Request tab. Inside, you can see a job log button; it will also show some log information about the last activation.
    Thanks,
    JituK

  • How to stop the activation process of a DSO in a process chain

    Hi,
    We are loading the DSO with two load processes followed by an activation process. One of the load processes failed and I need to correct it manually, but when I set the failed request to red in the DSO, the next activation process starts. Can I stop this activation process? I need to run it only after I manually correct the failed load request.
    thanks

    Hi,
    You can stop or kill this job by identifying the work process number and application server from the job log of the activation job: right-click on the activation process -> Display Messages -> and you can get these details. Once you know the server and work process, go to SM51 -> select the application server, then select the work process and cancel it from the menu options.
    When you are loading two requests in parallel to a data target, it is always better to have an AND process before activating the data in the DSO. It will avoid this kind of error and the need to kill other jobs.

  • Activity Monitor: how to quickly find "not responding" processes?

    Is there a quick way, in Activity Monitor, to identify the apps/processes that are crashed/"not responding" other than scrolling down the list, looking for those highlighted in red?
    If there's a particular column/attribute to sort on, I haven't figured it out.
    Thanks!


  • How to implement mapping for a slowly changing dimension

    Hello,
    I don't have any experience with OWB and I need some help.
    I just don't know how to create the ETL process for a slowly changing dimension.
    My scenario is that I have two operational systems providing customer information, a staging area, and a DWH with a customer dimension of SCD type 2 (created within OWB).
    The OLTP data is already transferred to the staging area. But what should the mapping for the DWH table look like? Which operators have to be used?
    I have to check whether the customer record is new or just updated. How can I check every attribute? A new record shall be loaded; an updated record shall be historized (as I configured it in the SCD type 2). I just don't know how the trigger of the SCD is activated. Do I have to attempt an update on the trigger attribute, after which a new record is created automatically? But which operator can I do this with? What should the mapping look like? Or is this impossible, so that I have to implement this functionality with SQL code only?
    I know how to implement this with SQL code, but my task is to implement it in OWB.
    As you can see, I have not understood the logic of OWB so far, and I hope somebody can help me.
    Greetings,
    Joerg

    Joerg,
    Check the blog below, which provides good detail, and also check the OWB documentation:
    http://www.rittmanmead.com/2006/09/21/working-through-some-scd-2-and-3-examples-using-owb10gr2/
    Thanks,
    Sam.

  • How to implement the WIP in production planning module?

    Hi Gurus,
    Good afternoon! We are planning to implement WIP in the Production Planning module. In the current set-up, we create a production order and confirm all processes in the production transaction, especially the consumption of raw materials and the goods receipt of FG, from one production order. We have two activities in the production transaction, but we would like to separate the process per activity. Can we ask how to separate the transaction when implementing WIP, i.e. create one production order for semi-finished goods and one production order for the final finished goods? We would like to ask for your expertise and advice on how to implement WIP in SAP PP so that there is no duplication of manufactured FG cost.
    We would also like to know all the FI entries per transaction, to make sure there are no double entries in the manufacturing cost of FG.
    thank you,
    weldin angelo

    Hello Angelo,
    Please refer to these threads for your requirement:
    Wip calculation in production Order
    WIP Settlement
    I hope this information is helpful to you.
    Regards
    Umesh Mali

  • How many implementations can we have for a BAdI?

    How many implementations can we have for a BAdI?

    If the BAdI has the Multiple Use checkbox selected, then you can have multiple implementations. From the SAP documentation:
    Multiple-Use Business Add-Ins
    You can differentiate between single-use and multiple use Business Add-Ins. The distinction is based on the procedure or event character of an enhancement. In the first case, the program waits for the enhancement to return something, usually a return code. A typical example could be a benefit calculation in HR. Depending on the implementation, alternative calculations can be executed. With multiple use add-ins, an event that may be of interest to other components is processed in program flow. Any number of components could use this event as a “hook” to hang their own additional actions on to.
    In addition to importing parameters, you can also use changing parameters for multiple-use Business Add-Ins. There is no sequence control for multiple-use implementations of BadIs. Therefore, using changing parameters can cause problems. There is no guarantee that implementations will not overwrite the results of previous implementations. Sequence control is technically impossible, since at the time of the definition the interface does not know which implementations there will be and which parameters will be changed by implementations. It is not possible to have a decision as to which implementation should be executed before which other (future) implementation.
    Example:
    In a particular application, you want to be able to continue processing indexes after another component has saved data (in other words, the system should allow you to use an add-in after saving). Since this point in time can be useful for different purposes, you can create an enhancement here that can be used by multiple subscribers.
    To create a multiple-use Business Add-In, proceed as follows:
           1.      Define an add-in and select the Multiple Use checkbox from the Administration tab.
           2.      Define an interface with the method OBJECT_SAVED and the importing parameter OBJECTNAME.
    Calling your enhancement in the application program:
    program event.
    data exit_obj type ref to if_ex_event.
    call method cl_exithandler=>get_instance
         changing instance = exit_obj.
    form save_object using obj_name type c.
      update …
      call method exit_obj->object_saved
           exporting objectname = obj_name.
    endform.
    For the caller it is irrelevant whether (and how many) subscribers use the event as a starting point for further actions. The active implementations are called in the adapter method.
    more in http://help.sap.com/saphelp_nw04/helpdata/en/ee/a1d548892b11d295d60000e82de14a/content.htm
    Regards
    Prabhu

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a "landing zone" for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboard/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage to this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Architecture diagram: DB2 and Oracle operational sources feed a staging area via ETL, which loads the development, test and production warehouses; further ETL populates the data marts (SAS data sets), which users access through BI tools.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY: AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or SAS® Information Delivery Portal (ability to view only). Previously access to data residing in DB-2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent and can cross databases and they hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors, and enable "evidence-based" decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • How to implement multiple Value Helps within the same Application?

    Dear Experts,
    I want to implement multiple value helps in the same view. For that I have declared exporting parameters of type 'wdy_key_value_table' within the component controller for each of the value helps. When I activate and test the application I get the following error:
    The following error text was processed in the system HE6 : A row with the same key already exists.
    The error occurred on the application server hsdnt24s11_HE6_00 and in the work process 4 .
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_PO_VIEW1~VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: WDDOINIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_WDR_VIEW_DELEGATE~WD_DO_INIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: DO_INIT of program CL_WDR_DELEGATING_VIEW========CP
    Method: INIT_CONTROLLER of program CL_WDR_CONTROLLER=============CP
    Method: INIT_CONTROLLER of program CL_WDR_VIEW===================CP
    Method: INIT of program CL_WDR_CONTROLLER=============CP
    Method: GET_VIEW of program CL_WDR_VIEW_MANAGER===========CP
    Method: BIND_ROOT of program CL_WDR_VIEW_MANAGER===========CP
    I don't know how to implement multiple value helps and need your help on this.
    Regards,
    Mamai.

    Hi
    The hint is in the message: "A row with the same key already exists" means the code is assigning the same value/key to a row, and you are calling it in WDDOINIT, so it fails at initialization time.
    A better way is to do the coding in some event in the view, or, if that is not possible, execute the first value help in WDDOINIT and clear all the values before fetching the other value help. Code it in WDDOMODIFYVIEW to observe its runtime behaviour.
    BR
    Satish Kumar
