How to implement queuing of a web service call

I have to implement a queuing mechanism for a web service.
The scenario is this: a user invokes the client from his system (name: A), and this web service call is received by a server (name: B).
The web service internally calls a third system (name: C), but only one instance, i.e. only one call to system C, is allowed at a time.
So system B can receive any number of web service calls but can only invoke system C one at a time.
The web service has to be a synchronous web service.
Any help with this issue is highly appreciated.
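A hedged sketch of one way to do this in Java: keep system B's endpoint synchronous toward its callers, but funnel every downstream call through a single-threaded executor so that at most one call to system C is ever in flight. The SystemCClient type and all names below are hypothetical placeholders, a sketch rather than a definitive implementation.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// System B's gateway to system C: the single worker thread acts as the
// queue, so concurrent web service calls line up and reach C one at a time.
public class SystemCGateway {

    // One worker thread guarantees at most one in-flight call to system C.
    private static final ExecutorService QUEUE = Executors.newSingleThreadExecutor();

    private final SystemCClient client = new SystemCClient();

    public String callSystemC(final String request) throws Exception {
        // Enqueue the call; the executor's internal queue preserves FIFO order.
        Future<String> result = QUEUE.submit(() -> client.invoke(request));
        // Block until our turn completes, keeping the web service synchronous.
        return result.get();
    }

    /** Hypothetical stand-in for the real proxy that talks to system C. */
    static class SystemCClient {
        String invoke(String request) throws Exception {
            // ... real call to system C goes here ...
            return "response from C";
        }
    }
}

Each incoming call to system B's synchronous endpoint would delegate to callSystemC(), so callers simply wait in line while C is busy.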

Hi Anton,
In WD ABAP, if we want to reimport a service call, we need to follow the same steps we follow to create a service call.
One thing to note is that the wizard does not allow us to reuse the method name we gave (or the system proposed) when the service call was first created. Hence, when reimporting the service call, provide all the details again with a new method name. A new method will be created in the controller (if we use an existing controller), and new nodes/attributes will be created while adapting the nodes.
Hence the best approach, before reimporting the service call, is to delete the context nodes, attributes, and the method created during the previous service call creation.
Best regards,
Suresh

Similar Messages

  • How to read response from WebService call

    Hi,
    I am consuming web services in ABAP. The proxy classes are created, and the port is also created.
    I am calling the classes generated by the proxy in ABAP; the system creates a few structures for them. When I make a call to the service, I do not get the importing values in the variables I declared. The trace file (in SOAMANAGER) shows the call as successful, and the XML shows the correct response values, but they do not appear in the program variables. I declared the variables with the same types the system generated.
    My question is: am I missing any step/configuration? Or how do I read the variables?
    Thanks in advance.
    Sunil

    You could create a procedure like this:
    CREATE OR REPLACE PROCEDURE soap_env_pr (p_soap_env IN VARCHAR2)
    IS
       v_start     NUMBER;
       v_end       NUMBER;
       v_message   VARCHAR2 (400);
       v_length    NUMBER;
    BEGIN
       v_start := INSTR (p_soap_env, '<ns0:Ok>');
       v_length := LENGTH ('<ns0:Ok>');
       v_end := INSTR (p_soap_env, '</ns0:Ok>');
       v_message :=
              SUBSTR (p_soap_env, v_start + v_length, v_end - v_start - v_length);
       HTP.prn ('Ok: ' || v_message);
       v_start := INSTR (p_soap_env, '<ns0:SessionNumber>');
       v_length := LENGTH ('<ns0:SessionNumber>');
       v_end := INSTR (p_soap_env, '</ns0:SessionNumber>');
       v_message :=
              SUBSTR (p_soap_env, v_start + v_length, v_end - v_start - v_length);
       HTP.prn ('SessionNumber: ' || v_message);
       v_start := INSTR (p_soap_env, '<ns0:ErrorMessage>');
       v_length := LENGTH ('<ns0:ErrorMessage>');
       v_end := INSTR (p_soap_env, '</ns0:ErrorMessage>');
       -- For the empty, self-closing <ns0:ErrorMessage/> element both INSTR
       -- calls return 0, so v_message is NULL and prints as empty.
       v_message :=
              SUBSTR (p_soap_env, v_start + v_length, v_end - v_start - v_length);
       HTP.prn ('ErrorMessage: ' || v_message);
    END;
    and call it like this:
    BEGIN
       soap_env_pr
          ('<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Header/>
    <env:Body>
    <invokeScenarioResponse xmlns:ns0="xmlns.oracle.com/odi/OdiInvoke/" xmlns="xmlns.oracle.com/odi/OdiInvoke/">
    <ns0:Ok>true</ns0:Ok>
    <ns0:SessionNumber>2223201</ns0:SessionNumber>
    <ns0:ErrorMessage/>
    </invokeScenarioResponse>
    </env:Body>
    </env:Envelope>');
    END;
    That would give you this output:
    Ok: true
    SessionNumber: 2223201
    ErrorMessage:
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    http://www.amazon.de/Oracle-APEX-XE-Praxis/dp/3826655494
    -------------------------------------------------------------------

  • Breadth first traversal in PLSQL (How to implement Queue?)

    Hi,
    I have a tree stored in different tables; basically it is an XML tree, and all the nodes are stored in different tables related by integrity constraints. I want to traverse the tree using the BFS algorithm. How can I implement a queue using PL/SQL? If you have any reference doc or any sample code to implement a queue, please post it in this thread.
    Thanks
    somy

    fgb wrote:
    Instead of enqueuing just rt.right, you need to enqueue all of the node's siblings before enqueuing rt.left. This would be easier if you knew whether a node was the leftmost child and only enqueued its siblings in that case.
    Thanks, that actually did it right there, I should have realized that.
    The modified code is this, for anyone interested:
    if (rt != null) {
        queue.enqueue(rt);
        while (!queue.isEmpty()) {
            rt = (Node) queue.dequeue();
            left = rt.left;
            if (left != null && left.right != null) {
                right = left.right;
            } else {
                right = null;
            }
            visit(rt);
            if (left != null) {
                queue.enqueue(left);
                // enqueue the first child's siblings as well
                while (right != null) {
                    queue.enqueue(right);
                    if (right.right != null) {
                        right = right.right;
                    } else {
                        right = null;
                    }
                }
            }
        }
    }
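    For comparison, here is a minimal sketch of the same breadth-first traversal over a first-child/next-sibling tree using java.util.ArrayDeque; the Node shape (left = first child, right = next sibling) mirrors the code above and is an assumption, not taken verbatim from the thread.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // BFS over a first-child/next-sibling tree: visiting a node enqueues
    // its first child plus that child's entire sibling chain.
    class Bfs {
        static class Node {
            Node left;   // first child
            Node right;  // next sibling
        }

        static void traverse(Node root) {
            Queue<Node> queue = new ArrayDeque<>();
            if (root != null) {
                queue.add(root);
            }
            while (!queue.isEmpty()) {
                Node node = queue.remove();
                visit(node);
                // Enqueue all children: the first child and its siblings.
                for (Node child = node.left; child != null; child = child.right) {
                    queue.add(child);
                }
            }
        }

        static void visit(Node node) {
            // process the node here
        }
    }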

  • Need help regarding configuring the WebService Call from RTD to Siebel

    Hi All,
    Can someone help me with information on how to configure a web service call from RTD to Siebel?
    Any high-level or granular details on this would be very helpful, as I am new to this product. How can JAX-WS be utilized to achieve this?
    Thanks in advance.
    Best Regards,
    Hariharan

    If you actually need a portal service though, this will not work. However, you could have the portal service return a Document object, which is basically the text of the HTML file you want to display. Then, when calling the portal service, you can simply output the text to the IPortalComponentResponse object.
    I hope this helps
    Darrell
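    On the JAX-WS part of the question, here is a minimal client sketch using the dynamic Dispatch API, which needs no generated stubs; the WSDL URL, QNames and payload below are illustrative assumptions, not the real Siebel values.

    import java.io.StringReader;
    import java.net.URL;
    import javax.xml.namespace.QName;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Dispatch;
    import javax.xml.ws.Service;

    // Dynamic JAX-WS client: build a Dispatch from the WSDL and send a
    // payload-mode request to the (hypothetical) Siebel contact service.
    public class RtdToSiebelClient {
        public static void main(String[] args) throws Exception {
            URL wsdl = new URL("https://siebel.example.com/services/Contact?wsdl");
            QName serviceName = new QName("http://siebel.example.com/Contact", "ContactService");
            QName portName = new QName("http://siebel.example.com/Contact", "ContactPort");

            Service service = Service.create(wsdl, serviceName);
            Dispatch<Source> dispatch =
                    service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

            String request = "<getContact xmlns=\"http://siebel.example.com/Contact\">"
                    + "<id>1-ABC123</id></getContact>";
            Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
            // ... transform/inspect the response Source here ...
        }
    }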

  • Asynchronous implementation of web service calls

    Hi all,
    In a WebLogic Portal footer we are making two web service calls to display some data. Because of those two calls, the performance of the whole portal landing page is affected.
    How do we change those two calls to be asynchronous?
    First load the footer, then make those two web service calls. Any suggestions?
    Thanks,
    Venkata Sarvabatla

    You could use the WLP DISC (JavaScript) library and its inherent XMLHttpRequest (AJAX) in your footer to asynchronously call back to the server resource after the portal page has loaded. The AJAX call would be to some server resource (a JSP perhaps) that executes the web service call and returns the data (or HTML) back for rendering.
    See the Client Side Developer's Guide and specifically the portal-aware XMLHttpRequest object at:
    http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/clientdev/disc.html#wp1019436
    Brad
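    A minimal sketch of the server-side resource Brad mentions (a plain servlet rather than a JSP; all names are hypothetical): the footer's XMLHttpRequest would simply GET this URL after the page has rendered, so the slow web service calls no longer block the portal page.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Endpoint for the footer's AJAX call: executes the two web service
    // calls out-of-band and returns the resulting HTML fragment.
    public class FooterDataServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String fragment = "<div>" + callFooterService1()
                    + callFooterService2() + "</div>";
            resp.setContentType("text/html");
            resp.getWriter().write(fragment);
        }

        // Hypothetical stand-ins for the two real web service calls.
        private String callFooterService1() { return "..."; }
        private String callFooterService2() { return "..."; }
    }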

  • How to handle a return statement from a login webservice call

    Hi all,
    I am new to iPhone apps. I am trying to write a web service call in which I compare my username and password details in the web service and get a boolean as the response. Based on that response I should display a valid or invalid message on the screen.
    Now I am able to get the response from the web service, but I am unable to handle the boolean variable present in the XML.
    Can anyone please suggest how to handle that boolean value from the XML response?
    Thanks
    SRI.

    Since you are a registered developer you would be far better off posting this in the developer site; this forum is for users of apps.

  • How to implement a web service for a DataSource on the SAP BI side

    Hi
    I have a non-SAP source system (EXACT). The customer has opted for the implementation of the web service on the SAP BI side. I use the standard DataSource to create my web service. Does anyone have screenshots of the web service or steps for the implementation?
    Cordially.

    Hi
    Please check these links
    http://help.sap.com/saphelp_nw04/helpdata/en/0d/2eac5a56d7e345853fe9c935954ff1/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/71/421640033ae569e10000000a155106/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/e9/ae1b9a5d2cef4ea4b579f19d902871/frameset.htm
    Hope it helps
    Thanks,
    Rajesh.
    Please assign points to answers if you find them helpful

  • How to make a PDF form call Web service and return a static pdf for user to print?

    Hi all,
    Can anyone help me regarding the feasibility of using PDF forms for the following case?
    I would like to create a dynamic PDF form. Users only have Acrobat Reader. They can enter some information, and there is a submit button. When the user clicks the submit button, it calls the web service with the data; the web service then returns a static PDF document based on the data, which the user can print out (and maybe save as a separate PDF file).
    1. Is that possible to implement? I know a PDF can call a web service, but I don't know how it is handled when the web service returns another static PDF document. Can it handle the response and open it in another Acrobat Reader window?
    2. As I understand it, I need LiveCycle Designer to create a PDF and make it Reader-enabled, so the user can use Reader to call the web service. Am I correct?
    3. What is the minimum Reader version the user needs? Reader 7 or above?
    4. I have a web service serving the same purpose for the web, but I want the same web service to serve both the web and the PDF form, so that whichever client (PDF or web) makes the call, the server returns the PDF document to it. Is that possible? Do I need to make any changes to the web service?
    5. Do I need to get any other Adobe server product (other than LiveCycle Designer)?
    Thanks a lot

    We have done a similar approach in the past, and yes, it is doable.
    1. Is that possible to implement? I know a PDF can call a web service, but I don't know how it is handled when the web service returns another static PDF document. Can it handle the response and open it in another Acrobat Reader window?
    Srini: We developed a Servlet to talk to the web service. Based on the web service response, the Servlet prepares the byte stream and sends it to the web browser to display as a PDF. The PDF data was submitted to the Servlet in XML format.
    But if you do not want to use the above approach, then you have to use a Workbench process.
    Submit the PDF data to a Workbench process and, inside the process, execute the web service with the data. Once the response is received, prepare the data XML and render a PDF with it.
    To do this, you need LiveCycle Server and the Reader Extensions server component.
    2. As I understand it, I need LiveCycle Designer to create a PDF and make it Reader-enabled, so the user can use Reader to call the web service. Am I correct?
    Srini: If you want to use the Servlet, you can Reader-extend the PDF with Acrobat. But if you want to submit the data directly to the web service, then you need the Reader Extensions server component.
    3. What is the minimum Reader version the user needs? Reader 7 or above?
    Srini: Not sure, but Reader 8 and above should work.
    4. I have a web service serving the same purpose for the web, but I want the same web service to serve both the web and the PDF form, so that whichever client (PDF or web) makes the call, the server returns the PDF document to it. Is that possible? Do I need to make any changes to the web service?
    Srini: If you use the Servlet approach, then you can re-use the same web service. But if you want to submit directly to the same web service, you may need to change it to suit your data XML.
    5. Do I need to get any other Adobe server product (other than LiveCycle Designer)?
    Srini: If you use the Servlet approach, you do not need any server component, but with the other approach you need LiveCycle Server and the Reader Extensions server component.
    Thanks
    Srini
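    A minimal sketch of the Servlet approach Srini describes (hypothetical names; callRenderService stands in for whatever web service actually produces the PDF bytes): the form posts its data XML to the servlet, which streams the returned PDF back to the browser.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Receives the PDF form's data XML, forwards it to the rendering web
    // service, and streams the resulting static PDF back to the client.
    public class PdfRenderServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Read the XML the PDF form submitted.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            InputStream in = req.getInputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }

            // Hypothetical call to the service that renders the static PDF.
            byte[] pdfBytes = callRenderService(buf.toByteArray());

            resp.setContentType("application/pdf");
            resp.setHeader("Content-Disposition", "inline; filename=result.pdf");
            resp.setContentLength(pdfBytes.length);
            OutputStream out = resp.getOutputStream();
            out.write(pdfBytes);
            out.flush();
        }

        private byte[] callRenderService(byte[] dataXml) {
            // ... invoke the real rendering web service here ...
            return new byte[0];
        }
    }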

  • Question about implementation of a webservice

    Hi
    I have successfully implemented a web service call in my Web Dynpro application, but I am not sure whether I have done it the right way.
    Scenario: I have a view which displays data from a business partner. These data can be accessed over a web service. The web service requires a parameter with the ID of the business partner.
    My solution: I have implemented a method in the component controller (*comp.java).
    public void getBusinessPartnerInfo( java.lang.String businessPartnerCode )  {
        //@@begin getBusinessPartnerInfo()
           service = new BPService();
           Request_GetBPInfo request = new Request_GetBPInfo(service);
           GetBPInfo getBPInfo = new GetBPInfo(service);
           getBPInfo.setCardCode(businessPartnerCode);
           request.setGetBPInfo(getBPInfo);
           try {
                request.execute();
           } catch (WDWSModelExecuteException ex) {
                // the model execution failed; log or handle the error here
           }
           // bind the executed request to the context so the view can read it
           wdContext.nodeRequest_GetBPInfo().bind(request);
        //@@end
    }
    and I invoke this method from the wdDoInit method of the same controller.
    public void wdDoInit()
    {
        //@@begin wdDoInit()
           this.getBusinessPartnerInfo("c1000");
        //@@end
    }
    Therefore the data is available when the view is loaded. But I am asking myself whether this is the right way to do it. Wouldn't it be better to call this web service invocation from the init method of the view?
    Additionally, I do not know at the moment how to access the portal environment for information. E.g. the business partner code c1000 is hard-coded at the moment. I would like to read this code from the environment, e.g. the portal. Through which API can I access data from the portal, or better said, from the UME (where I can map a field to the business partner code)?
    Thanks for your answers,
    Thierry

    Hi,
    As far as your first question is concerned, you can access the controller method from the view linked with the controller as follows:
    wdThis.wdGet<Controller Name>Controller().yourMethod();
    I am not sure whether you can store generic information such as a BP code specific to a user in EP. Let's see.
    thanks & regards,
    Manoj

  • How to implement an independent static stub client ? Please, help

    Hi everybody!
    I'm using JWSDP 1.2, and I used the static stub tutorial client to test my web service. It works fine! However, I tried to compile the client code separately and it didn't work: the JVM can't find the web service class. So, the questions are:
    - How do I implement an independent static stub client without the help of Ant?
    - How can the client get the endpoint to create the proxy?
    - Must I use any javax.xml.rpc interface (Call or Service, maybe) to do the job?
    - Could anybody show me some sample code?
    Well, that's it. I'm waiting for some answers. Please, help me.
    Tks in advance,
    Rodrigo.

    Can you explain what you mean by "independent static stub"?
    The JWSDP Tutorial explains all the steps required to create a static stub client and invoke the service. In addition, https://jax-rpc.dev.java.net/whitepaper/1.1/index-part2.html#Scenario2 explains the detailed steps to achieve the same from the command line.
    Hope that helps.
    Post your JAX-RPC related questions to [email protected] for a quicker resolution.
    Send an email to [email protected] to subscribe to the alias.
    Send an email to [email protected] for a complete list of help commands.
    -Arun
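    As a rough illustration of the usual stand-alone pattern (the generated class names MyService_Impl and MyServicePort are hypothetical; wscompile produces the real ones), the endpoint can be set explicitly through the standard javax.xml.rpc.Stub property, which also answers how the client gets the endpoint:

    import javax.xml.rpc.Stub;

    // Stand-alone JAX-RPC static stub client sketch: compile and run it
    // with the JWSDP JAX-RPC jars on the classpath, no Ant required.
    public class StandaloneClient {
        public static void main(String[] args) {
            // Hypothetical classes generated by wscompile from the WSDL.
            MyService_Impl service = new MyService_Impl();
            MyServicePort port = service.getMyServicePort();

            // Point the stub at the service endpoint explicitly.
            ((Stub) port)._setProperty(
                    Stub.ENDPOINT_ADDRESS_PROPERTY,
                    "http://localhost:8080/my-service/jaxrpc");

            System.out.println(port.sayHello("world")); // illustrative operation
        }
    }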

  • How to execute an internal WebService from CRM On Demand?

    Hello,
    I would like to know how to execute an internal web service, for example the Contact web service, from CRM On Demand.
    I have seen in the online documentation:
    "An integration event is a mechanism for triggering external processes that are based on changes (create, update, delete, associate, dissociate) to the records in Oracle CRM On Demand.
    You can specify which fields on a record that you want to track. If your company wants to use workflow rules to create integration events, contact Customer Care to request support for Integration Event Administration and to specify the total size of the integration event queues that you require".
    I have this option activated. In fact, I have tried to create a workflow rule to assign an integration event to call the WebService process, but I couldn't do it...
    Thank you and regards.

    Hi, sorry for the late answer. I now understand what you are trying to do; it is an automatic approach, so you will need to use integration events.
    So if the custom object 2 field is changed, you have to generate an integration event with a workflow that triggers when the modified record is saved. The event should be transferred to the queue you defined.
    What you need is an external component, e.g. a Java component, that runs in an application server and polls (let's say every 30 seconds) for events in the queue. If there is an event in the queue, you will need the custom object 3 Id and the value for the field you would like to update, and afterwards you can update the related custom object 3 record with a WS call.
    If you use Java, you can start the timer with a servlet lifecycle event (when the application is up, start the timer for polling).
    Best Regards
    SL
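    A minimal sketch of the polling component SL describes, started from a servlet lifecycle event; pollQueueAndUpdate is a hypothetical placeholder for the real dequeue-and-update logic.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Starts a 30-second polling timer when the web application comes up
    // and stops it cleanly on shutdown.
    public class EventPollerListener implements ServletContextListener {

        private ScheduledExecutorService scheduler;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleWithFixedDelay(
                    this::pollQueueAndUpdate, 30, 30, TimeUnit.SECONDS);
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            scheduler.shutdownNow();
        }

        private void pollQueueAndUpdate() {
            // Hypothetical steps: read pending integration events from the
            // CRM On Demand queue, extract the custom object 3 Id and the
            // field value, then update the record with a web service call.
        }
    }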

  • Making a REST webservice call. Error code: 401 Access to the requested resource is not allowed

    Hi All,
    I'm having a hard time figuring out how to make REST web service calls.
    I tried executing this directly through the browser and I get an error:
    http://localhost:8080/rest/bean/atg/userprofiling/ProfileServices/loginUser?arg1=[email protected]&arg2=Password
    13:18:20,613 ERROR [RestSecurityServlet] Error code: 401
    Access to the requested resource is not allowed: /atg/userprofiling/ProfileServices
    atg.rest.RestException: Access to the requested resource is not allowed: /atg/userprofiling/ProfileServices
    at atg.rest.processor.RestSecurityProcessor.checkAccess(RestSecurityProcessor.java:546)
    at atg.rest.processor.RestSecurityProcessor.handleGetRequest(RestSecurityProcessor.java:313)
    at atg.rest.processor.RestSecurityProcessor.doRESTGet(RestSecurityProcessor.java:199)
    at atg.rest.servlet.RestPipelineServlet.serviceRESTRequest(RestPipelineServlet.java:417)
    at atg.rest.servlet.RestPipelineServlet.service(RestPipelineServlet.java:260)
    at atg.servlet.pipeline.PipelineableServletImpl.passRequest(PipelineableServletImpl.java:157)
    at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:320)
    at atg.rest.servlet.RestPipelineServlet.service(RestPipelineServlet.java:264)
    at atg.rest.servlet.HeadRestServlet.service(HeadRestServlet.java:130)
    at atg.servlet.pipeline.PipelineableServletImpl.service(PipelineableServletImpl.java:267)
    From the documentation I understand that I need to create a session. Is the session only necessary to access secured components, since this particular method ProfileServices.loginUser has been declared as not secure in restSecurityConfiguration.xml?
    Also, are there two different ways in which I can log in?
    1. Using RestSession.createSession, providing the username and password.
    2. Or using ProfileServices.loginUser or the ProfileFormHandler.
    Can someone please clarify?

    If you are invoking the REST web service from a Java client, then you can create a RestSession object using the createSession method. But in your case you seem to be invoking it with an HTTP request, which by default is treated as a GET request by ATG's REST implementation. Therefore, being a GET, it tries to fetch a property "loginUser" from the /atg/userprofiling/ProfileServices component (based on your URL), which will always fail.
    To invoke the loginUser() method of ProfileServices with your passed arguments, you need to tell ATG's REST system to treat your incoming request not as a GET but as a POST request, which you can do using the atg-rest-http-method control parameter in your request, like this:
    http://localhost:8080/rest/bean/atg/userprofiling/ProfileServices/loginUser?arg1=[email protected]&arg2=Password&atg-rest-http-method=POST
    It should work this way provided your restSecurityConfiguration.xml is proper.
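    For completeness, a small sketch of making the same call from a Java client with plain java.net.HttpURLConnection; the host, credentials and response handling are illustrative, and atg-rest-http-method is the control parameter described above.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Invokes ProfileServices.loginUser through ATG's REST layer, telling
    // it to treat the request as a POST via atg-rest-http-method.
    public class LoginUserClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/rest/bean/atg/userprofiling"
                    + "/ProfileServices/loginUser"
                    + "?arg1=user%40example.com&arg2=Password"
                    + "&atg-rest-http-method=POST");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET"); // ATG remaps it to POST internally

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }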

  • How to extract data via webservices and configure webservices in BI 7

    Hi to all,
    Can anybody tell me how to extract data via web services and configure web services in BI 7?
    I have created a remote function module which extracts data from R/3; now I want to upload data to BI 7 using that remote function module.
    I have used web service (push) as the adapter mode, as I want to connect the function module with SOAP via web services.
    Please, can anybody tell me how to do that?
    Also, how do I configure the web service, and what is it?
    I shall be thankful to you for that.
    Regards
    Pavneet rana

    Hi,
    1. Using the function library (transaction SE37), call the Web service creation wizard.
    To do this, select the desired function module in the function library and choose Utilities → Generate Web Service → From the Function Module.
    2. Go through the following steps, shown in the wizard:
    a. Create a virtual interface.
    The virtual interface represents the interface between the Web Service and the outside.
    b. Choose the end point.
    The name of the function module that is to be offered as Web service is already entered here.
    c. Create the Web service definition.
    The Web service definition helps with assigning the Web service features, such as how security can be guaranteed in data transfer.
    d. Release the Web service.
    The wizard generates the object virtual interface and Web service definition in the object navigator.
    The function group that was generated when the XML DataSource was created is not transportable and is thus assigned to a local package. To prevent errors due to transports, make sure that the objects that were generated in the Web service creation wizard are assigned to a local non-transportable package.
    The Web service is released for the SOAP runtime.
    3. In the virtual interface for the import parameter DATASOURCE, define the name of the XML DataSource as the fixed value.
    A separate function group is generated for each XML DataSource. It makes sense to pre-assign the parameter DATASOURCE with the name of the XML DataSource in the virtual interface of the Web service for which the function group was generated.
    If you do not pre-assign the parameter, it will be necessary to transfer the data sent with the appropriate filled DataSource element, for example, by setting the value in the application that implements the Web service.
    a. In the object navigator, choose the name of the package in which the Web service was created and choose Enterprise Services → Web Service Library → Virtual Interfaces.
    b. Choose Change in the context menu for the virtual interface.
    c. For the virtual interface, remove the flags exposed and initial and enter the name of the XML DataSource in apostrophes, for example '6ADATASOURCENAME'.
    d. Activate the virtual interface.
    Regards,
    Marasa.

  • 11g B2B support of webservice call across two B2B systems

    Hi All,
    Has anyone used 11g B2B to receive a plain SOAP message (a web service call from a TP)?
    - Does 11g B2B support receiving SOAP messages over HTTP from a TP (trading partner)?
    If yes, can you please let me know the details of how to make a web service call between two B2B gateways.
    Regards
    RD

    how much is this different from the use of webservice?
    The web service standard is based on SOAP, whereas the scenario you are referring to is synchronous request-response over HTTP (generic or standards-based, like ebMS). SOAP defines both the packaging and the transport methods for web services, so any system implementing SOAP communication must adhere to the SOAP protocols, whereas the blog you are referring to talks about one form of transport method only, which has nothing to do with SOAP.
    Sync request-response is one type of communication method which is also used in SOAP, but the sync request-response which B2B supports as of now is not part of SOAP communication.
    Regards,
    Anuj

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. The process is outlined below.
    The five key stages of Business Intelligence:
    1. Data Sourcing
    2. Data Analysis
    3. Situation Awareness
    4. Risk Assessment
    5. Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.). Situation awareness is the grasp of the context in which to understand and make decisions. Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CMD). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage to this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Architecture diagram: DB2 and Oracle operational sources (plus other data) flow through ETL and a staging area into development, test and production warehouses (SAS data sets, no direct user access), which in turn feed data marts accessed by users through BI tools.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets, and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster business intelligence (BI) services that are not public require a sign on with a single university-wide login identifier which is currently authenticated using the Microsoft Active Directory. After a successful authentication the SAS® university login identifier can be used by the SAS® Meta data server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or SAS® Information Delivery Portal (ability to view only). Previously access to data residing in DB-2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent and can cross databases and they hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8GHz Processors, 2GB of RAM and is running Windows 2000 – Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4 way 64 bit 1.5GHz Itanium2 server
    - 16 Gb RAM
    - 2 x 73 Gb Drives (RAID 1) for the OS
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2 way 32 bit 3GHz Xeon Server
    - 4 Gb RAM
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72GB Drives (RAID 5), 360GB total, for SAS® and Data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management Software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to
