How to implement distributed work queue

Hi,
I have a simple problem: I need to implement a work queue in which the "workers" are different processes running in different JVMs.
So I need a centralized (singleton), fault-tolerant queue which holds work items; workers will connect to this queue and retrieve work to be processed.
To be more clear: what I need Coherence to hold is the queue, while ensuring that each work item is dispatched to only one queue listener.
Is this possible with Coherence?
What is the best pattern to implement this?

Hmm...
Using your solution would mean getting and putting the whole queue on the cache for each use, right?
Anyway this answered my question:
http://www.infoq.com/presentations/Data-Grid-Design-Patterns-Brian-Oliver
Edited by: GheParDo on Mar 29, 2011 11:37 PM
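For anyone landing here later: one way to avoid getting and putting the whole queue is to store each work item as its own cache entry and let workers claim entries with an explicit key lock, so each item goes to exactly one worker. Below is a minimal, hedged sketch against the classic Coherence NamedCache API; the cache name "work-queue" and the idea of polling keySet() are illustrative assumptions, and the Processing Pattern from the presentation above is the more robust route for production.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.Iterator;

    public class WorkQueueWorker {

        // Each work item lives under its own key in this cache.
        private final NamedCache queue = CacheFactory.getCache("work-queue");

        // Try to claim one item; returns null if nothing is available right now.
        public Object claimOne() {
            for (Iterator it = queue.keySet().iterator(); it.hasNext(); ) {
                Object key = it.next();
                if (queue.lock(key, 0)) {          // non-blocking lock attempt
                    try {
                        Object item = queue.get(key);
                        if (item != null) {
                            queue.remove(key);     // claimed: no other worker can take it
                            return item;
                        }
                    } finally {
                        queue.unlock(key);
                    }
                }
            }
            return null;
        }
    }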

Similar Messages

  • How to implement Distributed Transaction

    Hi,
    We are using Jsp/Struts/Bc4j application model. JHeadstart 9.0.5.1.
    We have a distributed transaction issue. There are two BC4J Application Modules (AM1 and AM2), each with its own DB connection, entity and view objects. In a special case, when a record is inserted via AM1, a record also has to be created in the DB via AM2.
    What is the recommended approach to do that? We can't define AM2 as a submodule of AM1 because, when instantiated (in the corresponding doDML method on the entity in AM1), it would try to use the connection of AM1, which is wrong. When (on which event) and how should we instantiate AM2 (ensuring a write connection), get the corresponding view object, create the other record, and commit everything in one transaction? Btw, we cannot use a DB link.
    Regards,
    Vladimir

    Hi Vladimir,
    It seems to me that this is a generic BC4J issue, independent of JHeadstart. You could post it at the JDeveloper forum at http://otn.oracle.com/discussionforums/jdev.html.
    My suggestion (though I'm not sure it would work) would be to explicitly instantiate AM2 at the time you need it within AM1, with code like this:
      String amDef = "test.AM2";
      String config = "AM2Local";
      ApplicationModule am =
          Configuration.createRootApplicationModule(amDef, config);
      ViewObject vo = am.findViewObject("TestView");
      // Work with your appmodule and view object here
      Configuration.releaseRootApplicationModule(am, true);
    By the way, in JDeveloper 10g you can get this piece of code by typing bc4jclient followed directly by Ctrl+Enter.
    Hope this helps,
    Sandra Muller
    JHeadstart Team
    Oracle Consulting

  • How to implement Distributed Transactions in ADF?

    Scenario:
    When rows are selected and the action is submitted from the UI, the following happens:
    1) For the selected rows, information is fetched from the database, a message is formed, and it is posted to a JMS queue
    2) Once step 1 is successful, the corresponding record is logically deleted
    Implementation details:
    Environment: Oracle ADF, WebLogic Server
    Step 1: Created the UI with multi-selection
    Step 2: Getting the selected row key information and calling a custom AM method (exposed as a client interface) from the managed bean, passing the selected row key info
    Step 3: Implementing the following logic in the AM method
    while (keyList.hasNext()):
    1) get the View Object for the given key and populate the message
    2) post the message to the JMS queue configured in WebLogic, using JMS helper classes we have written
    3) update the delete flag in the VO
    Problem statement:
    In step 3, if the database goes down after substep 2 has executed, substep 3 will not execute,
    because of which the record will be displayed again in the UI, and when selected again a duplicate message will be sent to the concerned external system.
    I'm very new to ADF. Can anyone please help me on how to achieve this using ADF?
    Much Appreciated,
    Indu

    Thanks Vinod for your reply.
    At substep 2 we get the record from the database, populate the message and post it to the JMS queue.
    At substep 3, if the post is successful, we update the database to indicate the message has been sent by setting the delete flag.
    If I update the database first and then send the message, then in case of a JMS exception (e.g. a queue failure) my record is already updated and I cannot see the record back in the UI.
    So both these operations have to happen in a single transaction: either both happen or both are rolled back.
    The database and JMS are different resources. How does ADF handle these XA transactions?
    Thanks,
    indu
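    For completeness: the standard Java EE answer here is a JTA (XA) global transaction that enlists both resources, so the JMS send and the DB update commit or roll back together. ADF itself adds nothing special; the AM method can delegate to code like the hedged sketch below. The JNDI names, table and column are illustrative assumptions, and both the connection factory and the data source must be XA-enabled in WebLogic for this to be atomic.

        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;
        import javax.transaction.UserTransaction;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        public class PostAndFlag {

            public void postAndMarkDeleted(String msgText, long rowId) throws Exception {
                InitialContext ctx = new InitialContext();
                UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
                ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("jms/XAConnectionFactory"); // assumed name
                Queue queue = (Queue) ctx.lookup("jms/WorkQueue");             // assumed name
                DataSource ds = (DataSource) ctx.lookup("jdbc/XADataSource");  // assumed name

                utx.begin();
                javax.jms.Connection jc = cf.createConnection();
                Connection db = ds.getConnection();
                try {
                    // Inside a JTA transaction the session enlists in the XA transaction.
                    Session session = jc.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    session.createProducer(queue).send(session.createTextMessage(msgText));
                    PreparedStatement ps = db.prepareStatement(
                        "UPDATE work_items SET delete_flag = 'Y' WHERE id = ?");
                    ps.setLong(1, rowId);
                    ps.executeUpdate();
                    utx.commit();   // send and update commit together
                } catch (Exception e) {
                    utx.rollback(); // send and update are undone together
                    throw e;
                } finally {
                    db.close();
                    jc.close();
                }
            }
        }

    The well-known caveat still applies: even with XA, the consuming system should be idempotent, since "exactly once" only covers the transactional resources themselves.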

  • How to implement a configurable queue datatype?

    I have an LV6.1 application that uses queues to transfer cluster data from one VI to another. I want to be able to (re-)define the cluster makeup in one place and have both the sending VI and the receiving VI adapt. As a relative newbie, am I missing a well-known way of effectively defining a datatype and then using it in several VIs?
    If it makes it any easier, the variable part of the cluster is a two-dimensional array whose size may change. Otherwise the cluster makeup is fixed.
    On a related point, how do I pass a typed queue reference from one VI to another? I've found that if I wire a queue ref and then change the queue type, I have to delete and rewire the ref to get rid of errors.
    Ian

    Yes, it's a reference to the original cluster. Try these "new" VI versions: when you choose (with go!) to dequeue the elements, the value visualized is the value actually present on the front panel (which is the referenced object), while with the "pass value" version, all the values are visualized.
    Bye, L.
    Attachments:
    new_queue_ref.zip ‏32 KB
    queue_value.zip ‏35 KB

  • Implementing work queues? (or, "readpast equivalent in oracle?")

    Without going to Advanced Queuing, is there a
    more straightforward way to implement a work
    queue?
    As an example of what I'd like to do,
    I have a table with rows, each row identifying some
    work that has to be done.
    I would like to have multiple processes pick up the
    first available "unlocked" row, and select it for update.
    When the work is done the process would update the row and
    mark it done, and commit it, etc. If the process fails,
    it would rollback and that will be the right behaviour
    for my needs.
    I think a SQL Server equivalent of the READPAST hint
    to ignore row-locked entries would work nicely for what
    I want to do, but I'm interested in other solutions
    to this problem.
    Thanks,
    -kb

    Hello,
    Two thoughts come to mind. You could have a Java message listener listening on a queue for an event and then do whatever you need to do after it has received that event.
    The other option is to use the external procedure mechanism to invoke a bit of C code to start your Java code. A bit convoluted compared to the first option.
    Thanks
    Peter
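    On the actual READPAST question: Oracle does have an equivalent, SELECT ... FOR UPDATE SKIP LOCKED (documented as of 11g, and used internally by Advanced Queuing before that), which lets each process pick up unlocked rows and silently skip rows locked by other workers. A rollback releases the lock and makes the row available again, which matches the failure behaviour you describe. A minimal JDBC sketch; the work_items table and its columns are illustrative assumptions:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class WorkPicker {

            // Claims one unlocked, unprocessed row; returns its id, or -1 if none.
            // The row stays locked until commit/rollback on this connection.
            public long claimNext(Connection con) throws SQLException {
                con.setAutoCommit(false);
                String sql = "SELECT id FROM work_items WHERE status = 'NEW' "
                           + "FOR UPDATE SKIP LOCKED";
                try (Statement st = con.createStatement()) {
                    st.setFetchSize(1); // with SKIP LOCKED, rows are locked as fetched
                    try (ResultSet rs = st.executeQuery(sql)) {
                        if (rs.next()) {
                            return rs.getLong("id"); // do the work, UPDATE status, commit
                        }
                    }
                }
                return -1;
            }
        }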

  • How to implement the barcode printing functionality working on the Prod.

    Hi,
    Can anyone help me with how to get the barcode printing functionality working on the production system?
    Thanks
    Gangadhara

    Check this link --> barcode

  • How to Loop thru a Queue?

    I have a Queue that I can add and remove items from, but I need to display all the contents currently in the Queue.
    How would I loop thru the Queue and do this?
    Any simple example will do.
    Thanks!

    Thanks for the link. It showed me which methods an Iterator has, but not a practical way to use it. So I did some googling and found this example.
    import java.util.Iterator;
    import java.util.NoSuchElementException;
    for (Iterator iter = myList.iterator(); iter.hasNext(); ) {
       String key = (String) iter.next();
       System.out.println(key);
    }
    Which makes perfect sense. However, where it says Iterator iter = myList.iterator(); I am not sure what to put in the myList.iterator(); part.
    I would assume that my Queue object would be used there, however, my Queue object has no .iterator() method.
    Like I said, I am new to Java so this is not as easy to me as it seems it should be.
    But there is all of my code if it helps.....I'll leave out the unimportant parts.
    QueueNode class........................................
    public class QueueNode {
        //fields
        Object info;
        QueueNode link;
        public QueueNode() {
        }
        public QueueNode(Object item) {
            info = item;
            link = null;
        }
        public QueueNode(Object item, QueueNode qn) {
            info = item;
            link = qn;
        }
    }//end class
    Queue class..............................................
    public class Queue {
       //create a front and back node
       private QueueNode front;
       private QueueNode rear;
       private int size;
       //methods
       public void insert(Object item){
          //if queue is empty front and rear are same
          if(isEmpty()){
              rear=new QueueNode(item);
              front = rear;
          }//end if
          //otherwise queue not empty, add at the rear end
          else{
              rear.link= new QueueNode(item);
              rear=rear.link;
          }//end else
          //either case increment size
          size++;
       }//end insert
       public Object remove(){
           //create a temp node referencing the front
           QueueNode oldFront=front;
           //peek at front item
           Object item = peek();
           //make front to link with the next node
           front=front.link;
           //remove temp node
           oldFront=null;
           //decrement size
           size--;
           //return the item that was removed
           return item;
       }//remove
       public Object peek(){
           if(isEmpty()){
               throw new NullPointerException();
           }//end if
           else{
               return front.info;
           }//end else
       }//end peek
       public boolean isEmpty(){
           return(size==0);
       }//end isEmpty
       public int getSize(){
           return size;
       }//end getsize
    }//end class
    And finally, my class to test that the Queue is working....................
    import javax.swing.*;
    import javax.swing.border.BevelBorder;
    import java.awt.*;
    import java.awt.event.*;
    import java.util.Iterator;
    import java.util.NoSuchElementException;
    public class QueueTest extends JFrame implements ActionListener{
        //Create Queue object
        Queue q = new Queue();
       public QueueTest(){
            //Add title to window
            super("Testing Queue's");
         }//end constructor
        public void actionPerformed(ActionEvent event){
            if (event.getSource()==btnAddQueueItem){
                q.insert(txtAddQueueItem.getText());
                for(Iterator iter = q.?????; iter.hasNext();){
                    ///Output each queue item here........
                }//end for
            }//end if
             if (event.getSource()==btnRemoveQueueItem){
                 q.remove();
             }//end if
        }//end actionPerformed
        public static void main(String[] args) {
        QueueTest testQueue = new QueueTest();
        }//end main
    }//end class
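    For reference: since the custom Queue class above has no iterator() method, the simplest fix is to add one that walks the linked nodes from front to rear. A small sketch of such a method (an addition, not something the class already has), matching the raw pre-generics style of the rest of the code:

        // These imports go at the top of Queue.java:
        import java.util.Iterator;
        import java.util.NoSuchElementException;

        // ...and this method goes inside the Queue class:
        public Iterator iterator() {
            return new Iterator() {
                private QueueNode current = front; //start at the front node

                public boolean hasNext() {
                    return current != null;
                }

                public Object next() {
                    if (current == null) {
                        throw new NoSuchElementException();
                    }
                    Object item = current.info;
                    current = current.link; //advance along the links
                    return item;
                }

                public void remove() {
                    throw new UnsupportedOperationException();
                }
            };
        }

    With that in place, the loop in actionPerformed becomes for (Iterator iter = q.iterator(); iter.hasNext(); ) { ... }, exactly like the myList example above.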

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. This process is outlined below.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:-
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales, customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected records to be corrected in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: DB2 and Oracle operational sources feed a staging area and the Development, Test and Production warehouses through ETL jobs; further ETL populates the data marts (SAS data sets), which users access through BI tools, with no direct user access to the warehouses themselves.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster, aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Using SAS® Information Map Studio, an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases, and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 – Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium 2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 x 10/100/1 Gb Cu Ethernet card
    - 1 x Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 x 10/100/1 Gb Cu Ethernet card
    - 1 x Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • How can I clean up queue one-time on Sun Java system messaging 6.3

    Hi,
    Our email server has a problem. When I run ./imsimta qm directory tcp_local, there are about 5 million messages in the queue.
    Now our email server sends messages very slowly. How can I clean up the queue in one go?
    The command ./imsimta qclean is very slow.
    What can I do to prevent this problem?
    our Email server version is :
    Sun Java(tm) System Messaging Server 6.3-6.03 (built Mar 14 2008; 32bit)
    libimta.so 6.3-6.03 (built 17:12:37, Mar 14 2008; 32bit)
    SunOS email-1 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-V890
    Thank you !

    If you have more than 100,000 messages in the queue, then look at the MAX_MESSAGES parameter in the job_controller.cnf file (http://wikis.sun.com/display/CommSuite/Job+Controller+Configuration+File). If the parameter is not specified, it defaults to 100000. If you have more than that number of messages in the channel queues, it will take a long time for new/legitimate messages to be sent because job_controller is only considering the first 100,000 messages in the queue.
    If you get into the "imsimta qm" command and do "sum -database", it will show a summary of what job_controller has in its internal cache and therefore what job_controller is working on. If you compare that to "sum -directory" you will probably see a difference.
    If these are all legitimate messages, you need to increase MAX_MESSAGES, run cnbuild, and restart job_controller.
    If they are not, then preventing this in the future will require determining how they got into your queue and correcting that.
    As for removing them, the "imsimta qm" commands allow you to select messages by various criteria and then you can "return" or "delete" them. But yes, that will take a long time if there are many messages because it is a single threaded process and it needs to open and examine each message. You may be able to accomplish the task more quickly with a shell script that works on individual channel queue subdirectories and then run multiple copies of that script in parallel. After you have cleaned out the queue, restart job_controller.
    Also, you might want to consider the subdirs channel keyword (http://msg.wikidoc.info/index.php/Subdirs_and_nosubdirs_Channel_Options).

  • How can i redirect a queued call to a voicemail using CCX editor?

    Hi buddies, I wonder how I can redirect a queued call to the voicemail of a valid extension using the CCX editor?
    I just uploaded my script (it does not work as I wanted regarding voicemail). If anybody can help me I will be very happy!!

    First of all, I REALLY appreciate your reply,
    but two questions remain:
    a) Check my attached file: is this the VM pilot number?
    b) Called Address = "2133" means that the call will be redirected to the voicemail of extension 2133? Am I right?

  • How to implement a java class in my form .

    Hi All ,
    I'm trying to create a Button or a Bean Area item and attach a class to it via the (IMPLEMENTATION CLASS) property, such as the (oracle.forms.demos.RoundedButton) class, but it doesn't work... please tell me how to attach such a class to my button.
    Thanx a lot for your help.
    AIN

    Hi [email protected],
    tell me, my friend: how can I extend the standard Forms button in Java? What is the tool for that? Can you explain more, please, or give me a full example? I don't have any experience with that. I'm waiting for your reply.
    Thanx a lot for your cooperation.
    Ali
    Quote, originally posted by [email protected]:
    Henrik, the Java importer lets you call Java classes on the app server side - I think what Ali is trying to do is integrate on the client side.
    If you want to add your own button, then if you extend the standard Forms button in Java and use this class name in the Implementation Class property, the Java for your button will be used instead of the standard Forms button. And since it has extended the basic Forms button it has all the standard button functionality.
    There is a white paper on OTN about this and we have created a new white paper which will be out in a couple of months (I think).
    Regards
    Grant Ronald
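    To make Grant's suggestion concrete: the mechanism is a Pluggable Java Component (PJC). You extend the Forms client button class and put the new class name in the button's Implementation Class property. Below is a hedged sketch; the base class oracle.forms.ui.VButton and the package name are assumptions based on the standard PJC demos, so check the OTN white paper for the exact API of your Forms version.

        package my.pjc; // illustrative package; must match the Implementation Class property

        import java.awt.Color;
        import java.awt.Graphics;
        import oracle.forms.ui.VButton;

        public class RoundedButton extends VButton {

            public RoundedButton() {
                super(); // keeps all standard Forms button behaviour
            }

            // Custom look: paint a rounded rectangle, then the normal button content.
            public void paint(Graphics g) {
                g.setColor(Color.lightGray);
                g.fillRoundRect(0, 0, getWidth() - 1, getHeight() - 1, 12, 12);
                super.paint(g);
            }
        }

    Compile it against the Forms client classes (frmall.jar or similar, depending on version), deploy the class where the Forms applet can load it, and set Implementation Class to my.pjc.RoundedButton.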

  • How to implement reading data from a mat file on a cRIO?

    Hi all!
    I am not even sure this is plausible, but I'd rather ask before I start complicating things. So far, I have not found any helpful info about reading data into an RT device from a file (a kind of simulation test - the data is simulated).
    I have the MATLAB plugin that allows data storage to read a MAT file, which has a number of columns representing different signals and lines representing the samples at a given time (based on the sample time - each sample time has its own line of signal data).
    I have no idea how to implement this on the cRIO.
    The idea is:
    I have some algorithms that run on the RIO controller in a timed loop. As the inputs to these algorithms, I would need to access each of the column values in the line that corresponds to the sample time (a kind of time series - without the actual times written).
    I am fairly new to RT and LV development, so any help would be much appreciated.
    Thanks, 
    Luka

    Hi Luka!
    MAT file support in LabVIEW is handled by DataPlugins. Here you can find the MATLAb plugin:
    http://zone.ni.com/devzone/cda/epd/p/id/4178
    And here you can find information on how to use DataPlugins to read or write files of a certain format:
    http://www.ni.com/white-paper/11951/en
    There is also an open-source project that addresses the problem, you can find it at
    http://matio-labview.sourceforge.net/
    Unfortunately, RT systems are not supported by DataPlugins, so first you'll have to write a VI on the host computer to convert your files to a usable format (I suggest TDMS for compactness and ease of use), and then work on the converted files in the cRIO VI. If you have other questions about DataPlugins or anything else, please get back to me.
    Best regards:
    Andrew Valko
    National Instruments

  • How to implement multiple Value Helps within the same Application ??

    Dear Experts,
    I want to implement multiple value helps in the same view. For that I have declared exporting parameters of type 'wdy_key_value_table' within the component controller for each of the value helps. When I activate and test the application I get the following error:
    The following error text was processed in the system HE6: A row with the same key already exists.
    The error occurred on the application server hsdnt24s11_HE6_00 and in the work process 4 .
    The termination type was: RABAX_STATE
    The ABAP call stack was:
    Method: VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_PO_VIEW1~VALUESET_BSART of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: WDDOINIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: IF_WDR_VIEW_DELEGATE~WD_DO_INIT of program /1BCWDY/9VSHJWRNR0EZPKFT3ZKC==CP
    Method: DO_INIT of program CL_WDR_DELEGATING_VIEW========CP
    Method: INIT_CONTROLLER of program CL_WDR_CONTROLLER=============CP
    Method: INIT_CONTROLLER of program CL_WDR_VIEW===================CP
    Method: INIT of program CL_WDR_CONTROLLER=============CP
    Method: GET_VIEW of program CL_WDR_VIEW_MANAGER===========CP
    Method: BIND_ROOT of program CL_WDR_VIEW_MANAGER===========CP
    I don't know how to implement multiple value helps. Need your help on this.
    Regards,
    Mamai.

    Hi
    The hint is: "A row with the same key already exists" means it is assigning the same value/key to a row, and you are calling it in WDDOINIT, so it gives an error at initialization time.
    A better way is to do the coding in some event in the view, or, if that is not possible, execute the first value help in WDDOINIT and clear all the values before getting the other value help. Code it in WDDOMODIFYVIEW to get its runtime behaviour.
    BR
    Satish Kumar

  • How to implement Strategy pattern in ABAP Objects?

    Hello,
    I have a problem where I need to implement different algorithms depending on the type of input. Example: I have to calculate a Present Value, sometimes with payments in advance, sometimes with payments in arrears.
    To learn from documentation and to enhance my ABAP Objects skills, I would like to implement the strategy pattern. It sounds like the right solution for the problem.
    Hence I need some help in implementing this pattern in OO. I have some basic OO skills, but I am still learning.
    Has somebody already implemented this pattern in ABAP OO who can give me some input? Or is there any documentation on how to implement it?
    Thanks and regards,
    Tapio

    Keshav has already outlined the required logic, so let me complete his answer with a snippet.
    An Interface
    INTERFACE lif_payment.
      METHODS pay CHANGING c_val TYPE p.
    ENDINTERFACE.
    Payment implementations
    CLASS lcl_payment_1 DEFINITION.
      PUBLIC SECTION.
      INTERFACES lif_payment.
      ALIASES pay for lif_payment~pay.
    ENDCLASS.                 
    CLASS lcl_payment_2 DEFINITION.
      PUBLIC SECTION.
      INTERFACES lif_payment.
      ALIASES pay for lif_payment~pay.
    ENDCLASS.                   
    CLASS lcl_payment_1 IMPLEMENTATION.
      METHOD pay.
        "do something with c_val i.e.
        c_val = c_val - 10.
      ENDMETHOD.
    ENDCLASS.
    CLASS lcl_payment_2 IMPLEMENTATION.
      METHOD pay.
        "do something else with c_val i.e.
        c_val = c_val + 10.
      ENDMETHOD.
    ENDCLASS.
    Main class which uses strategy pattern
    CLASS lcl_main DEFINITION.
      PUBLIC SECTION.
        "during main object creation you pass which payment you want to use for this object
        METHODS constructor IMPORTING ir_payment TYPE REF TO lif_payment.
        "later on you can change this dynamicaly
        METHODS set_payment IMPORTING ir_payment TYPE REF TO lif_payment.
        METHODS show_payment_val.
        METHODS pay.
      PRIVATE SECTION.
        DATA payment_value TYPE p.
        "reference to your interface whcih you will be working with
        "polimorphically
        DATA mr_payment TYPE REF TO lif_payment.
    ENDCLASS.                  
    CLASS lcl_main IMPLEMENTATION.
      METHOD constructor.
        IF ir_payment IS BOUND.
          me->mr_payment = ir_payment.
        ENDIF.
      ENDMETHOD.                  
      METHOD set_payment.
        IF ir_payment IS BOUND.
          me->mr_payment = ir_payment.
        ENDIF.
      ENDMETHOD.                  
      METHOD show_payment_val.
        WRITE /: 'Payment value is now ', me->payment_value.
      ENDMETHOD.                  
      "hide fact that you are using composition to access pay method
      METHOD pay.
        mr_payment->pay( CHANGING c_val = payment_value ).
      ENDMETHOD.
    ENDCLASS.
    Client application
    PARAMETERS pa_pay TYPE c. "1 - first payment, 2 - second
    DATA gr_main TYPE REF TO lcl_main.
    DATA gr_payment TYPE REF TO lif_payment.
    START-OF-SELECTION.
      "client application (which uses stategy pattern)
      CASE pa_pay.
        WHEN 1.
          "create first type of payment
          CREATE OBJECT gr_payment TYPE lcl_payment_1.
        WHEN 2.
          "create second type of payment
          CREATE OBJECT gr_payment TYPE lcl_payment_2.
      ENDCASE.
      "pass payment type to main object
      CREATE OBJECT gr_main
        EXPORTING
          ir_payment = gr_payment.
      gr_main->show_payment_val( ).
      "now client doesn't know which object it is working with
      gr_main->pay( ).
      gr_main->show_payment_val( ).
      "you can also use set_payment method to set payment type dynamically
      "client would see no change
      if pa_pay = 1.
        "now create different payment to set it dynamically
        CREATE OBJECT gr_payment TYPE lcl_payment_2.
        gr_main->set_payment( gr_payment ).
        gr_main->pay( ).
        gr_main->show_payment_val( ).
      endif.
    Regards
    Marcin

  • How to get printer working

    This has always happened. Printers never get recognized

    Hi..
    My customer is using an HP 2600n printer, set up as a network printer. The HP website mentions that this printer only has support for Mac and Windows.
    By the way, a Sun engineer told me that this can be solved by using CUPS.
    I have installed CUPS and tested on a similar model printer, e.g. an HP 1022n,
    and it works, but it can only print the CUPS test page.
    My question: how do I make printers like the HP 1022n and HP 2600n visible to applications such as
    StarOffice and the editor, as they don't show the printers that I have set up in CUPS?
    Do we have to set this on the Solaris side, or do you have any ideas/info on how to implement
    CUPS for all non-PostScript printers?
    I need your ideas. Getting stuck now. :(
    Many thanks
