Building the MIDP 2.0 Reference Implementation on Win2k

Hi,
I built the MIDP 2.0 Reference Implementation in the following environment:
cygwin: 1.5.10_3
MIDP_DIR: c:\j2me\midp_2-0
KVM_DIR: c:\j2me\CLDC_1-0-4
ALT_BOOTDIR: c:\jdk1.3.1
and I received this error message:
Microsoft (R) Incremental Linker Version 6.00.8447
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.
... jcc_classes/JavaCodeCompact.class
Note: Some input files use or override a deprecated API.
Note: Recompile with -deprecation for details.
... searching updated .java files
... compiling 256 .java files
... preverifying 298 .class files
... classes.zip
Can't open perl script "c:\Program": No such file or directory
make: *** [classes.zip] Error 2
I think the problem is caused by a wrong PATH environment variable, but I don't know the solution.

Something is installed in a folder with a space in its name; for example, it will be installed in "c:\program files\...". You may be able to surround that part of the path with quotes. Without knowing which program is causing the problem, it's going to be tricky to figure out how to fix it. It could be a PATH thing (I have loads of development stuff installed with no path problems) or it could be another environment variable.
It could be Perl itself. I can't remember if I had problems with that; I might have done. I think I installed Perl into c:\perl rather than c:\program files\perl (for example) to get around the problem. Another trick that sometimes works is pointing the build at the 8.3 short name (usually c:\progra~1 for "c:\Program Files"), which contains no space. Shrug, I can't remember now.

Similar Messages

  • MIDP 2.0 Reference Implementation strange code

    I've found some strange code in http/Protocol.java of the MIDP 2.0 Reference Implementation, at line 139 in the static initializer:
    prop = Configuration.getProperty(
                "com.sun.midp.io.http.max_persistent_connections");
    if (prop != null) {
        try {
            temp = Integer.parseInt(prop);
            if (temp <= 0) { //!!!!
                maxNumberOfPersistentConnections = temp;
            }
        } catch (NumberFormatException nfe) {
            // keep the default
        }
    }
    Can somebody explain to me why it looks like this? Surely it must be ">".
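    For comparison, here is the check as the poster expects it to read (a sketch only, flipping just the comparison):
    if (temp > 0) { // accept only positive overrides of the default
        maxNumberOfPersistentConnections = temp;
    }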
    Additionally, it looks like the WTK22 emulator doesn't understand the property.

    No ideas?

  • Support for SWF Verification in the reference implementation server

    The Server for Protected Streaming provides a way to configure a whitelist of allowed SWF hashes. However, from what I can tell, the reference implementation server does not have this capability. Any suggestions on how to implement SWF Verification in the reference implementation server?

    To use SWF whitelists with the Reference Implementation, set the SWF whitelist in a policy, and use that policy when packaging the content. When issuing a license, the Reference Implementation will use whatever restrictions were specified in the policy at packaging time. Alternatively, since the Reference Implementation ships with source code, you can modify the server so you can specify the SWF whitelist information at the time the license is generated.

  • Link to download the RMI optional package reference implementation

    Hi,
    I could not find any links for downloading the reference implementation of the RMI optional package. Is this package available for download? Please give me the link if you find it. Thanks in advance.
    regards,
    Anand
    Edited by: Anand.Raman on Apr 22, 2009 2:48 AM

    This is a deprecated URL that will be retired soon.
    You can download this RI here:
    https://java-partner.sun.com/support/login.action
    Assuming that you have a Java Partner login.

  • Is the jmxremote api reference implementation FOSS?

    I'm not great at understanding licenses and need to know if the JMX Remote API Reference Implementation found here is FOSS or just COTS.
    Thanks.

    Hi,
    We have a very similar problem on iPad: a simple application with 3 movie clips bouncing off the edges of the screen, just for testing purposes. We check for TOUCH_MOVE, and when a finger is moved on the screen everything starts lagging. We are baffled; this is a serious problem for us.
    Has anyone else had this problem? Can we somehow lower the sampling frequency? And most importantly, why is this even happening?
    Thanks in advance for any advice.
    Konstanty

  • Calling a servlet from a JSP page using the J2EE reference implementation

    I have a JSP with an include tag as follows: <jsp:include page="servlet/ConnectionServlet" flush="true" />
    When I use JRUN it works fine. I created an ear file and ported
    the application to the J2EE reference implementation. When running the app under the J2EE reference implementation, the ConnectionServlet is never called. I figured it must be a deployment issue. I tried adding the ConnectionServlet.class file to the WEB-INF\classes directory as servlet\ConnectionServlet.class, but the JSP still can't find the servlet. Any ideas where I've gone wrong? TIA, Joe

    I have a JSP with an include tag as follows:
    <jsp:include page="servlet/ConnectionServlet" flush="true" />
    Basically, WEB-INF/classes gets added to the classpath, so the directory structure under this folder should be identical to your package structure. If ConnectionServlet.class is not actually in a package, then it should be directly in WEB-INF/classes (i.e. if "servlet" isn't actually the name of your package, don't use a "WEB-INF/classes/servlet/" directory).
    Then try taking out the "servlet" from your include tag, so you just have page="/ConnectionServlet" (not sure about the leading slash - try experimenting!)
    If this doesn't work, try adding this to your WEB-INF/web.xml file:
    <web-app> <!-- the web-app tags may already be there - don't add more -->
      <servlet>
        <servlet-name>ConnectionServlet</servlet-name>
        <servlet-class>your.full.package.here.ConnectionServlet</servlet-class>
      </servlet>
      <servlet-mapping>
        <servlet-name>ConnectionServlet</servlet-name>
        <url-pattern>/ConnectionServlet</url-pattern>
      </servlet-mapping>
    </web-app>
    Good Luck!

  • Where do I find the SAP JDO reference implementation?

    Where do I find the SAP JDO reference implementation?

    I know that, but Sun, for example, released this:
    https://sdlc5b.sun.com/ECom/EComActionServlet;jsessionid=4464320EBFC4EB51678647323C0135B7
    Does something similar exist from SAP for its JDO?

  • Why not port the CDC-PP reference implementation to PocketPC ourselves?

    Hi All,
    Seeing your open letter about Personal Profile for PocketPC, I have a question: why not port the CDC-PP reference implementation to PocketPC ourselves? Or do you know of any open source project on this topic?
    In the Java world, we should DIY (Eclipse, JUnit, Ant......)
    Regards!
    Tiger

    Sorry, I missed the "reference implementation" in the subject. Can you even get the thing to compile? Taking a guess at which of the umpteen makefiles to use, I get:
    pjt33@charis:/tmp/personal/build/share/$ make -f rules.mk
    rules.mk:289: /empty.mk: No such file or directory
    rules.mk:394: warning: overriding commands for target `/'
    rules.mk:20: warning: ignoring old commands for target `/'
    rules.mk:422: *** target file `/' has both : and :: entries.  Stop.
    There are also licensing issues with using the reference implementation - if you want to distribute it properly, it seems you have to pay Sun an annual fee for the Technology Compatibility Kit.

  • Reference Implementation: Cannot find flashaccess-refimpl.properties

    [ Problem ]
    I’ve just downloaded the latest drop of Flash Access. I have created a new project in Eclipse so that I can make changes to the reference implementation. When I access the following URL from a browser:
    http://localhost:8080/flashaccess/license/v1
    I get an error: "Could not find server properties". The following line of code in RefImplAbstractReqHandlerServer.java cannot find the server properties file:
    InputStream propsInputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(propertiesFileName);
    I have added the "resources" directory that contains flashaccess-refimpl.properties to my classpath, but I still get the same error.
    I’m guessing I have missed something. Does anyone have suggestions for how I can overcome this error?
    [ Solution ]
    I am assuming you were adding the "resources" directory to the classpath via
    project properties -> Java Build Path -> Add Class Folder. That way, the resources directory is simply not on the web container’s classpath.
    With the Eclipse IDE, it works when I go this way:
    1. Go to Run -> Run Configurations -> "Apache Tomcat" (I am using Tomcat)
    2. Select the server instance you are running the refimpl on
    3. Click the "Classpath" tab
    4. In "User Entries", add the resources folder via Advanced -> Add Folder
    Then restart the refimpl.
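    To see why the Run Configuration classpath matters, here is a minimal standalone sketch of the same lookup the handler performs; the properties file name comes from the post, while the class name PropsCheck is just illustrative:
    import java.io.InputStream;
    import java.util.Properties;

    public class PropsCheck {
        public static void main(String[] args) throws Exception {
            // The name is resolved against the *context* classloader, so the
            // folder holding the file must be on the classpath of whatever
            // process runs this code (here: the Tomcat instance, not the
            // Eclipse project build path).
            String name = "flashaccess-refimpl.properties";
            InputStream in = Thread.currentThread()
                    .getContextClassLoader()
                    .getResourceAsStream(name);
            if (in == null) {
                System.err.println("Not on the context classpath: " + name);
                return;
            }
            Properties props = new Properties();
            props.load(in);
            in.close();
            System.out.println("Loaded " + props.size() + " properties");
        }
    }
    Running this with and without the resources folder on the classpath reproduces the "Could not find server properties" symptom.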

    Hi 高麻雀,
    You can edit the properties of configuration baselines and configuration items only if they were created at the same Configuration Manager site. The icons will have a lock symbol, and you can view their properties but not edit them in the following circumstances:
    They have been imported from Microsoft System Center Configuration Manager 2007 Configuration Packs.
    They have been imported from an external source, such as the Configuration Manager 2007 communities.
    They have been imported from another Configuration Manager 2007 hierarchy.
    They have been authored externally and then imported.
    They have been inherited by a parent site in the same Configuration Manager 2007 hierarchy.
    Solution
    Although this behavior is by design, you can modify imported data in several ways; refer to the link below for details:
    Problems Editing Configuration Data: http://technet.microsoft.com/en-us/library/bb633142.aspx
    Related post:
    http://social.technet.microsoft.com/Forums/en-US/configmgrdcm/thread/7ea89eea-00f4-436f-a055-b72ab14dfa97
    Rajeesh M | My Tech Blog: ScorpITs | LinkedIn

  • What is BI? How do we implement it, and what is the cost to implement?

    What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description addressing your query.
    Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making. This process is outlined below.
    The five key stages of Business Intelligence:
    1. Data Sourcing
    2. Data Analysis
    3. Situation Awareness
    4. Risk Assessment
    5. Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
    • probability theory - e.g. classification, clustering and Bayesian networks;
    • statistical methods - e.g. regression;
    • operations research - e.g. queuing and scheduling;
    • artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.). Situation awareness is the grasp of the context in which to understand and make decisions. Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and a logical inventory were conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory comprised a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejection and correction of records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboards/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: development, test and production warehouse environments; DB2, Oracle and other operational sources feed the warehouses through ETL and a staging area (SAS data sets); further ETL populates the data marts, which users access through DB2/Oracle BI tools; there is no direct user access to the warehouses themselves.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets, and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY: AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster, access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier, which is currently authenticated using Microsoft Active Directory. After a successful authentication the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster, aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Using SAS® Information Map Studio, an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases, and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8 GHz processors and 2 GB of RAM, running Windows 2000 Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4-way 64-bit 1.5 GHz Itanium2 server
    - 16 GB RAM
    - 2 x 73 GB drives (RAID 1) for the OS
    - 1 x 10/100/1Gb Cu Ethernet card
    - Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2-way 32-bit 3 GHz Xeon server
    - 4 GB RAM
    - 1 x 10/100/1Gb Cu Ethernet card
    - Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results, that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • Use of "Open FPGA VI Reference" function --- Build Specification vs VI vs Bitfile

    When using the "Open FPGA VI Reference" function in a LV2012 cRIO application, there are 3 options: Build Specification, VI, or Bitfile. What would be the reasons for selecting one over the others? Does it affect the resulting startup.rtexe when the cRIO application is built? I searched through the help and in these forums, but I don't see criteria for selecting one over the others; maybe I missed it.

    Hello Chris,
    Apologies in advance for a long reply.  
    The reference method won't change the functionality of your rtexe. All three options end up dropping a bitstream, based on a bitfile, onto the cRIO's FPGA.
    To a degree, the method used to reference the FPGA code is a matter of taste, but there are situations where one method is better suited than the others.
    Reference by VI:
    Setting the configuration options to open the reference by VI is helpful during development, when you are making changes to an FPGA VI often and building/testing using the same spec. When this option is used, a bitfile is selected based on the default build specification for the project. A project may have only one default build specification. You can make any build the default by checking the option under the Source Files category in the build properties. The default build is indicated in the project explorer by the green box around the build's icon.
    Reference by Bitfile:
    This option references a bitfile directly. Through the configuration window, you can select one specific bitfile to open a reference to (this is not dynamic and does not change unless you physically go make a change to that path). If you're using this method, it helps to give your bitfiles more meaningful names than the ones that are automatically generated by LabVIEW. When you run subsequent compilations off of the same build specification and do not change the bitfile name or path in the build configuration, the old bitfile is overwritten and replaced with the new one. When you are using this option, it is critical that you keep track of which bitfile is the one you want to be using. There is an option now that will help alleviate any problems referencing by bitfile through the Open FPGA VI Reference function: a new VI called Open Dynamic Bitfile Reference. It is typically used when you want to choose a specific bitfile to load depending on something in your host code (a configuration option, etc.), and it allows you to dynamically reference a bitfile on the block diagram by path.
    Reference by Build Specification:
    This option is good for when you want to always use a bitfile that is associated with and compiled from the same build configuration. Say you have two options for top-level FPGA VIs in your project (each with its own build spec). Both of these VIs have the same interface (read/write controls, DMA) but they run different algorithms or something. This is nice because you can easily switch your host application between them by picking the build spec associated with the FPGA VI you want to use. In this type of situation, referencing by VI is no good because you can only have one default build spec.
    cheers.
    Matthew H.
    Applications Engineer
    National Instruments

  • Implementing security for the Projects in OWB

    Hi,
    Are we able to implement security for individual projects?
    Thanks
    Vinay

    Hi,
    I do not know exactly what kind of security you want to implement, but below is an excerpt from the OWB User Guide about implementing security at the project level.
    You should be able to find more info in the OWB User Guide.
    Freezing Projects
    If you want to freeze the project MY_PROJECT and prevent access to all its contents, the following restrictions will apply:
    • You cannot create, edit, or delete any objects under a frozen project.
    • You cannot invoke any of the services that modify objects within this frozen project. For example, you cannot perform an MDL import, a source import, or a snapshot restore in this project.
    • You can deploy, export, and execute runtime procedures within a frozen project.
    • You can validate and generate within a frozen project.
    • You cannot add or remove any objects from a frozen project to a snapshot.
    The frozen project security policy is implemented within Warehouse Builder through the following files, located on your installation CD under samples/security_feature/frozenproject:
    • frozenProject.pkb: Holds the implementation of the security policy.
    • frozenProject.sql: Contains a table of the structure shown in Table 19-2. The administrator can freeze projects by inserting them into this table and setting the isFrozen flag to 1.
    HTH
    mahesh

  • Connecting JMS to J2EE Reference Implementation 1.3 beta

    Is it possible to write a JMS client that talks to the J2EE server without having j2ee.jar on its path? Something like a small remote client communicating with the server via JMS? Do I have to use some third-party software (I have SwiftMQ in mind) for this? If somebody has a configuration for a SwiftMQ bridge to the J2EE Reference Implementation, can you share it?

    The JMS service provider that comes with J2EE SDK 1.3 is only a reference implementation of the JMS API.
    In a real application system, one will be using a JMS service provider from some vendor. Preferably one that is certified to be J2EE compatible.
    In fact, this is true for anything in j2ee.jar. Through j2ee.jar, Sun provides a reference implementation of the J2EE APIs with its J2EE SDK, and in a real application one will not have j2ee.jar in the client or server classpath. Instead one will use jar files specific to the vendor that one has chosen.
    Specifically about SwiftMQ: as per Sun, it is not a J2EE licensee. SwiftMQ is JMS 1.0.2 compliant, but it is not certified to be J2EE compatible.
    Refer to http://java.sun.com/products/jms/nonlicensedvendors.html for more details.
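    For what it's worth, a standalone JMS client compiles against only the javax.jms and javax.naming interfaces; at runtime you put the chosen vendor's client jars (not j2ee.jar) on the classpath. Below is a minimal sketch. The JNDI lookup names "QueueConnectionFactory" and "MyQueue" are assumptions that depend entirely on how the administered objects are configured on your server, and the JNDI environment (initial context factory, provider URL) is expected to come from a vendor-supplied jndi.properties file:
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class RemoteJmsClient {
        public static void main(String[] args) throws Exception {
            // Resolve administered objects via JNDI; the names are placeholders.
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory");
            Queue queue = (Queue) ctx.lookup("MyQueue");
            ctx.close();

            // Plain JMS 1.0.2-style point-to-point send.
            QueueConnection connection = factory.createQueueConnection();
            QueueSession session =
                connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            sender.send(session.createTextMessage("hello from a remote client"));
            connection.close();
        }
    }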

  • Where can I get a reference implementation of XQJ?

    Where can I get a reference implementation of XQJ?
    - Raees Uzhunnan

    A reference implementation of XQJ is not available yet. When it becomes available, you will find it on the XQJ JSR page (http://jcp.org/en/jsr/detail?id=225).
    Regards,
    Geoff

  • Inconsistency using SUN Rowset Reference Implementation

    I am using the RowSet reference implementation with an Oracle database.
    I have a class which takes input in the form of XML adhering to the WebRowSet format.
    The XML contains some records to be inserted, some to be updated, and some to be deleted.
    This XML is then fed to a WebRowSet using the webRowSet.readXml method, which takes a Reader as a parameter. I am using a StringReader here.
    Here is the code snippet:
    public void executeUpdate(String webRowSetXML) throws SQLComponentException {
        System.out.println(" webRowSetXML " + webRowSetXML);
        StringReader sr = new StringReader(webRowSetXML);
        WebRowSet webRowSet = null;
        try {
            webRowSet = new WebRowSetImpl();
            webRowSet.readXml(sr);
            // 'connection' is a field of the enclosing class
            webRowSet.acceptChanges(connection);
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
    I found that the insertion and updating work without any problem, but the deletion doesn't.
    To study this problem, I wrote a program with two WebRowSets: one is the producer of the data and the other is the consumer.
    The first WebRowSet reads the data from the data source and deletes a record using the following piece of code:
    wrs1.absolute(6);
    wrs1.deleteRow();
    Then I generate an XML string using the writeXml method.
    Then I feed this XML to the readXml method of the second WebRowSet and call the acceptChanges method of the second WebRowSet. But the record doesn't get deleted. Also, after printing the XML from the second WebRowSet, the <deleteRow> tag is absent from the XML.
    The same scenario works properly for the insert and update operations, and the respective <insertRow> and <updateRow> tags appear in the XML generated from both WebRowSets.
    Is this a bug in the WebRowSet implementation, or is it a problem with Oracle? Or does something else need to be done for the delete operation to work?
    I tried the same thing with Pervasive SQL instead of Oracle. There, only the insert operation works.
    Can anybody please help me solve this problem?
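    For anyone who wants to reproduce this, here is a self-contained sketch of the two-rowset experiment described above; the connection URL, credentials, and table are placeholders. One thing worth checking, purely as an assumption based on CachedRowSet semantics (not verified against the RI): whether deleted rows are omitted from the XML unless setShowDeleted(true) is set before writeXml.
    import java.io.StringReader;
    import java.io.StringWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import javax.sql.rowset.WebRowSet;
    import com.sun.rowset.WebRowSetImpl; // Sun's rowset RI

    public class DeleteRowExperiment {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- adjust for your database.
            Connection connection = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger");

            // Producer: load rows, delete the 6th one, serialize to XML.
            WebRowSet producer = new WebRowSetImpl();
            producer.setCommand("SELECT * FROM emp"); // placeholder query
            producer.execute(connection);
            // Assumption to test: without this, writeXml may simply skip
            // deleted rows, so no <deleteRow> element appears in the output.
            producer.setShowDeleted(true);
            producer.absolute(6);
            producer.deleteRow();
            StringWriter sw = new StringWriter();
            producer.writeXml(sw);

            // Consumer: read the XML and push the pending changes back.
            WebRowSet consumer = new WebRowSetImpl();
            consumer.readXml(new StringReader(sw.toString()));
            consumer.acceptChanges(connection);

            connection.close();
        }
    }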

    http://java.sun.com/j2se/1.4.1/docs/api/javax/sql/RowSet.html
    From the Sun description above:
    The RowSet interface is unique in that it is intended to be implemented using the rest of the JDBC API. In other words, a RowSet implementation is a layer of software that executes "on top" of a JDBC driver. Implementations of the RowSet interface can be provided by anyone, including JDBC driver vendors who want to provide a RowSet implementation as part of their JDBC products.
    rykk
