Statistical significance of causal factors in Demantra

Hey everyone,
I bet you missed me. Well, I have another question that is bugging me regarding Demantra implementations.
Say I add a causal factor to my business model and, of course, re-run the analytical engine to generate a new forecast.
How can I tell whether the causal factor I presumed to be affecting demand really is affecting demand, in the sense of statistical significance?
I guess the answer lies in finding the coefficients the engine calculates for each causal factor,
but I am unsure where, and whether, that data is available to me.
(I have found some COEFFICIENT_DATA_0 tables but have no idea what to make of their data.)
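For context, all I have done so far is sample rows from that table to see what it contains, along these lines (the column layout is not documented anywhere I can find and seems to vary by version, so I just select everything):
-- Peek at the engine's coefficient output; adjust the table suffix to your setup
SELECT *
  FROM coefficient_data_0
 WHERE ROWNUM <= 20;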
I think this kind of feedback is essential for building an accurate business model,
anyone able to help?
Thanks in advance,
Aaron

Hey there,
Shekhar - I really appreciate your quick replies. Still, what you're suggesting is not that easy to achieve either.
Your approach is certainly right when you have a working model and add just one new causal factor,
but that isn't the case most of the time.
If I start my model with 20 causal factors, I want to know which are relevant and which aren't.
I'm not going to do an engine run for every combination to see which one has the most effect.
The right way should be to get this output from the system itself.
I'd be surprised if Demantra cannot produce such an output, since most statistical software I've worked with (SPSS, for example) reports the relevant coefficients and their significance for the model.
Thanks again, still waiting for other ideas ,
Aaron.

Similar Messages

  • Demantra Price causal factor

    Hi all,
    I would like to know whether, for the local causal factor "price" to work, it is enough to have the column item_price in sales_data populated up to the end of sales history, or whether you have to project it into the future as well.
    And how is future price information loaded into Demantra?
    Thank you
    AMC

    Hi AMC,
    You can create an Integration Interface to load data either from an Oracle table or a flat file. You just need to populate the BIIO_XXX staging table that Demantra creates when you define the integration interface.
    That way it's not mandatory to integrate Demantra with any ERP.
    In some cases customers use only Oracle VCP (Demantra is part of it) with SAP/BAAN as the transaction system. There they have created their own integration interfaces to load specific information into Demantra (e.g. price, inventory levels, discounts, etc.).
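    For illustration, once the interface exists, populating its staging table can be as simple as an insert like the one below before running the import workflow (the BIIO_* table and column names here are placeholders; Demantra generates the real structure when the interface is created):
    -- Hypothetical staging-table load for a future price; replace names with
    -- the ones Demantra generated for your integration interface
    INSERT INTO biio_item_price (sdate, item_code, site_code, item_price)
    VALUES (DATE '2011-01-31', 'ITEM_A', 'SITE_01', 19.99);
    COMMIT;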
    Regards,
    Milind...

  • Lead time information in Demantra

    Hi
    For a POC, we need to get lead time data from the ERP (R12) into Demantra. Can I use lead time as one of the attributes in the data model? Is there any better way to represent it in Demantra?
    What are the inputs for forecast generation in Demantra (other than the historical series, causal factors, and engine/non-engine parameters)?
    Thanks
    DNP

    Mismatch of forecast values having similar history on 7.2.0.2 VS 6.2.6
    Hi,
    Check the following threads; I hope they help:
    Demantra - history period to be considered for forecast ???
    Forecast generation horizon
    Demantra Forecast clarification
    System Parameter
    Demantra - sytem parameter - max_fore_sales_date
    Thanks
    MJ

  • SQL User Defined Functions for performing statistical calculations

    Hi!
       I hope you can help.  I just wasn’t sure where to go with this question, so I’m hoping you can at least point me in the right direction.
       I’m writing a SQL Server stored procedure that returns information for a facility-wide scorecard-type report.  The row and columns are going to be displayed in a SQL Server Reporting Services report. 
       Each row of information contains “Current Month” and “Previous Month” numbers and a variance column.  Some rows may compare percentages, others whole numbers, others ratios, depending on the metric they’re measuring.  For each row/metric the company has specified whether they want to see a t-test or a chi-squared statistical test to determine whether or not there was a statistically significant difference between the current month and the previous month. 
       My question is this:  Do you know where I can find a set of already-written user defined functions to perform statistical calculations beyond the basic ones provided in SQL Server 2005?  I’m not using Analysis Services, so what I’m looking for are real SQL User Defined Functions where I can just pass my data to the function and have it return the result within a stored procedure. 
       I’m aware that there may be some third-party statistical packages out there we could purchase, but that’s not what I’m looking for.   And I’m not able to do anything like call Excel’s analysis pack functions from within my stored procedure.   I’ve asked.   They won’t let me do that.   I just need to perform the calculation within the stored procedure and return the result.
       Any suggestions?  Is there a site where people are posting their SQL Server UDF’s to perform statistical functions?  Or are you perhaps aware of something like a free add-in for SQL that will add statistical functions to those available in SQL?   I just don’t want to have to write my own t-test function or my own chi-squared function if someone has already done it.
     Thanks for your help in advance!  Oh, and please let me know if this should have been posted in the TSQL forum instead.  I wasn't entirely sure.
    Karen Grube
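    For the SQL Server side of this, a pooled two-sample t statistic can be computed directly in a query without any UDF; a rough sketch follows (table and column names are invented, and the p-value still needs a critical-value lookup, since T-SQL in SQL Server 2005 has no built-in t-distribution CDF):
    -- Pooled two-sample t statistic comparing current vs. previous month
    -- (hypothetical table scorecard_metrics with columns period, metric_value)
    SELECT
        (cur.avg_val - prev.avg_val)
        / SQRT( ((cur.n - 1) * cur.var_val + (prev.n - 1) * prev.var_val)
                / (cur.n + prev.n - 2)
                * (1.0 / cur.n + 1.0 / prev.n) )  AS t_statistic,
        cur.n + prev.n - 2                        AS degrees_of_freedom
    FROM
        (SELECT AVG(1.0 * metric_value) AS avg_val,
                VAR(metric_value)       AS var_val,
                COUNT(*)                AS n
           FROM scorecard_metrics
          WHERE period = 'current')  AS cur
    CROSS JOIN
        (SELECT AVG(1.0 * metric_value) AS avg_val,
                VAR(metric_value)       AS var_val,
                COUNT(*)                AS n
           FROM scorecard_metrics
          WHERE period = 'previous') AS prev;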

    STATS_T_TEST_
    docs.oracle.com/cd/B19306_01/server.102/b14200/functions157.htm 
    STATS_T_TEST_ONE: A one-sample t-test
    STATS_T_TEST_PAIRED: A two-sample, paired t-test (also known as a crossed t-test)
    STATS_T_TEST_INDEP: A t-test of two independent groups with the same variance (pooled variances)
    STATS_T_TEST_INDEPU: A t-test of two independent groups with unequal variance (unpooled variances)
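    Those are Oracle built-ins rather than SQL Server functions, but for anyone on an Oracle database the independent-groups variant can be called straight from a query; the table and column names below are illustrative:
    -- t statistic and two-sided p-value for metric_value split by month_flag
    -- (month_flag must take exactly two values, e.g. 'current' / 'previous')
    SELECT STATS_T_TEST_INDEP(month_flag, metric_value, 'STATISTIC')     AS t_statistic,
           STATS_T_TEST_INDEP(month_flag, metric_value, 'TWO_SIDED_SIG') AS p_value
      FROM scorecard_metrics;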

  • Minimum data required

    Hello, is there a minimum amount of data that needs to be loaded into Demantra for an accurate forecast?
    Thx

    Hi dcro,
    Before answering about the volume of data needed for an accurate forecast, I would like to shed some light on how the Analytical engine works.
    The analytical engine goes through two phases, namely the learning stage and the forecasting stage.
    In the learning stage the engine identifies the repetitive demand patterns that need to be considered when calculating the figures in the forecasting stage. In the forecasting stage the engine applies the lessons learnt in the learning stage, along with the causal factors, to generate the forecast figures using one or more statistical models.
    While trying out the statistical models on the nodes to be forecast, the engine checks the minimum and maximum history periods defined for each model. If the amount of data for a node falls outside this range (min-max history period), that particular model is not used for forecasting.
    So, in short, a forecast might be generated even if you don't have sufficient history, but the accuracy might not be good enough. Likewise, even with a considerable amount of historical data, the accuracy might be poor if no sufficient demand pattern is seen in it.
    The exact volume of history needed for accurate forecast generation is therefore open to debate.
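    As a rough sanity check on your own data, you can count how many history periods each combination actually has and compare that with the engine's minimum history requirement (a sketch assuming a standard SALES_DATA layout; the column names are illustrative, so adjust to your data model):
    -- History periods per item/location combination
    SELECT item_id,
           location_id,
           COUNT(DISTINCT sales_date) AS history_periods
      FROM sales_data
     WHERE actual_quantity IS NOT NULL
     GROUP BY item_id, location_id
     ORDER BY history_periods;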
    Regards,
    Shekhar

  • What is BI? How do we implement it, and what is the cost to implement it?

    What is BI? How do we implement it, and what is the cost to implement it?
    Thanks,
    Sumit.

    Hi Sumit,
    Below is a description that should answer your query.
    Business Intelligence is a process for increasing the competitive advantage of a business through intelligent use of available data in decision making.
    The five key stages of Business Intelligence:
    1.     Data Sourcing
    2.     Data Analysis
    3.     Situation Awareness
    4.     Risk Assessment
    5.     Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
    Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:-
    •     probability theory - e.g. classification, clustering and Bayesian networks; 
    •     statistical methods - e.g. regression; 
    •     operations research - e.g. queuing and scheduling; 
    •     artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
    Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
    • Data Sources
    • Data Warehouses
    • Data Marts
    • Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
    The physical inventory was comprised of a review of DBMS cataloged tables as well as data sets used by business processes. These data sets had been identified through developing the enterprise-wide information needs model.
    The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
    We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
    This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against appropriate data quality business rules/standards which are required to perform the data, information and statistical modeling described previously.
    Inbound data that does not meet data warehouse data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for rejected and corrected records to occur in the operational system, if this is not possible then start dates for when the data can begin to be collected into the data warehouse may need to be adjusted in order to accommodate necessary source systems data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. Severe situations may occur in which new data entry collection transactions or entire systems will need to be either built or acquired.
    We have found that a powerful and flexible extraction, transformation and loading (ETL) process is to use Structured Query Language (SQL) views on host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
    • The extraction of data from operational data stores
    • The transformation of this data
    • The loading of the extracted data into your data warehouse or data mart
    When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
    Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
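    As a concrete illustration of the view-as-filter idea described above, a host-side view can exclude rows that violate the warehouse's data-quality rules before the ETL tool ever sees them (a sketch with invented table and column names):
    -- Data-quality filter view: pass through only rows with a complete hierarchy
    CREATE OR REPLACE VIEW v_clean_enrollments AS
    SELECT student_id,
           term_code,
           program_code,
           enrol_date
      FROM enrollments
     WHERE student_id   IS NOT NULL
       AND program_code IS NOT NULL
       AND term_code    IS NOT NULL;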
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
    • Customer "visible" relational tables
    • OLAP cubes
    • Pre-determined parameterized and non-parameterized reports
    • Ad-hoc reports
    • Spreadsheet applications with pre-populated work sheets and pivot tables
    • Data visualization graphics
    • Dashboard/scorecards for performance indicator applications
    Typically a business intelligence (BI) project may be scoped to deliver an agreed upon set of data marts in a project. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
    Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
    McMaster's Data Warehouse design
    [Diagram: McMaster's data warehouse design - DB2 and Oracle operational sources plus other sources flow through ETL into a staging area and Development, Test, and Production warehouses, then through further ETL into data marts (SAS data sets) accessed through BI tools; users access the data marts but not the warehouse layers directly.]
    Publication Services
    This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
    Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    • Access and view SAS® data sources
    • Access and view any other data source that is available from a SAS® server
    • Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
    SECURITY – AUTHENTICATION AND AUTHORIZATION
    The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
    • Data identification
    • Data classification
    • Value of the data
    • Identifying any data security vulnerabilities
    • Identifying data protection measures and associated costs
    • Selection of cost-effective security measures
    • Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
    security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (permissions). Authentication is also a prerequisite for authorization. At McMaster business intelligence (BI) services that are not public require a sign on with a single university-wide login identifier which is currently authenticated using the Microsoft Active Directory. After a successful authentication the SAS® university login identifier can be used by the SAS® Meta data server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
    At McMaster aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Through using SAS® Information Map Studio which is an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or SAS® Information Delivery Portal (ability to view only). Previously access to data residing in DB-2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent and can cross databases and they hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
    We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the following three-tier platform which will allow us to scale horizontally in the future:
    Our development environment consists of a server with 2 x Intel Xeon 2.8GHz Processors, 2GB of RAM and is running Windows 2000 – Service Pack 4.
    We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
    - 4 way 64 bit 1.5Ghz Itanium2 server
    - 16 Gb RAM
    - 2 x 73 Gb Drives (RAID 1) for the OS
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for Itanium
    2. Mid-Tier (Web) Server
    - 2 way 32 bit 3Ghz Xeon Server
    - 4 Gb RAM
    - 1 10/100/1Gb Cu Ethernet card
    - 1 Windows 2003 Enterprise Edition for x86
    3. SAN Drive Array (modular and can grow with the warehouse)
    - 6 x 72GB Drives (RAID 5), total 360GB for SAS® and Data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
    2. Mid-Tier Server
    - SAS® Web Report Studio
    - SAS® Information Delivery Portal
    - BEA Web Logic for future SAS® SPM Platform
    - Xythos Web File System (WFS)
    3. Client-Tier Server
    - SAS® Enterprise Guide
    - SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
    Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by Department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
    As well, we will begin the implementation of SAS® Strategic Performance Management Software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and also possible detractors and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • Survey: Considerations for Using DITA

    Survey: Considerations for Using DITA
    To help potential users decide whether to use DITA and how much effort doing so would involve, Text Structure Consulting, Inc. is conducting a survey to better understand the documentation projects for which DITA is appropriate. If you are using or considering DITA (or have done so), please take the time to share your experience by completing the survey. Partial and anonymous responses are welcome. You can send your response (embedded in an email message, in Microsoft Word, RTF, Adobe FrameMaker, or PDF) to [email protected]. If you prefer to answer by phone, you can write to the same address to schedule an interview.
    Please forward the survey to other documentation professionals you know who might like to participate. If you are involved in multiple relevant projects or would like to add to an earlier response, feel free to complete the survey multiple times. Answers from different people working on the same project will gladly be received.
    Since the survey is being distributed largely by forwarded email, statistically significant results are not expected. Nevertheless, survey responses should help existing users re-evaluate their projects and possibly learn about new tools, potential users evaluate the relevance of DITA, and consultants decide when to recommend it. If an interesting number of responses are received, the results will be summarized on www.txstruct.com and submitted for possible presentation at Balisage: The Markup Conference in early August, 2010 (see www.balisage.org). If at all possible, please respond by June 15 so that information can be prepared for the conference.
    The survey questions are available at http://www.txstruct.com/dita.survey/questions.htm and are repeated here:
    Personal Identification
    What is your name, affiliation, and name of your project?
    What is your personal role in your project (e.g., author, editor, manager)?
    Are you an end user, consultant, or tool vendor?
    Is this a new survey response or a replacement for an earlier response?
    Do you give permission for your responses to be quoted in a summary of the results of this survey? Do you want such quotations to be anonymous or attributed to you?
    Project Identification
    What industry does your documentation represent?
    What type of processing does your project involve (authoring, publishing, translating, indexing, analyzing, etc.)?
    How many documents or pages do you process annually? How much of this material is new and how much revised?
    How do you publish documents (paper, PDF, Web, CD, etc.)?
    How many document tool users do you have?
    How many people use your finished documents?
    Are your documents translated to multiple languages or localized in any other form? Are all documents localized or only some? How many languages do you support?
    Are your documents revised and republished?
    General software considerations
    What documentation software does your project use? Consider DITA-specific tools, XML tools, content management, word processing, desktop publishing, text editors, database management, project management, spreadsheets, and any other relevant tools.
    Do you have software that enforces that writers follow your organization's conventions?
    Do all groups within your organization use the same tools? All people in your group?
    Is authoring geographically distributed?
    How are editing tasks assigned to individual writers or editors? For example, is a writer responsible for a document or document component through multiple revisions, or is an available writer assigned whenever a change is needed? Do writers need specific expertise, such as knowledge of a documented product, to maintain particular pieces of content?
    Do you reuse all or parts of your documents? What size units do you reuse? In how many documents does a typical reusable component occur? What percentage of a typical document is comprised of reusable segments?
    DITA Considerations
    For what types of documents (user manuals, online help, test plans, requirement specifications, journal articles, technical books, technical reports, interdepartmental memos, etc.) does your project use DITA?
    Have you considered using DITA but decided not to?
    Have you never considered using DITA?
    Do you use DITA-inspired naming of element and attribute types when you do not use DITA itself?
    Do you use DITA maps?
    Do you specialize (modify) the DITA tagging scheme? How extensive are your changes? Which of the following do they involve:
    Rename existing element and attribute types
    Change the definitions of existing element and attribute types
    Add new element and attribute types.
    How many of the DITA element and attribute types do you use?
    What were the primary factors in deciding whether to use DITA (for example, eliminates need to define a tagging scheme, availability of DITA open toolkit, a DITA deliverable is part of the project, wanted to use DITA- based software, recommended by consultant, addresses usability, effort required)?
    Who are the primary decision makers on DITA issues (for example, customer, consultant, manager, tools group, writers)?
    Do you transform your documents to or from DITA for different types of processing? Explain.
    In what circumstances would you recommend that an organization consider DITA?
    What have you found surprising about DITA?
    How well have the effort, elapsed time, and cost of your solution corresponded to your expectations at the beginning of the project?
    How well have the results corresponded to your expectations?
    Given the experience you have gained, would you make the same DITA-related decisions now?
    What version(s) of DITA do you use? Are you planning to use any others? When?
    What changes to DITA or the Open Toolkit would you like to see?
    What changes to your processes do you plan?
    Other comments
    Please make any other relevant comments.

    Arnis,
      Of course. Results will also be posted at www.txstruct.com.
            --Lynne

  • My macbook pro 2012 keeps restarting saying there is a problem

    My MacBook Pro with Retina display (2012) keeps restarting, saying there is a problem. Then a report pops up.

    Two of those reports were panics with Sophos as the only item in the backtrace. You don't seem to know how to read panic logs, so I'll have to point out that this is the only confirmation you will ever get that a particular kernel extension caused the panic.
    And you can absolutely rule out a hardware problem or corrupted system based solely on the fact that a Sophos kext is in the installed kext list?
    You yourself wrote in that thread:
    Linc and many others are of the opinion that you should absolutely never install commercial anti-virus software, such as Sophos. ... Certainly, your experience would seem to indicate that there's some truth to that.
    Indeed I did. That was back when ML was new, and there had been a couple reports I saw complaining of problems with Sophos. It was too early to tell whether there was a compatibility problem, but it was easily possible. Nothing ever materialized, that I have seen, since then.
    Let me ask you this, though: have you ever actually tested any of them?
    Of course not [...] I tell people to remove it, and when they do, their problem is solved. That's all the testing I need to do.
    You've gotten enough feedback to take this from anecdotal to statistically significant? If so, those findings would be worth documentation. If not, you can't be 100% sure that there wasn't something wrong with the Sophos install, the system or the hardware.
    Now, don't get me wrong... I'm not trying to claim that Sophos won't ever cause problems and can't conflict with anything else. No software is ever perfect, and all kernel extensions run the potential risk of conflicting with other third-party kernel extensions or causing other problems. However, based on my testing as well as reports that I have received, I see no reason to condemn Sophos in such an absolute manner. It is one of the best-behaved anti-virus apps that uses kernel extensions, in my experience. If I were going to recommend anti-virus software for someone (which would depend on many factors, such as that user's sophistication, work environment, home environment, use patterns, etc), Sophos would be one I would recommend.
    Not once, to my knowledge, has it ever saved anyone from a malware infection
    You have documentation of that, do you?

  • Error in running the Analytical engine.

    Hi,
    When I run the Analytical engine, I get the following error message.
    - I tried to register the engine manually, but it still gives the same error.
    - I uninstalled Demantra and installed it again, and it still gives the same error.
    **Can someone help me with this?**
    F:\DemantraSpectrum7.3\Demand Planner\Analytical Engines>.\bin\EngineManager.exe 1 1
    11:27:47:061 [EngLogger.cpp,196] Logger General Level: message, UserLevel: message
    11:27:47:061 Working with Oracle RDBMS
    11:27:47:061 WARNING Failed to create engine monitor. HRESULT=0x80040154 - Class not registered
    11:27:47:061 Database connection string: Provider=MSDAORA;Data Source=orcl User rs_demantra
    11:27:52:530 --------------------------------------------------------------
    11:27:52:530 Oracle Demantra Windows Forecast Engine Version 7.3.0 (351)
    11:27:52:530 Engine Start Time 11:27:52 - 02/11/2010
    11:27:52:530 Running in Batch mode
    11:27:52:530 --------------------------------------------------------------
    11:27:55:921 Text Pro creating global causal factor Sqls...
    11:27:56:249 Text Pro creating Aggri Sqls...
    11:27:56:983 Text Pro creating Psum Sqls...
    11:27:57:937 Text Pro finished executing.
    11:28:15:718 CHECK_FORE_SERIES finished
    11:28:15:718 Executing the shell...
    11:28:16:546 Resetting mdp_matrix for new run
    11:28:16:609 Resetting mdp_matrix for new run finished
    11:28:16:656 Resetting previous forecast for inactive combinations
    11:28:21:062 Finished resetting sales_data previous forecast
    11:28:23:171 Truncating node_forecast and interm_results tables
    11:28:23:718 INSERT_UNITS Procedure Started
    11:28:28:281 INSERT_UNITS Procedure Finished
    11:28:28:296 ERROR Could not create callback COM object! HRESULT=0x80040154 - Class not registered
    11:28:28:296 Total Engine Time 0 Hours, 0 Minutes, 36 Seconds
    Thanks,
    Renu.

    Hi,
    Please see if (Simulation Engine Errors With HRESULT=0x8004154 Class Not Registered [ID 427819.1]) helps.
    Thanks,
    Hussein

  • General Discussion - Why to choose APO over R/3 for planning?

    Hi All,
    Why do we choose APO over R/3 for planning? What are the inputs or other factors that make APO a better planning tool?
    Are the factors that make APO a better tool not available for R/3 planning?
    Thanks in advance
    Changed subject suitably.
    Message was edited by:
            Somnath Manna

    I agree with what others have mentioned. Here are specific inputs on the newly designed processes in APO in comparison with R/3.
    1. Existing Processes in SAP R/3
    <b>Forecasting</b>
    Forecasting enables companies to run statistical forecasting models and to aggregate and disaggregate forecasts, and provides the capability to modify forecast numbers manually.
    Newly-Designed Processes
    Demand Planning DP (SAP APO)
    Demand Planning adds to the forecasting capabilities the ability to manage promotions, product introductions and phase-outs, as well as the impact of causal factors such as weather or macroeconomic data.
    Additional forecasting techniques are available, such as Linear Regression (causal factors), the Croston Method (slow-moving items) and Weighted Average.
    Consensus-based planning enables Marketing, Sales and Logistics to share and collaborate on their forecasts and create a final consensus forecast.
    Forecasting based on characteristics instead of SKU for configured products
    2. Existing Processes in SAP R/3
    <b>Sales & Operations Planning</b>
    SOP allows you to consolidate forecasts based on supply capabilities, through reporting and interactive planning capabilities.
    Newly-Designed Processes
    Supply Network Planning SNP (SAP APO)
    Demand and Supply Plans can be intelligently balanced leveraging cost or/and revenue based optimization techniques
    Trade-off Analysis to determine product-mix, inventory build ups for seasonal demand, capacity allocations for alternative manufacturing locations etc.
    Web based information sharing and collaborative interactive planning environment
    KPI Reporting
    3. Existing Processes in SAP R/3
    <b>Distribution Requirements Planning</b>
    Traditional DRP logic allows you to calculate requirements for a single-stage distribution network.
    Newly-Designed Processes
    Supply Network Planning SNP (SAP APO)
    The Supply Network Heuristic enables you to run DRP across the complete supply network, generating requirements (production and purchasing) at each node.
    The planner can easily navigate through the network to view and modify demands and supplies in their different categories.
    Mapping between Demand Planning Units and Stock Keeping Units can be done on quotations.
    4. Existing Processes in SAP R/3
    Not available
    Newly-Designed Processes
    <b>Deployment SNP (SAP APO)</b>
    The ability to deploy products through the supply network based on pull or push concepts. Different heuristics, such as fair-share logic, are supported.
    Transport Load Building allows you to combine different products into a single load, interactively as well as via a heuristic, to create full truck loads.
    5. Existing Processes in SAP R/3
    <b>Master Planning</b>
    Master Planning allows you to plan production at an aggregated level, including rough-cut capacity planning.
    Newly-Designed Processes
    Supply Network Planning SNP (SAP APO)
    Supply Network Planning allows you to consider distribution, production and transportation constraints concurrently.
    In addition to a generic SNP Optimizer based on Linear Programming and capacity-leveling algorithms, industry-specific optimization is available for
    Campaign Optimization
    Capable to Match
    6. Existing Processes in SAP R/3
    <b>Material Requirements Planning</b>
    MRP calculates material requirements based on BOM structures, lead times, lot sizes etc.
    Capacity Planning
    CRP in a second step looks at the capacity requirements
    Newly-Designed Processes
    Production Planning PP/DS  (SAP APO)
    Production Planning allows you to consider material and capacity constraints simultaneously to derive material requirements, capacity requirements and dynamic lead times. BOMs and lot sizing are also considered.
    Pegging relationships between customer orders, production orders and purchase orders are stored and enable bi-directional propagation if changes occur on the demand or supply side.
    7. Existing Processes in SAP R/3
    <b>Transportation Planning</b>
    Manual Planning of shipments, including predefined master routes.
    Newly-Designed Processes
    Transportation Planning TP/VS (SAP APO)
    Transportation Planning enables companies to determine the right shipment mode and route based on demand and supply situation, considering transportation constraints and costs.
    8. Existing Processes in SAP R/3
    <b>Shop Floor Scheduling and Assembly Processing</b>
    A planning table to view the schedule in a Gantt chart.
    Heuristics like forward/backward scheduling (infinite)
    Manual scheduling
    Newly-Designed Processes
    Detailed Scheduling PP/DS  (SAP APO)
    Finite and infinite scheduling, considering material, capacity, time and market constraints.
    Manual scheduling
    Scheduling optimization via optimization algorithms, like
    Genetic Algorithm
    Constraint based Propagation
    Kiran Kute

  • Events Wait - Latch Free

    Hello all,
    Please help me to detect this problem. It's happening in my production db.
    Every day at specific hours, users call to report that the core application's performance is slow.
    Then I query "SELECT * FROM V$SESSION_WAIT" to check the wait events.
    I can see many latch free events (library cache). It returns the same, or almost the same, results every time I run this query.
    I want to know who's holding the latch, so I do this:
    "SELECT l.sid, s.sql_hash_value, s.sql_address, s.osuser, s.username, s.machine, s.program, l.name
    FROM V$SESSION s, V$LATCHHOLDER l
    WHERE s.sid = l.sid;"
    Unfortunately, there's no useful information: sometimes it returns zero rows, sometimes just a few records, and I can't tell exactly who is holding the latch while the 'victim' sessions are still waiting on the latch free event.
    How can I get the SID of the session holding the latch that the sessions in V$SESSION_WAIT are waiting on?
    Once I have the SID of the holder, I can check which query is causing the slowdown from v$sqlarea.
    Please advise,
    This is my system information.
    OS : AIX 5.2
    Oracle : Oracle 9.2.0.8
    Thank you,
    BSS.
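    One approach worth trying while the waits are actually occurring: for 'latch free' waits, the waiting session's P1RAW holds the latch address, which can be matched to V$LATCHHOLDER.LADDR. A sketch (it is a point-in-time snapshot, so the holder may already be gone; run it repeatedly during the slow period):
    -- Match waiters to holders by latch address (snapshot only)
    SELECT w.sid      AS waiting_sid,
           h.sid      AS holding_sid,
           h.name     AS latch_name,
           s.sql_hash_value,
           s.username,
           s.program
      FROM v$session_wait w,
           v$latchholder  h,
           v$session      s
     WHERE w.event = 'latch free'
       AND h.laddr = w.p1raw
       AND s.sid   = h.sid;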

    Hello, Thanks for replying
    Because it contains a high number of wait events, and I think it's one of the causal factors of the performance degradation.
    When the application was smooth, I noticed that this view contains few wait event records.
    For additional information, I've also checked these things:
    IO server : there's no significant disk busy on the server
    redo generation : it takes about 10 minutes to generate redo (I think it's normal)
    Or if you have any idea how to identify the problem when the application runs slowly at specific hours, please feel free to mention it here... I'd be very glad if you do :)
    Thanks,
    BSS

  • Speck See Thru Case - Your Thoughts

    I know this question has come up many times about the Speck See Thru series of snap-on shells for the MBP. But after a response to a thread I read the other day in comparison to an answer I received at the Apple Store I'm even more confused.
    The earlier thread advised against the use of the Speck, suggesting that it would cause the MBP to overheat, or at the very least run hotter than usual, which would in turn lead to an early demise of the MBP.
    The advice from the Apple Store Genius was quite different though. He said that it's fine to use the Speck, and although the argument that the aluminum case was designed to dissipate heat was true, the bottom of the MacBook was insulated and thus does not offer much heat dissipation anyway. He added that the majority of heat is released through the vent that runs along the hinge, which is not obstructed by the Speck.
    So my point is this. If the MBP is running three to five degrees warmer with the Speck, but still within the lower limits of acceptable operating temperature, then what's the harm?
    And, is it safe to assume that since the Speck is sold in the Apple Store it's perfectly ok to use?

    Joseph, greetings:  There is no speculation on my part.  Heat is the bane of all computers.  Aluminum is an excellent conductor of heat.  Plastic is not and will tend to insulate heat rather than dissipate it.  These characteristics conform to the laws of Physics.
    Placing two MBPs side by side will prove very little because that does not meet the requirement of a statistically significant sample. What I am suggesting is analogous to a clinical trial, large enough so that the variation among the individuals is such that the item being studied for effectiveness becomes the sole differentiating factor being measured. (For MBPs those would be the manufacturing variables, and the sole factor is the Speck case.)
    The result would be two failure curves (each curve consisting of date on the X axis and number of MBPs on the Y axis). The curve for MBPs without the cases will be shifted to the right of the curve for MBPs with the cases. If I had said how large the respective failure rates and the differences would be, that would be speculation, but I did not.
    Even if the MBPs were all operating within Apple specifications, as a group the hotter ones would fail sooner.
    In a nutshell, the more heat a MBP, or any computer is subjected to, the shorter its life span.
    Perhaps we should get together and develop a cooling system that will enable our MBPs to outlast us.
    Ciao.

  • Difference between dp and snp?

    hi friends,
    What is the main difference between DP and SNP? Why do we use both in APO? Can you give me a scenario from any industry?
    regards
    suneel.

    Dear Suneel,
    <a href="http://help.sap.com/saphelp_scm50/helpdata/en/8f/9d6937089c2556e10000009b38f889/frameset.htm">Demand Planning</a> - Use APO Demand Planning (DP) to create a forecast of market demand for your company's products. This component allows you to take into consideration the many different causal factors that affect demand. The result of APO Demand Planning is the demand plan.
    Demand Planning is a powerful and flexible tool that supports the demand planning process in your company. User-specific planning layouts and interactive planning books enable you to integrate people from different departments, and even different companies, into the forecasting process. Using the DP library of statistical forecasting and advanced macro techniques you can create forecasts based on demand history as well as any number of causal factors, carry out predefined and self-defined tests on forecast models and forecast results, and adopt a consensus-based approach to reconcile the demand plans of different departments. To add marketing intelligence and make management adjustments, you use promotions and forecast overrides. The seamless integration with APO Supply Network Planning supports an efficient S&OP process.
    <a href="http://help.sap.com/saphelp_scm50/helpdata/en/1c/4d7a375f0dbc7fe10000009b38f8cf/frameset.htm">Supply Network Planning</a> - APO Supply Network Planning (SNP) integrates purchasing, manufacturing, distribution, and transportation so that comprehensive tactical planning and sourcing decisions can be simulated and implemented on the basis of a single, global consistent model. Supply Network Planning uses advanced optimization techniques, based on constraints and penalties, to plan product flow along the supply chain. The result is optimal purchasing, production, and distribution decisions; reduced order fulfillment times and inventory levels; and improved customer service.
    Starting from a demand plan, Supply Network Planning determines a permissible short- to medium-term plan for fulfilling the estimated sales volumes. This plan covers both the quantities that must be transported between two locations (for example, distribution center to customer or production plant to distribution center), and the quantities to be produced and procured. When making a recommendation, Supply Network Planning compares all logistical activities to the available capacity.
    The Deployment function determines how and when inventory should be deployed to distribution centers, customers, and vendor-managed inventory accounts. It produces optimized distribution plans based on constraints (such as transportation capacities) and business rules (such as minimum cost approach, or replenishment strategies).
    The Transport Load Builder (TLB) function maximizes transport capacities by optimizing load building.
    In addition, the seamless integration with APO Demand Planning supports an efficient S&OP process.
    Supply Network Planning is used to calculate quantities to be delivered to a location in order to match customer demand and maintain the desired service level. Supply Network Planning includes both heuristics and mathematical optimization methods to ensure that demand is covered and transportation, production, and warehousing resources are operating within the specified capacities.
    The interactive planning desktop makes it possible to visualize and interactively modify planning figures. You can present all key indicators graphically. The system processes any changes directly via liveCache.
    Regards,
    Naveen.

  • Preview low quality, how do I change the setting?

    I am working with Adobe Premiere Pro CS4 on Windows Vista. The preview is showing in low quality, which makes it difficult to judge the quality of the footage. Is there a setting for this? I have looked in the preferences but didn't see anything helpful.

    Well, Jim, there have been three users, IIRC, in the last year or so, for whom Auto performed better than Highest. For me, Auto and Highest produce no noticeable difference (on my laptop's GeForce 8800M GTX; I do not even know about my Quadro FX-4500, as it's always been on Highest), and I would have assumed that Highest would always be the best - yet for 3 posters, it was not. Three is not a very large number, but consider the number of copies of PrPro sold, then factor by the number of people who post here, and factor again by the number of posters with questions/problems with the quality of the display in the Program Monitor. All of a sudden, 3 might take on statistical significance. I do not recall the video card from any of those posts - it could have been the same, or more likely different. When I read the first response that Auto had worked better, I dismissed it. As time went by and at least two others reported the same, I made a mental note of it - in some cases, it appears that Highest is NOT the best, hence my suggestion to try each and judge. Why would Auto do better than Highest? I have no clue, but I would think it might have something to do with the video card, its driver, individual settings in the driver's console, or maybe something like Hardware Acceleration in the OS.
    So you can question all that you want. It could be that the posters lied, or became confused over which setting worked best for them, yet with 3, it seems a bit of a stretch.
    To cover bases, I mentioned trying each to test, so in this case, the OP should be able to make that determination, with very little time lost - click, click, test.
    If it's meaningful to you, perhaps spend the weekend poring over the posts, say back 18 mos., and see exactly what the circumstances were that yielded better results on Auto.
    Happy reading,
    Hunt
    PS - I know that the threads were here, as PrE does not have a Program Monitor Quality setting - only Magnification

  • What is demand planning and shop floor planning, explain briefly?

    I would like to know and understand demand planning and shop floor planning. Please explain briefly.
    Thank you,
    york

    Hi Les,
    <b>Demand Planning</b> is
    Application component in the Advanced Planner and Optimizer (APO) that allows you to forecast market demand for your company's products and produce a demand plan.
    APO Demand Planning has a data mart in which you store and maintain the information necessary for the demand planning process in your company. Using this information along with user-defined planning layouts and interactive planning books, you can integrate people from different departments into the forecasting process. The APO DP library of statistical forecasting and advanced macro techniques allows you to create forecasts based on demand history as well as any number of causal factors, and use a consensus-based approach to consolidate the results. Marketing intelligence and management adjustments can be added by using forecast overrides and promotions. The seamless integration with APO Supply Network Planning supports an efficient S&OP process.
    regards
    kp
