Time & cost to implement Oracle Contracts

Hey, everyone,
I am trying to figure out how long it takes to implement Oracle Contracts with Applications 11.0.3 as the back office. How long would it take to get it done? Do we have to use an Oracle consultant or another outside source, or can we do it in house?
Thanks for all the information.
Oliver

Read SAP note 830576... and take a basic BASIS course soon.

Similar Messages

  • How to accomplish Revenue and Cost forecasting in Oracle?

We are implementing Oracle Release 11.5.10 and have a specific business requirement for revenue and cost forecasting.
We have Projects integrated with OM using custom extensions (we are not using Project Manufacturing). Revenue is based on scheduled shipment dates and cost is based on PO promise dates. Both revenue and cost are forecasted for two quarters and are rolled up at the revenue category level. When a scheduled shipment date changes, the revenue must automatically be forecast in the period the new scheduled ship date falls into. We understand that the out-of-the-box functionality does not support roll-up of forecasts at the revenue category level, which leads to budgets, commitments and actuals being displayed on different lines. Has anyone done this in any of their client engagements? Can anyone explain how revenue and cost forecasting is done at a high-tech network equipment manufacturer that uses Projects?
    Any help will be highly appreciated.
    Thanks in advance
    Amit

    Hi
My understanding is that your entire solution is based on custom processes.
You might have a program that reads the scheduled shipment date from OM and generates budget lines for revenue.
You might have a program that reads the PO promise date from Purchasing and generates budget lines for costs.
You could integrate those sources and generate a new version of the forecast every time you need to update your plans in Oracle Projects (a rough sketch of the first step follows below).
    Dina
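As a rough sketch of the revenue side (the staging table xxpa_rev_forecast_stg and the six-month window are hypothetical; the column names are from the standard OM order lines table oe_order_lines_all):
-- Sketch: stage one forecast line per project-sourced order line,
-- bucketed into the period its scheduled ship date falls into.
-- A custom program would then turn the staged lines into forecast
-- budget lines in Oracle Projects.
INSERT INTO xxpa_rev_forecast_stg
       (project_id, task_id, forecast_period, forecast_amount)
SELECT ol.project_id,
       ol.task_id,
       TO_CHAR(ol.schedule_ship_date, 'MON-YY')    AS forecast_period,
       ol.ordered_quantity * ol.unit_selling_price AS forecast_amount
FROM   oe_order_lines_all ol
WHERE  ol.project_id IS NOT NULL
AND    ol.schedule_ship_date < ADD_MONTHS(TRUNC(SYSDATE), 6);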

  • What is BI? How do we implement it and what is the cost to implement?

What is BI? How do we implement it, and what is the cost to implement?
    Thanks,
    Sumit.

    Hi Sumit,
Below is a description that addresses your query.
Business Intelligence is a process for increasing the competitive advantage of a business by intelligent use of available data in decision making.
    The five key stages of Business Intelligence:
1. Data Sourcing
2. Data Analysis
3. Situation Awareness
4. Risk Assessment
5. Decision Support
    Data sourcing
    Business Intelligence is about extracting information from multiple sources of data. The data might be: text documents - e.g. memos or reports or email messages; photographs and images; sounds; formatted tables; web pages and URL lists. The key to data sourcing is to obtain the information in electronic form. So typical sources of data might include: scanners; digital cameras; database queries; web searches; computer file access; etcetera.
    Data analysis
Business Intelligence is about synthesizing useful knowledge from collections of data. It is about estimating current trends, integrating and summarising disparate information, validating models of understanding, and predicting missing information or future trends. This process of data analysis is also called data mining or knowledge discovery. Typical analysis tools might use:
• probability theory - e.g. classification, clustering and Bayesian networks;
• statistical methods - e.g. regression;
• operations research - e.g. queuing and scheduling;
• artificial intelligence - e.g. neural networks and fuzzy logic.
    Situation awareness
    Business Intelligence is about filtering out irrelevant information, and setting the remaining information in the context of the business and its environment. The user needs the key items of information relevant to his or her needs, and summaries that are syntheses of all the relevant data (market forces, government policy etc.).  Situation awareness is the grasp of  the context in which to understand and make decisions.  Algorithms for situation assessment provide such syntheses automatically.
    Risk assessment
    Business Intelligence is about discovering what plausible actions might be taken, or decisions made, at different times. It is about helping you weigh up the current and future risk, cost or benefit of taking one action over another, or making one decision versus another. It is about inferring and summarising your best options or choices.
    Decision support
Business Intelligence is about using information wisely. It aims to warn you of important events, such as takeovers, market changes, and poor staff performance, so that you can take preventative steps. It seeks to help you analyse and make better business decisions, to improve sales or customer satisfaction or staff morale. It presents the information you need, when you need it.
    This section describes how we are using extraction, transformation and loading (ETL) processes and a data warehouse architecture to build our enterprise-wide data warehouse in incremental project steps. Before an enterprise-wide data warehouse could be delivered, an integrated architecture and a companion implementation methodology needed to be adopted. A productive and flexible tool set was also required to support ETL processes and the data warehouse architecture in a production service environment. The resulting data warehouse architecture has the following four principal components:
• Data Sources
• Data Warehouses
• Data Marts
• Publication Services
    ETL processing occurs between data sources and the data warehouse, between the data warehouse and data marts and may also be used within the data warehouse and data marts.
    Data Sources
    The university has a multitude of data sources residing in different Data Base Management System (DBMS) tables and non-DBMS data sets. To ensure that all relevant data source candidates were identified, a physical inventory and logical inventory was conducted. The compilation of these inventories ensures that we have an enterprise-wide view of the university data resource.
The physical inventory consisted of a review of DBMS-cataloged tables as well as data sets used by business processes. These data sets had been identified while developing the enterprise-wide information needs model.
The logical inventory was constructed from "brain-storming" sessions which focused on common key business terms which must be referenced when articulating the institution's vision and mission (strategic direction, goals, strategies, objectives and activities). Once the primary terms were identified, they were organized into directories such as "Project", "Location", "Academic Entity", "University Person", "Budget Envelope" etc. Relationships were identified by recognizing "natural linkages" within and among directories, and the "drill-downs" and "roll-ups" that were required to support "report by" and "report on" information hierarchies. This exercise allowed the directories to be sub-divided into hierarchies of business terms which were useful for presentation and validation purposes.
We called this important deliverable the "Conceptual Data Model" (CDM) and it was used as the consolidated conceptual (paper) view of all of the University's diverse data sources. The CDM was then subjected to a university-wide consultative process to solicit feedback and communicate to the university community that this model would be adopted by the Business Intelligence (BI) project as a governance model in managing the incremental development of its enterprise-wide data warehousing project.
    Data Warehouse
This component of our data warehouse architecture (DWA) is used to supply quality data to the many different data marts in a flexible, consistent and cohesive manner. It is a 'landing zone' for inbound data sources and an organizational and re-structuring area for implementing data, information and statistical modeling. This is where business rules which measure and enforce data quality standards for data collection in the source systems are tested and evaluated against the data quality business rules/standards required to perform the data, information and statistical modeling described previously.
Inbound data that does not meet the data warehouse's data quality business rules is not loaded into the data warehouse (for example, if a hierarchy is incomplete). While it is desirable for records to be rejected and corrected in the operational system, if this is not possible then the start dates for when data can begin to be collected into the data warehouse may need to be adjusted to accommodate the necessary source-system data entry "re-work". Existing systems and procedures may need modification in order to permanently accommodate required data warehouse data quality measures. In severe cases, new data entry collection transactions or entire systems will need to be either built or acquired.
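For example (the staging and dimension table names here are purely illustrative), an incomplete-hierarchy rule can be applied as a simple set-based check before the load:
-- Sketch: divert inbound rows whose organization code does not roll
-- up to a parent in the hierarchy dimension; only rows that pass the
-- data quality rule are loaded into the warehouse.
INSERT INTO dw_load_rejects
SELECT s.*
FROM   stg_inbound s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dim_org_hierarchy h
                   WHERE  h.org_code        = s.org_code
                   AND    h.parent_org_code IS NOT NULL);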
We have found that a powerful and flexible extraction, transformation and loading (ETL) approach is to use Structured Query Language (SQL) views on the host database management systems (DBMS) in conjunction with a good ETL tool such as SAS® ETL Studio. This tool enables you to perform the following tasks:
• The extraction of data from operational data stores
• The transformation of this data
• The loading of the extracted data into your data warehouse or data mart
When the data source is a "non-DBMS" data set it may be advantageous to pre-convert this into a SAS® data set to standardize data warehouse metadata definitions. Then it may be captured by SAS® ETL Studio and included in the data warehouse along with any DBMS source tables using consistent metadata terms. SAS® data sets, non-SAS® data sets, and any DBMS table will provide the SAS® ETL tool with all of the necessary metadata required to facilitate productive extraction, transformation and loading (ETL) work.
Having the ability to utilize standard structured query language (SQL) views on host DBMS systems and within SAS® is a great advantage for ETL processing. The views can serve as data quality filters without having to write any procedural code. The option exists to "materialize" these views on the host systems or leave them "un-materialized" on the hosts and "materialize" them on the target data structure defined in the SAS® ETL process. These choices may be applied differentially depending upon whether you are working with "current only" or "time series" data. Different deployment configurations may be chosen based upon performance issues or cost considerations. The flexibility of choosing different deployment options based upon these factors is a considerable advantage.
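A minimal sketch of such a filtering view (the source table, columns and rules are illustrative) as it might be defined on the host DBMS:
-- Un-materialized view: the ETL tool sees only rows that pass the
-- data quality rules, with no procedural code required.
CREATE OR REPLACE VIEW v_enrolment_clean AS
SELECT e.student_id, e.term_code, e.program_code, e.units
FROM   enrolment e
WHERE  e.student_id IS NOT NULL
AND    e.units > 0
AND    e.term_code IN (SELECT term_code FROM valid_terms);
To materialize the same filter on the host instead, the definition can be recast as a materialized view with a refresh policy suited to current-only or time-series data.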
    Data Marts
    This component of the data warehouse architecture may manifest as the following:
• Customer "visible" relational tables
• OLAP cubes
• Pre-determined parameterized and non-parameterized reports
• Ad-hoc reports
• Spreadsheet applications with pre-populated work sheets and pivot tables
• Data visualization graphics
• Dashboards/scorecards for performance indicator applications
Typically a business intelligence (BI) project may be scoped to deliver an agreed-upon set of data marts. Once these have been well specified, the conceptual data model (CDM) is used to determine what parts need to be built or used as a reference to conform the inbound data from any new project. After the detailed data mart specifications (DDMS) have been verified and the conceptual data model (CDM) components determined, a source and target logical data model (LDM) can be designed to integrate the detailed data mart specification (DDMS) and conceptual data model (CDM). An extraction, transformation and loading (ETL) process can then be set up and scheduled to populate the logical data models (LDM) from the required data sources and assist with any time series and data audit change control requirements.
Over time, as more and more data marts and logical data models (LDMs) are built, the conceptual data model (CDM) becomes more complete. One very important advantage of this implementation methodology is that the order of the data marts and logical data models can be entirely driven by project priority, project budget allocation and time-to-completion constraints/requirements. This data warehouse architecture implementation methodology does not need to dictate project priorities or project scope as long as the conceptual data model (CDM) exercise has been successfully completed before the first project request is initiated.
McMaster's Data Warehouse design
[Diagram: development, test and production warehouses fed by ETL from DB2, Oracle and other operational sources via a staging area of SAS data sets; further ETL populates the data marts, which users reach through DB2/Oracle BI tools. Users do not access the warehouses directly.]
    Publication Services
This is the visible presentation environment that business intelligence (BI) customers will use to interact with the published data mart deliverables. The SAS® Information Delivery Portal will be utilized as a web delivery channel to deliver a "one-stop information shopping" solution. This software solution provides an interface to access enterprise data, applications and information. It is built on top of the SAS Business Intelligence Architecture, provides a single point of entry and provides a Portal API for application development. All of our canned reports generated through SAS® Enterprise Guide, along with a web-based query and reporting tool (SAS® Web Report Studio), will be accessed through this publication channel.
Using the portal's personalization features we have customized it for a McMaster "look and feel". Information is organized using pages and portlets and our stakeholders will have access to public pages along with private portlets based on role authorization rules. Stakeholders will also be able to access SAS® data sets from within Microsoft Word and Microsoft Excel using the SAS® Add-In for Microsoft Office. This tool will enable our stakeholders to execute stored processes (a SAS® program which is hosted on a server) and embed the results in their documents and spreadsheets. Within Excel, the SAS® Add-In can:
    u2022 Access and view SAS® data sources
    u2022 Access and view any other data source that is available from a SAS® server
    u2022 Analyze SAS® or Excel data using analytic tasks
    The SAS® Add-In for Microsoft Office will not be accessed through the SAS® Information Delivery Portal as this is a client component which will be installed on individual personal computers by members of our Client Services group. Future stages of the project will include interactive reports (drill-down through OLAP cubes) as well as balanced scorecards to measure performance indicators (through SAS® Strategic Performance Management software). This, along with event notification messages, will all be delivered through the SAS® Information Delivery Portal.
    Publication is also channeled according to audience with appropriate security and privacy rules.
SECURITY - AUTHENTICATION AND AUTHORIZATION
The business value derived from using the SAS® Value Chain Analytics includes an authoritative and secure environment for data management and reporting. A data warehouse may be categorized as a "collection of integrated databases designed to support managerial decision making and problem solving functions" and "contains both highly detailed and summarized historical data relating to various categories, subjects, or areas". Implementation of the research funding data mart at McMaster has meant that our stakeholders now have electronic access to data which previously was not widely disseminated. Stakeholders are now able to gain timely access to this data in the form that best matches their current information needs. Security requirements are being addressed taking into consideration the following:
• Data identification
• Data classification
• Value of the data
• Identifying any data security vulnerabilities
• Identifying data protection measures and associated costs
• Selection of cost-effective security measures
• Evaluation of effectiveness of security measures
    At McMaster access to data involves both authentication and authorization. Authentication may be defined as the process of verifying the identity of a person or process within the guidelines of a specific
security policy (who you are). Authorization is the process of determining which permissions the user has for which resources (what you may do). Authentication is a prerequisite for authorization. At McMaster, business intelligence (BI) services that are not public require a sign-on with a single university-wide login identifier, which is currently authenticated using Microsoft Active Directory. After a successful authentication, the university login identifier can be used by the SAS® Metadata Server. No passwords are ever stored in SAS®. Future plans at the university call for this authentication to be done using Kerberos.
At McMaster, aggregate information will be open to all. Granular security is being implemented as required through a combination of SAS® Information Maps and stored processes. SAS® Information Maps consist of metadata that describe a data warehouse in business terms. Using SAS® Information Map Studio, an application used to create, edit and manage SAS® Information Maps, we will determine what data our stakeholders will be accessing through either SAS® Web Report Studio (ability to create reports) or the SAS® Information Delivery Portal (ability to view only). Previously, access to data residing in DB2 tables was granted by creating views using structured query language (SQL). Information maps are much more powerful, as they capture metadata about allowable usage and query generation rules. They also describe what can be done, are database independent, can cross databases, and hide the physical structure of the data from the business user. Since query code is generated in the background, the business user does not need to know structured query language (SQL). As well as using Information Maps, we will also be using SAS® stored processes to implement role-based granular security.
    At the university some business intelligence (BI) services are targeted for particular roles such as researchers. The primary investigator role of a research project needs access to current and past research funding data at both the summary and detail levels for their research project. A SAS® stored process (a SAS® program which is hosted on a server) is used to determine the employee number of the login by checking a common university directory and then filtering the research data mart to selectively provide only the data that is relevant for the researcher who has signed onto the decision support portal.
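The filtering step itself reduces to a parameterized query of roughly this shape (the mart, directory and column names are illustrative):
-- Sketch: restrict the research funding mart to projects on which
-- the signed-on portal user is the primary investigator.
SELECT f.project_id, f.fiscal_year, f.award_amount
FROM   research_funding_mart f
WHERE  f.primary_investigator_id =
       (SELECT d.employee_number
        FROM   university_directory d
        WHERE  d.login_id = :signed_on_user);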
    Other business intelligence (BI) services are targeted for particular roles such as Vice-Presidents, Deans, Chairs, Directors, Managers and their Staff. SAS® stored processes are used as described above with the exception that they filter data on the basis of positions and organizational affiliations. When individuals change jobs or new appointments occur the authorized business intelligence (BI) data will always be correctly presented.
    As the SAS® stored process can be executed from many environments (for example, SAS® Web Report Studio, SAS® Add-In for Microsoft Office, SAS® Enterprise Guide) authorization rules are consistently applied across all environments on a timely basis. There is also potential in the future to automatically customize web portals and event notifications based upon the particular role of the person who has signed onto the SAS® Information Delivery Portal.
    ARCHITECTURE (PRODUCTION ENVIRONMENT)
We are currently in the planning stages for building a scalable, sustainable infrastructure which will support a scaled deployment of the SAS® Value Chain Analytics. We are considering implementing the three-tier platform below, which will allow us to scale horizontally in the future.
Our development environment consists of a server with 2 x Intel Xeon 2.8GHz processors and 2GB of RAM, running Windows 2000 Service Pack 4.
We are considering the following for the scaled roll-out of our production environment.
    A. Hardware
    1. Server 1 - SAS® Data Server
- 4-way 64-bit 1.5GHz Itanium2 server
- 16 GB RAM
- 2 x 73 GB drives (RAID 1) for the OS
- 1 x 10/100/1Gb Cu Ethernet card
- 1 x Windows 2003 Enterprise Edition for Itanium
2. Mid-Tier (Web) Server
- 2-way 32-bit 3GHz Xeon server
- 4 GB RAM
- 1 x 10/100/1Gb Cu Ethernet card
- 1 x Windows 2003 Enterprise Edition for x86
3. SAN Drive Array (modular, can grow with the warehouse)
- 6 x 72 GB drives (RAID 5), 360 GB total, for SAS® and data
    B. Software
    1. Server 1 - SAS® Data Server
    - SAS® 9.1.3
    - SAS® Metadata Server
    - SAS® WorkSpace Server
    - SAS® Stored Process Server
    - Platform JobScheduler
2. Mid-Tier Server
- SAS® Web Report Studio
- SAS® Information Delivery Portal
- BEA WebLogic for the future SAS® SPM Platform
- Xythos Web File System (WFS)
3. Client Tier
- SAS® Enterprise Guide
- SAS® Add-In for Microsoft Office
    REPORTING
    We have created a number of parameterized stored processes using SAS® Enterprise Guide, which our stakeholders will access as both static (HTML as well as PDF documents) and interactive reports (drill-down) through SAS® Web Report Studio and the SAS® Add-In for Microsoft Office. All canned reports along with SAS® Web Report Studio will be accessed through the SAS® Information Delivery Portal.
    NEXT STEPS
Next steps of the project include development of a financial data mart along with appropriate data quality standards, monthly frozen snapshots and implementation of university-wide financial reporting standards. This will facilitate electronic access to integrated financial information necessary for the development and maintenance of an integrated, multi-year financial planning framework. Canned reports will include monthly web-based financial statements with drill-down capability, along with budget templates automatically populated with data values and saved in different workbooks for different subgroups (for example, by department). The latter will be accomplished using Microsoft Dynamic Data Exchange (DDE).
As well, we will begin the implementation of SAS® Strategic Performance Management software to support the performance measurement and monitoring initiative that is a fundamental component of McMaster's strategic plan. This tool will assist in critically assessing and identifying meaningful and statistically relevant measures and indicators. This software can perform causal analyses among various measures within and across areas, providing useful information on inter-relationships between factors and measures. As well as demonstrating how decisions in one area affect other areas, these cause-and-effect analyses can reveal both good performance drivers and possible detractors, and enable 'evidence-based' decision-making. Finally, the tool provides a balanced scorecard reporting format, designed to identify statistically significant trends and results, that can be tailored to the specific goals, objectives and measures of the various operational areas of the University.
    LESSONS LEARNED
    Lessons learned include the importance of taking a consultative approach not only in assessing information needs, but also in building data hierarchies, understanding subject matter, and in prioritizing tasks to best support decision making and inform senior management. We found that a combination of training and mentoring (knowledge transfer) helped us accelerate learning the new tools. It was very important to ensure that time and resources were committed to complete the necessary planning and data quality initiatives prior to initiating the first project. When developing a project plan, it is important to

  • Oracle Contracts vs Oracle Lease Management

Can someone please explain the differences between Oracle Contracts and Oracle Lease Management? Or are they the same thing?
Also, please let me know the appropriate forum for Oracle Lease Management on Metalink.
    thanks,

Here is what I can say about this topic.
Oracle Core Contracts is broadly divided into two categories:
1) Sales Contracts
2) Service Contracts
There is some base setup which we do in Core Contracts that is required for the implementation of Sales and Service Contracts, such as auto numbering.
Oracle has now added a new category, which is Oracle Lease Management.
Since this is also a contract, the base setups are carried out in Core Contracts.
OLM is built mainly on the financial modules (AP, AR, GL, Collections) and is also integrated with some other modules such as FA and some of the CRM modules (Fulfillment, CRM Foundation and the pre-sales module).
For the service part there is a small module called Lease Center.
There is a lot in OLM, such as the streams, which are generated from a 3rd-party system.
Vendor funding, termination, master lease agreements, vendor program agreements, formulas, etc. are all present in OLM.
It's a HUGE world.
Actually, OLM is altogether different. It has its own flows, called Key Business Flows (KBFs).
So in my words I would say that Oracle Contracts supports Lease Management.
After all, a lease is a kind of contract :-)
Hope this clarifies things.
If you want any more clarification, please let me know.
    Cheers,
    Vicky

  • Time/Cost

This is the version I am using:
    (Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, OLAP, Data Mining
    and Real Application Testing options);
I have to insert into table A from a flat file which has 3.6 million rows, by creating an external table, joining the external table with table B, and then inserting the returned rows into table A (approximately 3.6 million rows).
This process took approximately 1 hour.
Is this an acceptable duration, or should Oracle take less time than this?
If this question looks stupid or the data is insufficient, please ask me for more info or advise me on how to approach it (I am a dummy at Oracle).
The cost for the join is 1300. I am using an index hint (there is no change in cost before/after the hint).
    Thanks in advance

Let's answer it this way:
If you have an old single-processor PC and a single slow disk, this time is excellent.
If, however, you have a modern multi-processor machine and SSDs, something is wrong.
If you are asking for hints on how to improve the insert operation:
1. Use the /*+ APPEND */ hint (see the sketch below).
2. Provided you have more than one processor, split your flat file, spread it over different disks and start multiple processes to load the data.
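For example (table and column names assumed from the description above), the direct-path variant would look like:
-- Direct-path insert: /*+ APPEND */ loads above the high-water mark,
-- bypassing the buffer cache and reducing undo generation.
-- For parallel DML, first run: ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND */ INTO table_a
SELECT ext.col1, ext.col2, b.col3
FROM   ext_table ext
JOIN   table_b   b ON b.key_col = ext.key_col;
COMMIT;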

  • Error Implementing Oracle Projects Resource Management

I'm implementing Oracle Resource Management and I've followed all the requirements/steps in the Implementation Guide. I noticed that after I changed the "Include in Utilization" flag to Yes in the Job Definition, the request showed the following error in the log:
    Current NLS_LANG and NLS_NUMERIC_CHARACTERS Environment Variables are :
    American_America.WE8ISO8859P1
    Enter Password:
    REP-1401: 'beforereport': Fatal PL/SQL error occurred.
    ORA-06503: PL/SQL: Function returned without value
    REP-0069: Internal error
    REP-57054: In-process job terminated:Terminated with error:
    REP-1401: 'beforereport': Fatal PL/SQL error occurred.
    ORA-06503: PL/SQL: Function returned without value
    Thanks in Advance.

    Hi Dean and Dina,
    Thanks for your answers. Following is the missing info:
    - Yes, we are using the AIA PIP
    - We are using 'Workplan enabled' projects for integration with a 'Partially Shared Structure'
    - In integration options, we are only integrating 'Workplan' with a 'Working Version'
    - We are trying to send only progress information from P6 to Oracle Projects, not the time card info
Initially, when trying to send progress from P6 to Oracle Projects, we were getting the error 'You cannot update a published version of the workplan'.
However, after checking the following flags, we stopped getting any errors in AIA/middleware but are still not getting any data into Oracle Projects:
    Projects --> Setup --> Integration settings:
    Integrate - Workplan - Send Task - Current Working Version
    Integrate - Workplan - Receive Task
    Workplan --> setup:
    Allow Physical Percent Complete Collection
    Allow Physical Percent Complete Overrides
    Thanks for your help.

  • Revenue Recognition :- Type A : Time Based Revenue Recognition in Contract

    Hello,
    I am facing a peculiar problem while creating the contract relevant for Revenue Recognition Type A : Time Based Revenue Recognition in Contract.
The item category settings I am using are as follows:
1. Rev. recognition: A: Time-related revenue recognition
2. Acc. period start: B: Proposal based on billing plan start date
3. Revenue Dist.: A: Total Val.: Linear and Correction Value Linearly Distributed
    I have already done the necessary activation of Revenue Recognition.
I have done the account determination for the following:
1. Revenue account determination (VKOA)
2. Deferred revenue (VKOA)
3. Unbilled receivables (OVUR)
Still, when I try to save the contract, the document goes into incompletion, asking for "GL account for item 10".
Whereas in the account determination analysis the system is already showing:
1. Revenue account
2. Deferred revenue account
    Kindly guide further.
    Thanks & Regards,
    Amrish Purohit

    Hi
You can overcome this issue of the G/L account missing while saving the document by assigning the account key of that condition type in:
IMG -> Sales and Distribution -> Account Assignment/Costing -> Revenue Account Determination -> Define and Assign Account Keys
Here you have to assign the account keys you are using for that condition type in the pricing procedure against the same condition type in the accruals field.
This will solve your problem.

  • How much does it cost to implement PeopleSoft HRMS?

    Hi All,
How much does it cost to implement PeopleSoft HRMS (including licenses, etc.)?
    Thanks

    It depends.
You may refer to the link below to get an idea of the retail price list:
http://www.oracle.com/us/corporate/pricing/peoplesoft-price-list-070612.pdf
It would help if your organization is a university or an international organization; discounts of up to 80% can be obtained.
Bear in mind to strike a deal before May of any calendar year, as that is their year-end closing.

  • Query in timesten taking more time than query in oracle database

    Hi,
Can anyone please explain why a query in TimesTen is taking more time than the same query in the Oracle database?
Below are my settings and what I have done, step by step.
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
CREATE TABLE student (
id NUMBER(9) PRIMARY KEY,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USE TO
POPULATE THE STUDENT TABLE (TOTAL 2,599,999 ROWS):
declare
  firstname varchar2(12);
  lastname  varchar2(12);
begin
  for cntr in 1 .. 2599999 loop
    firstname := (cntr + 8) || 'f';
    lastname  := (cntr + 2) || 'l';
    -- print progress every 10,000 rows
    if cntr like '%9999' then
      dbms_output.put_line(cntr);
    end if;
    insert into student values (cntr, firstname, lastname);
  end loop;
  commit;
end;
/
3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
6. NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM QUERIES.
I SET THE TIMING:
command> TIMING 1;
Consider this query now:
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
Another query:
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
7. NOW I PERFORM SIMILAR QUERIES FROM SQL*PLUS:
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time than the query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
Hardware: SunOS 5.10; 24 x 1.8GHz processors (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
The query (taken from Mondrian) that I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of columns   ORACLE   TimesTen
1                1.05     0.94
2                1.07     1.47
3                2.04     1.8
4                2.06     2.08
5                2.09     2.4
6                3.01     2.67
7                4.02     3.06
8                4.03     3.37
9                4.04     3.62
10               4.06     4.02
11               4.08     4.31
12               4.09     4.61
13               5.01     4.76
14               5.02     5.06
15               5.04     5.25
16               5.05     5.48
17               5.08     5.84
18               6        6.21
19               6.02     6.34
20               6.04     6.75

  • Java Heap Error when using Stateless Session Timer Bean deployed in Oracle

    Hi,
I am getting the following Java heap error when using a stateless session timer bean deployed in Oracle 10g AS R3 (Oracle Containers for J2EE 10g (10.1.3.0.0), build 060119.1546.05277):
    06/08/02 14:58:43 javax.ejb.EJBException: java.lang.OutOfMemoryError: Java heap space
    06/08/02 14:58:43 at com.evermind.server.ejb.EJBUtils.getLocalUserException(EJBUtils.java:304)
    06/08/02 14:58:43 at com.evermind.server.ejb.interceptor.system.AbstractTxInterceptor.convertAndHandleMethodException(AbstractTxInterceptor.java:67)
    06/08/02 14:58:43 at com.evermind.server.ejb.interceptor.system.TxNotSupportedInterceptor.invoke(TxNotSupportedInterceptor.java:45)
    06/08/02 14:58:43 at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:69)
    06/08/02 14:58:43 at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:86)
    06/08/02 14:58:43 at com.evermind.server.ejb.StatelessSessionEJBHome.invokeTimer(StatelessSessionEJBHome.java:71)
    06/08/02 14:58:43 at com.evermind.server.ejb.EJBContainer.invokeTimer(EJBContainer.java:1624)
    06/08/02 14:58:43 at oracle.ias.container.scheduler.TimerTask.runBeanTimer(TimerTask.java:92)
    06/08/02 14:58:43 at oracle.ias.container.scheduler.TimerTask.run(TimerTask.java:184)
    06/08/02 14:58:43 at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:819)
    06/08/02 14:58:43 at java.lang.Thread.run(Thread.java:595)
    06/08/02 14:58:43 Caused by: java.lang.OutOfMemoryError: Java heap space
I have tried using the -Xms / -Xmx options (up to 1 GB).
The exception trace appears on the console later as the memory size is increased, but after some time it starts getting displayed.
Even though this exception is displayed on the console, the timer bean continues to execute for some time before it finally crashes!
If anyone has encountered such a problem, I would appreciate it if you could share the solution.
    Regards, Vidyadhar

Hi guys, I have the same problem. I have an application EAR file with two modules (EJB and WAR, starting in this order). The application can schedule a process via an EJB timer. In this case, when restarting the server, I receive the error above. If I change the module start order to WAR then EJB, the server starts correctly, but the application scheduler fails (the persistence is not working) with this error:
    07/10/09 10:30:54 FINISSIMO: TimerTask.runBeanTimer java.lang.NullPointerException; nested exception is: java.lang.NullPointerExceptionjavax.ejb.TransactionRolledbackLocalException: java.lang.NullPointerException; nested exception is: java.lang.NullPointerException
    java.lang.NullPointerException
         at java.util.ListResourceBundle.handleGetObject(ListResourceBundle.java:107)
         at java.util.ResourceBundle.getObject(ResourceBundle.java:319)
         at java.util.ResourceBundle.getString(ResourceBundle.java:285)
         at java.util.logging.Formatter.formatMessage(Formatter.java:108)
         at oracle.j2ee.util.TraceLogFormatter.format(TraceLogger.java:124)
         at oracle.j2ee.util.TraceLogger$TraceLoggerHandler.publish(TraceLogger.java:105)
         at java.util.logging.Logger.log(Logger.java:428)
         at java.util.logging.Logger.doLog(Logger.java:450)
         at java.util.logging.Logger.log(Logger.java:539)
         at oracle.ias.container.timer.TimerEntry.readObjFromBytes(TimerEntry.java:308)
         at oracle.ias.container.timer.TimerEntry.getInfo(TimerEntry.java:107)
         at oracle.ias.container.timer.Timer.getInfo(Timer.java:367)
         at oracle.ias.container.timer.EJBTimerImpl.getInfo(EJBTimerImpl.java:89)
         at com.finantix.foundation.integration.ejbtimer.EJBTimerServiceExecutorBean.ejbTimeout(EJBTimerServiceExecutorBean.java:42)
         at com.evermind.server.ejb.interceptor.joinpoint.EJBTimeoutJoinPoint.invoke(EJBTimeoutJoinPoint.java:20)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(SetContextActionInterceptor.java:44)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.TxBeanManagedInterceptor.invoke(TxBeanManagedInterceptor.java:53)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
         at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
         at com.evermind.server.ejb.StatelessSessionEJBHome.invokeTimer(StatelessSessionEJBHome.java:38)
         at com.evermind.server.ejb.EJBContainer.invokeTimer(EJBContainer.java:1714)
         at oracle.ias.container.scheduler.TimerTask.runBeanTimer(TimerTask.java:106)
         at oracle.ias.container.scheduler.TimerTask.run(TimerTask.java:220)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
         at java.lang.Thread.run(Thread.java:595)
    javax.ejb.TransactionRolledbackLocalException: java.lang.NullPointerException; nested exception is: java.lang.NullPointerException
         at com.evermind.server.ejb.EJBUtils.getLocalUserException(EJBUtils.java:309)
         at com.evermind.server.ejb.interceptor.system.AbstractTxInterceptor.convertAndHandleMethodException(AbstractTxInterceptor.java:73)
         at com.evermind.server.ejb.interceptor.system.TxBeanManagedInterceptor.invoke(TxBeanManagedInterceptor.java:55)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.InvocationContextPool.invoke(InvocationContextPool.java:55)
         at com.evermind.server.ejb.StatelessSessionEJBObject.OC4J_invokeMethod(StatelessSessionEJBObject.java:87)
         at com.evermind.server.ejb.StatelessSessionEJBHome.invokeTimer(StatelessSessionEJBHome.java:38)
         at com.evermind.server.ejb.EJBContainer.invokeTimer(EJBContainer.java:1714)
         at oracle.ias.container.scheduler.TimerTask.runBeanTimer(TimerTask.java:106)
         at oracle.ias.container.scheduler.TimerTask.run(TimerTask.java:220)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
         at java.lang.Thread.run(Thread.java:595)
    Caused by: java.lang.NullPointerException
         at java.util.ListResourceBundle.handleGetObject(ListResourceBundle.java:107)
         at java.util.ResourceBundle.getObject(ResourceBundle.java:319)
         at java.util.ResourceBundle.getString(ResourceBundle.java:285)
         at java.util.logging.Formatter.formatMessage(Formatter.java:108)
         at oracle.j2ee.util.TraceLogFormatter.format(TraceLogger.java:124)
         at oracle.j2ee.util.TraceLogger$TraceLoggerHandler.publish(TraceLogger.java:105)
         at java.util.logging.Logger.log(Logger.java:428)
         at java.util.logging.Logger.doLog(Logger.java:450)
         at java.util.logging.Logger.log(Logger.java:539)
         at oracle.ias.container.timer.TimerEntry.readObjFromBytes(TimerEntry.java:308)
         at oracle.ias.container.timer.TimerEntry.getInfo(TimerEntry.java:107)
         at oracle.ias.container.timer.Timer.getInfo(Timer.java:367)
         at oracle.ias.container.timer.EJBTimerImpl.getInfo(EJBTimerImpl.java:89)
         at com.finantix.foundation.integration.ejbtimer.EJBTimerServiceExecutorBean.ejbTimeout(EJBTimerServiceExecutorBean.java:42)
         at com.evermind.server.ejb.interceptor.joinpoint.EJBTimeoutJoinPoint.invoke(EJBTimeoutJoinPoint.java:20)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.DMSInterceptor.invoke(DMSInterceptor.java:52)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.SetContextActionInterceptor.invoke(SetContextActionInterceptor.java:44)
         at com.evermind.server.ejb.interceptor.InvocationContextImpl.proceed(InvocationContextImpl.java:119)
         at com.evermind.server.ejb.interceptor.system.TxBeanManagedInterceptor.invoke(TxBeanManagedInterceptor.java:53)
         ... 9 more
    Any idea?
    Thx Auro

  • How to find the Date and time of Installation of Oracle in Windows?

    Hi,
    Is there any way to find the installation date and time of Oracle in Windows?
    Regards
    Rajesh

    If you have never recreated/restored/reincarnated the database, then the CREATED column of v$database should tell you when you created the database.
    If you are looking for the date/time of installation of the Oracle home, check the logs under OraInventory.
    Reply by Satish sir at the following link:
    installation date/time of a database !!
    Hth
    Girish Sharma
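    For quick reference, a minimal sketch of the first check (usable by any account with access to v$database; the answer is only meaningful if the database has never been recreated or restored):
    -- When was this database created?
    select created from v$database;
    For the Oracle home itself, the timestamped installActions*.log files under the OraInventory logs directory (on Windows typically %SystemDrive%\Program Files\Oracle\Inventory\logs, though the location can vary per installation) carry the installation date in their names.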

  • Implementing Auto Contract Generation/Negotiation and Execution

    Would love to hear from anyone who has implemented auto contract generation/negotiation and execution. Were there any lessons learned you could share from implementation? Were there things you found out too late? Are you seeing adoption? How long did the implementation take? Pain points along the road? Things you would do differently? We are getting ready to take this trip ourselves and I have a feeling it will not be smooth sailing.

    Hello Sravan,
    Your message isn't clear to me.
    If you used import for new master data, you can use the automatic key generation ability of the inbound port.
    Another way:
    You can create a special key field in MDM and
    create a simple workflow that fires when a record is added.
    That workflow calls an assignment that fills the key field.
    Regards
    Kanstantsin

  • Query for Cost Code in oracle HRMS

    Hello Experts,
    I am new to Oracle HRMS.
    Can you please tell me how to fetch the cost code in Oracle HRMS R12? (I need a SQL query for the cost code.)
    Thanks

    Try the following (assignment costing as of today):
    -- Current assignment costing: one row per cost allocation line
    select ppx.full_name
    ,ppx.employee_number
    ,pax.assignment_number
    ,pax.primary_flag
    ,pcax.proportion -- share of the costing charged to this code combination
    ,pcak.segment1 -- cost allocation key flexfield segments (the "cost code")
    ,pcak.segment2
    ,pcak.segment3
    ,pcak.segment4
    from per_assignments_x pax
    ,per_people_x ppx
    ,pay_cost_allocations_x pcax
    ,pay_cost_allocation_keyflex pcak
    where ppx.person_id = pax.person_id
    and pax.assignment_id = pcax.assignment_id
    and pcak.cost_allocation_keyflex_id = pcax.cost_allocation_keyflex_id;
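    Note that the _x views above return today's picture only. If you need the costing as of a specific date, a hedged sketch against the date-tracked tables (per_all_assignments_f and pay_cost_allocations_f; the date literal is just a placeholder) could look like:
    -- Assignment costing as of a given date
    select pcak.segment1
    ,pcak.segment2
    ,pcaf.proportion
    from per_all_assignments_f paf
    ,pay_cost_allocations_f pcaf
    ,pay_cost_allocation_keyflex pcak
    where paf.assignment_id = pcaf.assignment_id
    and pcak.cost_allocation_keyflex_id = pcaf.cost_allocation_keyflex_id
    and to_date('01-JAN-2015','DD-MON-YYYY') between paf.effective_start_date and paf.effective_end_date
    and to_date('01-JAN-2015','DD-MON-YYYY') between pcaf.effective_start_date and pcaf.effective_end_date;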

  • UNDERSTAND THE NEW DATE AND TIME DATA TYPES IN ORACLE 9I

    Product : SQL*PLUS
    Date written : 2001-08-01
    UNDERSTAND THE NEW DATE AND TIME DATA TYPES IN ORACLE 9I
    ========================================================
    PURPOSE
    This note introduces the new datetime data types in Oracle 9i.
    Explanation
    Example
    1. Datetime Datatypes
    1) TIMESTAMP
    : YEAR/MONTH/DAY/HOUR/MINUTE/SECOND
    2) TIMESTAMP WITH TIME ZONE
    : YEAR/MONTH/DAY/HOUR/MINUTE/SECOND/
    TIMEZONE_HOUR/TIMEZONE_MINUTE( +09:00 )
    or TIMEZONE_REGION( Asia/Seoul )
    3) TIMESTAMP WITH LOCAL TIME ZONE
    : YEAR/MONTH/DAY/HOUR/MINUTE/SECOND
    4) TIME WITH TIME ZONE
    : HOUR/MINUTE/SECOND/TIMEZONE_HOUR/TIMEZONE_MINUTE
    2. Datetime Fields
    1) YEAR/MONTH/DAY/HOUR/MINUTE
    2) SECOND : 00 to 59.9(N), where 9(N) is the fractional seconds precision; precision range 0 to 9, default is 6
    3) TIMEZONE_HOUR : -12 to 13
    4) TIMEZONE_MINUTE : 00 to 59
    5) TIMEZONE_REGION : Listed in v$timezone_names
    3. Differences between DATE and TIMESTAMP
    SQL> select hiredate from employees;
    HIREDATE
    17-DEC-80
    20-FEB-81
    SQL> alter table employees modify hiredate timestamp;
    SQL> select hiredate from employees;
    HIREDATE
    17-DEC-80 12.00.00.000000 AM
    20-FEB-81 12.00.00.000000 AM
    Note, however, that if the column already contains data, you cannot convert
    DATE/TIMESTAMP to TIMESTAMP WITH TIME ZONE.
    SQL> alter table employees modify hiredate timestamp with time zone;
    alter table employees modify hiredate timestamp with time zone
    ERROR at line 1:
    ORA-01439: column to be modified must be empty to change datatype
    4. TIMESTAMP WITH TIME ZONE Datatype
    TIMESTAMP '2001-05-24 10:00:00 +09:00'
    TIMESTAMP '2001-05-24 10:00:00 Asia/Seoul'
    TIMESTAMP '2001-05-24 10:00:00 KST'
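    A minimal sketch of storing and reading back such a literal (the table name tstz_tab is illustrative; the displayed format depends on your NLS settings):
    SQL> create table tstz_tab (meeting_time timestamp with time zone);
    SQL> insert into tstz_tab values (TIMESTAMP '2001-05-24 10:00:00 +09:00');
    SQL> select meeting_time from tstz_tab;
    MEETING_TIME
    24-MAY-01 10.00.00.000000 AM +09:00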
    5. TIMESTAMP WITH LOCAL TIME ZONE Datatype
    SQL> create table date_tab (date_col TIMESTAMP WITH LOCAL TIME ZONE);
    SQL> insert into date_tab values ('15-NOV-00 09:34:28 AM');
    SQL> select * from date_tab;
    DATE_COL
    15-NOV-00 09.34.28.000000 AM
    SQL> alter session set TIME_ZONE = 'EUROPE/LONDON';
    SQL> select * from date_tab;
    DATE_COL
    15-NOV-00 12.34.28.000000 AM
    6. INTERVAL Datatypes
    1) INTERVAL YEAR(year_precision) TO MONTH
    : YEAR/MONTH
    : Year_precision default value is 2
    SQL> create table orders (warranty interval year to month);
    SQL> insert into orders values ('2-6');
    SQL> select warranty from orders;
    WARRANTY
    +02-06
    2) INTERVAL DAY (day_precision) TO SECOND (fractional_seconds_precision)
    : DAY/HOUR/MINUTE/SECOND
    : Mainly used when checking logon time
    : day_precision range 0 to 9, default is 2
    SQL> create table orders (warranty interval day(2) to second);
    SQL> insert into orders values ('90 00:00:00');
    SQL> select warranty from orders;
    WARRANTY
    +90 00:00:00.000000
    7. Interval Fields
    - YEAR : Any positive or negative integer
    - MONTH : 00 to 11
    - DAY : Any positive or negative integer
    - HOUR : 00 to 23
    - MINUTE : 00 to 59
    - SECOND : 00 to 59.9(N) where 9(N) is precision
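    Intervals combine with datetime values through ordinary arithmetic; a small illustrative sketch (the values are arbitrary):
    SQL> select TIMESTAMP '2001-05-24 10:00:00' + INTERVAL '1-6' YEAR TO MONTH from dual;
    This returns 24-NOV-02 10.00.00 AM, i.e. one year and six months later (the exact display depends on NLS settings).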
    8. Using Time Zones
    1) Database operation
    - Defined at CREATE DATABASE
    - Can be altered with ALTER DATABASE
    - Current value given by DBTIMEZONE
    2) Session operation
    - Defined with environment variable ORA_SDTZ
    - Can be altered with ALTER SESSION SET TIME_ZONE
    - Current value given by SESSIONTIMEZONE
    3) TIMESTAMP WITH LOCAL TIMEZONE
    - TIME_ZONE Session parameter
    : O/S Local Time Zone
    Alter session set time_zone = '-05:00';
    : An absolute offset
    Alter session set time_zone = dbtimezone;
    : Database time zone
    Alter session set time_zone = local;
    : A named region
    Alter session set time_zone = 'America/New_York';
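    To verify both settings, a quick sketch (DBTIMEZONE and SESSIONTIMEZONE are built-in functions):
    SQL> select dbtimezone, sessiontimezone from dual;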

    Hi ,
    I am facing the same problem and my scenario is also same (BAPI's).
    So can you please tell me how you overcome this problem .
    Thanks,
    Rahul

  • Benefits of implementing Central Contracts in SRM 7.

    Dear Experts,
    Picture the following:
    - Classic Scenario.
    - One ECC 6 EHP 4 Back end.
    I'm currently exploring the benefits of implementing Central Contracts, as opposed to using only Purchasing Contracts in ECC and replicating them to SRM for use in shopping carts.
    So my question is: why would you implement central contracts in SRM instead of just replicating these contracts from ECC? What benefits would you get from having central contracts in place versus having only Purchasing Contracts in ECC?
    Thanks for your input.

    Adding one point to Manju's reply:
    - The procurement department (purchaser) can use the web interface rather than the GUI to create and manage contracts, making contract creation and maintenance quicker.
    - The approval mechanism is simplified, with less effort.
    - There are several ways to create a contract from SRM:
       Copy an existing contract
       Use an existing template
       Upload an external file
       Upload a contract from the catalog
    Muthuraman
    1. You can use this business process to access SAP SRM contract features, such as contract hierarchies, discounts across contract hierarchies, and grouping logic, when determining the source of supply in SAP ERP.
    2. You can create a central contract in SAP SRM, and it can then be used as a source of supply in both SAP SRM and SAP ERP. Relevant data is sent to SAP ERP for source of supply determination, and a specific type of contract or scheduling agreement can be created there.
    3. While determining a source of supply, you can access central contracts directly.
    4. The price is determined in SAP SRM before the SAP ERP purchase order (PO) is sent to the supplier.
    5. You can create and change central contracts and renegotiate existing contracts directly with a supplier, or through the creation of an RFx. You can automatically assign a contract as a source of supply, or it can be listed as one of numerous potential source of supply contracts. A strategic purchaser can create a contract whenever they anticipate a long-term relationship with a supplier.
    6. Contract management enables purchasers from various parts of the company at different locations to take advantage of the terms of globally negotiated contracts for specific product categories. You can provide users with specific levels of authorization to contracts, and you can categorize documents as confidential.
    7. You can distribute central contracts to release-authorized purchasing organizations, and these organizations can then use them as a source of supply in the appropriate SAP ERP system. Hierarchies can be used to organize, structure, display, and search for contracts.
    8. If you use SAP NetWeaver Business Intelligence (SAP BI), you can view various consolidated reports of contract management. For example, you can view the aggregated value released against all contracts in a contract hierarchy.
