Oracle Database abnormal growth

Dear all,
Can anybody suggest the best method to analyze and check abnormal growth of a database?
Our database is currently 3.8 TB, we have more than 1,300 concurrent users, and we collect statistics daily from DB02, ST04, ST10, and program RSSTAT10.
Can anybody outline the various options available?
Thanks and regards
Asif
Message was edited by: Mohammed Asif
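A quick way to see which segments are driving the growth, independent of DB02, is to query the data dictionary directly. This is a minimal sketch (the row limit of 20 is arbitrary), not an SAP-specific tool:
-- Largest segments in the database, biggest first.
SELECT *
FROM  (SELECT owner, segment_name, segment_type,
              ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
       FROM   dba_segments
       ORDER  BY bytes DESC)
WHERE  ROWNUM <= 20;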

Mohammed,
There is a lot of work to do here, and no easy fix.
For GLPCA, you need to work with the application teams on data archiving; as far as I know, that is the only way to shrink this table.
The TST* tables are spool/TemSe related. Go over OSS notes 160083 and 48400 and make sure the spool and TemSe consistency-check jobs run periodically, in line with your company's retention period requirements.
CDCLS is the change-document data (cluster) table. You can get more meaningful information from CDHDR, the header table, such as how old the data is. You need change-document deletion/reorganization or archiving jobs scheduled. I do not have the OSS notes handy, but search OSS and schedule the deletion jobs based on your company's retention period requirements.
After running the cleanup jobs, do not forget to refresh the table statistics (ANALYZE TABLE or DBMS_STATS) at the database level and see how much space you have gained. If it is more than 40%, I recommend a full table reorganization; otherwise unbalanced indexes may cost you performance, and the freed space cannot be returned to the tablespace as free space for other tables to use.
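As a minimal sketch of that space check, assuming the typical SAPR3 schema owner and using DBMS_STATS in place of the older ANALYZE TABLE statement (table names are illustrative):
-- Refresh statistics for a cleaned-up table, then compare the space the
-- segment still occupies with an estimate of the space the rows actually use.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'GLPCA');

SELECT t.owner, t.table_name,
       ROUND(s.bytes / 1024 / 1024)                    AS allocated_mb,
       ROUND(t.num_rows * t.avg_row_len / 1024 / 1024) AS used_mb_estimate
FROM   dba_tables   t
JOIN   dba_segments s
       ON s.owner = t.owner AND s.segment_name = t.table_name
WHERE  t.owner = 'SAPR3'
AND    t.table_name IN ('GLPCA', 'CDCLS');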
Good luck!
Neval

Similar Messages

  • How to choose the right server for an Oracle Database!

    Hi all,
    Is there any formula that allows me to find the appropriate server configuration for an Oracle Database 11g?
    Please help!
    Thank you all.
    Dan.
    Edited by: Dan on 01:45 06-01-2013

    Simply put ... no.
    Determining the correct server is a dance between hardware vendors (one point of input to be treated with general distrust), internal expertise and experience working with similar systems, and most importantly understanding the behaviour of what you are building.
    Here are some sample questions to start the ball rolling.
    1. What operating system expertise does your organization have?
    2. What operating system expertise is readily available in the community from which you hire applicants?
    3. What type of application?
    4. What are up-time expectations?
    5. Requirements for high availability?
    6. Requirements for time to perform a full recovery?
    7. Requirements for security?
    8. Storage footprint (dictates internal storage versus SAN, NAS, or ZFS)
    9. Anticipated growth (3-7 years)
    Everyone wants to sell you CPU clock-ticks, more RAM, more IOPS, etc. But the first thing you need to understand is the generic capabilities you require, the ones required of any solution. Only when you have a clear understanding of these factors are you ready to discuss whether the solution is a pizza box or an ODA ... an Exadata or an IBM System Z.
    Far too often I find people purchasing RAM they don't need, with CPU speeds that exceed the ability of their network infrastructure to move the data. So when you are ready to make your purchase, pay special attention to engineered systems like the ODA.

  • Oracle Databases and Storage Systems like HP EVA

    Hi everybody,
    Are there general advantages or disadvantages to using such storage systems?
    I would be happy if somebody could talk about his experience using such systems.
    Is performance with Oracle databases better or worse? Is my data more or less secure than with a local RAID 5 or RAID 1+0, for example? Is there anything else I should know before using such a storage system?
    What type of storage are you using? What type of disks would be best (SCSI/FATA)?
    Please tell me a little about your experience with storage systems and Oracle databases ;-)
    Thank you
    scratch


  • Oracle Database Size Estimation Technique

    Hello All,
    I am planning to estimate the size of an Oracle database. It's a fresh database and will not take any data from legacy or flat-file systems.
    Can any of you provide a checklist of sorts that would let me estimate the size of my database, taking its future growth into account as well?
    Or if you have a document that explains how to estimate the size of an Oracle database, could you please share it as well?
    Thanks in advance
    Himanshu

    The estimate will depend on your installation type and configuration: OS, Oracle edition, release, and so on. For example, the information below is from the Oracle® Database Installation Guide
    10g Release 2 (10.2) for Linux x86, Chapter 2 "Preinstallation Tasks" - http://download.oracle.com/docs/cd/B19306_01/install.102/b15660/pre_install.htm#sthref95
    # 400 MB of disk space in the /tmp directory
    # Between 1.5 GB and 3.5 GB of disk space for the Oracle software, depending on the installation type
    # 1.2 GB of disk space for a preconfigured database that uses file system storage (optional)
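    Beyond the software footprint, a rough estimate of the data size can be built from expected row counts and average row lengths. A minimal sketch, where the table name, the 30% overhead factor, and the growth factor are illustrative assumptions (avg_row_len can come from a small prototype load followed by DBMS_STATS):
    -- Hypothetical sizing worksheet for one table; repeat per table and sum.
    SELECT table_name,
           num_rows,
           avg_row_len,
           ROUND(num_rows * avg_row_len * 1.3 / 1024 / 1024)     AS est_mb_now,        -- ~30% block/index overhead (assumption)
           ROUND(num_rows * avg_row_len * 1.3 * 2 / 1024 / 1024) AS est_mb_after_growth -- e.g. doubling over the planning horizon (assumption)
    FROM   user_tables
    WHERE  table_name = 'MY_PROTOTYPE_TABLE';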
    Best regards.

  • Use OEM to monitor oracle database size

    Hello All,
    We currently have several databases monitored via OEM; however, I would appreciate it if anyone could tell me how I can monitor the size of my Oracle databases via OEM.
    Thanks

    Hi,
    I'm not sure whether OEM will monitor database size, but many third-party tools do.
    There are many ways to monitor Oracle tables, and many people create specialized extension metadata tables to track table growth. In Oracle 10g you can monitor table growth with the dba_hist_seg_stat view, specifically the space_used_total column.
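    For example, a minimal sketch of a growth query against those AWR views (note that querying them requires the Diagnostics Pack license; the owner filter is illustrative):
    -- Space used per segment at each AWR snapshot, oldest first.
    SELECT o.owner,
           o.object_name,
           sn.begin_interval_time,
           ROUND(s.space_used_total / 1024 / 1024) AS space_used_mb
    FROM   dba_hist_seg_stat     s
    JOIN   dba_hist_seg_stat_obj o  ON o.dbid = s.dbid
                                   AND o.obj# = s.obj#
                                   AND o.dataobj# = s.dataobj#
    JOIN   dba_hist_snapshot     sn ON sn.snap_id = s.snap_id
                                   AND sn.dbid = s.dbid
                                   AND sn.instance_number = s.instance_number
    WHERE  o.owner = 'MYAPP'
    ORDER  BY o.object_name, sn.begin_interval_time;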
    On my databases, I create STATSPACK extension tables to track table and overall database growth:
    http://www.dba-oracle.com/te_table_monitoring.htm
    The full scripts are in my Oracle Press book, Oracle tuning with STATSPACK . . .
    http://www.amazon.com/Oracle9i-High-Performance-Tuning-STATSPACK-Burleson/dp/007222360X
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • Oracle databases to MySQL server

    Hi!
    Can anybody suggest a program to move Oracle databases to a MySQL server?
    Thanks.
    Abdulkadir Muhammed

    > If the point is cost ... move it to Application Express
    .. on Oracle XE (eXpress Edition).
    Both products are free. This software stack rivals LAMP, with fewer software layers, and it uses the core of the best commercial database on the market, which allows you future growth and scalability, something LAMP does quite poorly.

  • Oracle Database Disconnection with VC Application

    Hi,
    We are facing a major issue in our application: it shows the customized error "Connection to the database server is disconnected" and then terminates abruptly. The error sometimes pops up immediately after opening the application, sometimes only after a period of time.
    I am not able to figure out where the error is coming from, and the application does not generate any logs.
    The application is built in VC with a two-tier architecture. It uses Oracle tnsnames and an ODBC connection to connect to the Oracle database.
    I did some basic analysis of network availability: I opened Oracle SQL*Plus and our application simultaneously on the client machine (Windows XP). The application was disconnected with the error above while SQL*Plus kept working normally.
    I have also switched on connection tracing in the sqlnet.ora file, but I cannot make sense of the trace file, as it has no "null" named entries for the connection values.
    Whenever the application terminates abnormally it leaves its ACTIVE session behind as INACTIVE, and I then kill the Oracle sessions manually using ORAKILL.
    Please provide some pointers on how I could log the exception information when an exception is raised in the Oracle database; with that log information I would like to track the issue back to its cause.
    Oracle Database Environment Details
    OS: Windows 2000 Server (SP4)
    Oracle Database: Oracle 8i Enterprise Edition (8.1.7)
    Please let me know if any further information is required.
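    One common way to capture server-side errors is a database-level SERVERERROR trigger. This is only a hedged sketch (the table and trigger names are made up, and availability of the event attribute functions should be verified on 8.1.7):
    -- Hypothetical error-logging table and trigger.
    CREATE TABLE error_log (
      logged_at DATE,
      db_user   VARCHAR2(30),
      terminal  VARCHAR2(64),
      ora_error NUMBER
    );

    CREATE OR REPLACE TRIGGER trg_log_server_errors
    AFTER SERVERERROR ON DATABASE
    DECLARE
      PRAGMA AUTONOMOUS_TRANSACTION;  -- keep the log row even if the failing transaction rolls back
    BEGIN
      INSERT INTO error_log
      VALUES (SYSDATE,
              SYS_CONTEXT('USERENV', 'SESSION_USER'),
              SYS_CONTEXT('USERENV', 'TERMINAL'),
              ora_server_error(1));   -- first error on the error stack
      COMMIT;
    END;
    /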

    @SomeoneElse
    It has been happening from the beginning, say two or three times a day, but for the last two months the problem has become very frequent (roughly once every 15 minutes). Users cannot work with the application.

  • Just started supporting Oracle Databases  for SAP Systems

    Hello Guys/Girls,
    I have been working as an Oracle DBA for the past 7 years, but this is the first time I have had an opportunity to administer databases supporting SAP systems. I have been searching the SAP Network and have gathered information on BRTOOLS, SAPGUI, etc., but could not find the details I am looking for, such as database object growth rates, SAP-specific Oracle performance details, and the do's and don'ts for an Oracle DBA. After a lot of searching on Google, I found a book from O'Reilly, but it covers older SAP and Oracle versions. Our client uses Oracle 9i and 10g primarily.
    I am confident as an Oracle DBA supporting non-SAP systems, but not SAP systems. Could any of you please suggest or provide some guidelines in terms of database management?
    Appreciate your time!

    Abhi,
    Great to meet another DBA in the forum. The big problem we face is SQL tuning.
    Supporting SAP is not much different. SAP provides some ready-made tools to monitor and to solve problems. For tuning you can still follow the same methods, but with approval from SAP or by referring to SAP notes, which you can find in the Service Marketplace.
    You can find some PDFs in the SAP Marketplace and on SDN that are written specifically for DBAs.
    For growth rates you can look at the SAP transactions, and there are some notes as well. I don't think there is a specific book or document that lists the do's and don'ts.
    For performance we follow the same methods as any DBA, with reference to SAP notes and support for any side effects.
    Let me know if you need any specific information.
    Regards
    Vinod

  • How to load a java script in oracle database

    Is it possible to load a JavaScript file into an Oracle database while the object type is
    a Java resource?

    RENUJP wrote:
    I meant to load a JavaScript into an Oracle database, not into an Oracle application,
    like loadjava....
    I can load a JavaScript into an Oracle database, but I can't call it...
    Please re-read the comments above, especially the part about this not being a JavaScript nor an Oracle forum. Exactly what part of this information don't you understand?

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    Below are my settings and what I have done, step by step.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
      id         NUMBER(9) PRIMARY KEY,
      first_name VARCHAR2(10),
      last_name  VARCHAR2(10)
    );
    2. THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE (TOTAL 2,599,999 ROWS):
    declare
      firstname varchar2(12);
      lastname  varchar2(12);
      catt      number(9);        -- declared but not used in the original block
    begin
      for cntr in 1..2599999 loop
        firstname := (cntr + 8) || 'f';
        lastname  := (cntr + 2) || 'l';
        if cntr like '%9999' then -- progress message roughly every 10,000 rows
          dbms_output.put_line(cntr);
        end if;
        insert into student values (cntr, firstname, lastname);
      end loop;
    end;
    /
    3. MY DSN IS SET THE FOLLOWING WAY:
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar
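    One thing not covered in the thread, offered only as a hedged suggestion: the lookup column first_name has no index in either store, so both engines scan all ~2.6 million rows. TimesTen allows additional indexes on cache group tables, so an index like the one below (the name is illustrative) might be worth testing on both sides before comparing timings:
    -- Illustrative secondary index so the first_name lookup avoids a full table scan.
    CREATE INDEX student_fn_ix ON student (first_name);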

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) that I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    In this query I uncomment one column at a time and rerun it. I have improved the TimesTen results since my first post by retyping the NUMBER columns as BINARY_FLOAT. The results I got were:
    No Columns     ORACLE     TimesTen     
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • Using Oracle ODBC Gateway connecting to a remote Oracle database

    Oracle 11gR2
    RHEL 6.4
    Has anyone used the Oracle ODBC Gateway to connect to another Oracle database?  Any issues with that configuration?  Where do I get the ODBC drivers for Linux?
    (I know, "why not use a dblink?" -- well that would be against company security policies)

    Hi,
       From the Oracle point of view, we support using DG4ODBC for Oracle-to-Oracle connections. However, we have not actually tested it, as DG4ODBC is primarily designed for access to non-Oracle databases.
    How DG4ODBC works between Oracle databases depends on the ODBC driver and what it supports. You will need an Oracle ODBC driver, which you can get from various suppliers, including Oracle but also vendors such as DataDirect, Easysoft, etc. You could try a Google search.
    You say you do not want to use database links, but that is how DG4ODBC is used. You cannot do:
    sqlplus user/password@dg4odbc_oracle
    Once DG4ODBC is set up and configured as described in this note (if you are using 64-bit Linux):
    How to Configure DG4ODBC on 64bit Unix OS (Linux, Solaris, AIX, HP-UX Itanium) to Connect to Non-Oracle Databases Post Install (Doc ID 561033.1)
    then you create a database link in the Oracle database and select from tables in the other Oracle database:
    select * from table@dg4odbc_db_link ;
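    For illustration, a hedged sketch of that last step; the link name, credentials, and TNS alias are placeholders, not values from this thread:
    -- Database link that goes through the DG4ODBC gateway entry configured
    -- in tnsnames.ora and listener.ora (alias name is illustrative).
    CREATE DATABASE LINK dg4odbc_db_link
      CONNECT TO "remote_user" IDENTIFIED BY "remote_password"
      USING 'dg4odbc_tns_alias';

    -- Then query the remote Oracle table through the gateway:
    SELECT * FROM some_table@dg4odbc_db_link;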
    Regards,
    Mike

  • Problem with httpd and oracle database

    Hi,
    I have Oracle Linux with Oracle Database 11gR2.
    I installed PHP with OCI following this link:
    http://oss.oracle.com/projects/php/dist/documentation/installation.html
    but when I run phpinfo() the only OCI-related entry I find in the output is
    "/etc/php.d/oci8.ini",
    and when I test the connection I get this error:
    "Fatal error: Call to undefined function oci_connect()"
    So what is the best way to install OCI on Oracle Linux?

    What version of the OS are you using? Any clues in the syslog and httpd log? Are you using SELinux, which is often an overlooked issue?
    Can you provide more details?
    > so what is the best way to install oci on oracle linux?
    According to the instructions, if support is needed: http://www.oracle.com/technetwork/topics/php/zend-server-096314.html

  • Import error on Oracle Database Express 10.2.0.1.0

    Hi,
    I am trying to import data from Oracle V10.01.02 running on SUSE 10 Linux into
    Oracle 10g (10.1.0.2.0) running on Windows Server 2003.
    I am able to import the bigger part of my data, but not all of it.
    The begin of my log file is:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0
    - Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.01.00 via conventional path
    And after some time I receive:
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "SAFCI"."EMP"."NAME"
    (actual: 65, maximum: 64)
    Column 1 1000005025
    Column 2 ??????? ???????
    Column 3 19-SEP-2002:00:00:00
    Column 4 9089
    Column 5 ??. ?????
    Column 6 1.9828
    Column 7 377.77
    Column 8 75.55
    Column 9 ???????????? ???????? ? ??? ???? ? 32 ??.
    Column 10 19-SEP-2002:00:00:00
    Column 11 ?????? ?????, ?.?. 172747675, ???. 23.11.200?
    Column 12 ????????? ?????????
    Column 13 ? ????
    Column 14 T
    Column 15
    Column 16 F
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "SAFCI"."EMP"."NAME"
    (actual: 65, maximum: 64)
    Column 1 1000006408
    Column 2 ??????? ???????
    Column 3 05-NOV-2002:00:00:00
    Column 4 9089
    Column 5 ??. ?????
    Column 6 1.939
    Column 7 82
    Column 8 16.4
    Column 9 ?????????? ? ???? ???? ? 40 ??.
    Column 10 05-NOV-2002:00:00:00
    Column 11 ?????? ?????, ?.?. 172747675, ???. 23.11.200?
    Column 12 ????????? ?????????
    Column 13 ? ????
    Column 14 T
    Column 15
    Column 16 F 36943 rows imported
    I cannot understand this problem, because I exported the whole user and
    am also trying to import the whole user into my new system.
    Please, can someone point me to a paper about this problem or help me
    solve it?
    Thanks
    configuration Oracle10g (EXP source)
    SQLWKS> select * from nls_database_parameters
    2>
    PARAMETER VALUE
    NLS_LANGUAGE BRAZILIAN PORTUGUESE
    NLS_TERRITORY BRAZIL
    NLS_CURRENCY R$
    NLS_ISO_CURRENCY BRAZIL
    NLS_NUMERIC_CHARACTERS ,.
    NLS_CHARACTERSET WE8ISO8859P1
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD/MM/RR
    NLS_DATE_LANGUAGE BRAZILIAN PORTUGUESE
    NLS_SORT WEST_EUROPEAN
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD/MM/RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT DD/MM/RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY Cr$
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.1.0.2.0
    configuration Oracle10g Express (IMP destination)
    SQL> select * from nls_database_parameters;
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET AL32UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.2.0.1.0

    Hi
    Your import database uses a multibyte character set, while your export database uses a single-byte one.
    This means a character can need more than one byte.
    Try this before the import (and before creating the tables!!):
    alter system set nls_length_semantics=char;
    Greetings
    Sven
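    To illustrate the same idea at the table level, a hedged sketch; only the NAME column and its 64-character limit come from the error message, the rest is placeholder:
    -- Pre-create the target table with CHAR length semantics so 64 means
    -- 64 characters rather than 64 bytes in the AL32UTF8 database.
    CREATE TABLE safci.emp (
      name VARCHAR2(64 CHAR)
      -- ... remaining columns as in the source table ...
    );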

  • How to take Input from the callers telephone keypad into oracle database

    My client wants to automate a registration task for his customers over the telephone. The requirement is that the Oracle database should take numeric input from the caller's telephone keypad (e.g. 1, 2, or 3), store this input, do some required processing with it, generate a unique numeric ID (sequence), and send this unique ID back to the caller, who is on hold. Please note: is there any electronic device that has to be installed at the client site for this, e.g. a telephone line connected to the CPU, etc.?
    I have zero knowledge in this area. Could you outline or summarise the steps to build and install something that meets my client's requirement?
    If this is not possible in Oracle I can migrate to any database; please suggest.
    Please help me. If you can't help, I would appreciate any clues you can give me.

    You need to start with your telephone system. Normally this situation would involve some sort of PBX or other internal switching equipment. Talk to your vendor. There's likely to be software, etc. available to perform the interaction with the end user.
    It should have programmable "hooks" to interface with various databases. To Oracle it just looks like another program.
    Ken

  • Help needed to loadjava apache poi jars into oracle database.

    I need help loading the Apache POI jars into an Oracle database with loadjava; many classes are left unresolved (POI 3.7, database 11.1.0.7). Please share your experience!

    Hi,
    The first 3 steps are just perfect.
    But with
    loadjava.bat -user=user/pw@connstr -force -resolve geronimo-stax-api_1.0_spec-1.0.jar
    the results are rather unexpected. Here is a part of the log file:
    arguments: '-user' 'ccc/***@bisera7-db.dev.srv' '-fileout' 'c:\temp\load4.log' '-force' '-resolve' '-jarsasdbobjects' '-v' 'geronimo-stax-api_1.0_spec-1.0.jar'
    The following operations failed
    resource META-INF/MANIFEST.MF: creation (createFailed)
    class javax/xml/stream/EventFilter: resolution
    class javax/xml/stream/events/Attribute: resolution
    class javax/xml/stream/events/Characters: resolution
    class javax/xml/stream/events/Comment: resolution
    class javax/xml/stream/events/DTD: resolution
    class javax/xml/stream/events/EndDocument: resolution
    class javax/xml/stream/events/EndElement: resolution
    class javax/xml/stream/events/EntityDeclaration: resolution
    class javax/xml/stream/events/EntityReference: resolution
    class javax/xml/stream/events/Namespace: resolution
    class javax/xml/stream/events/NotationDeclaration: resolution
    class javax/xml/stream/events/ProcessingInstruction: resolution
    class javax/xml/stream/events/StartDocument: resolution
    class javax/xml/stream/events/StartElement: resolution
    class javax/xml/stream/events/XMLEvent: resolution
    class javax/xml/stream/StreamFilter: resolution
    class javax/xml/stream/util/EventReaderDelegate: resolution
    class javax/xml/stream/util/StreamReaderDelegate: resolution
    class javax/xml/stream/util/XMLEventAllocator: resolution
    class javax/xml/stream/util/XMLEventConsumer: resolution
    class javax/xml/stream/XMLEventFactory: resolution
    class javax/xml/stream/XMLEventReader: resolution
    class javax/xml/stream/XMLEventWriter: resolution
    class javax/xml/stream/XMLInputFactory: resolution
    class javax/xml/stream/XMLOutputFactory: resolution
    class javax/xml/stream/XMLStreamReader: resolution
    resource META-INF/LICENSE.txt: creation (createFailed)
    resource META-INF/NOTICE.txt: creation (createFailed)
    It seems to me that the root of the problem is the error:
    ORA-29521: referenced name javax/xml/namespace/QName could not be found
    This class exists in the SYS schema, though, and is valid. Should SYS be included as a resolver? How can I solve this problem?
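    One hedged option, not confirmed in this thread: pass an explicit resolver spec so that classes already present in SYS (such as javax.xml.namespace.QName) are used during resolution. The schema name CCC is taken from the log above; everything else is illustrative:
    loadjava.bat -user=ccc/pw@bisera7-db.dev.srv -force -resolve -resolver "((* CCC) (* SYS) (* PUBLIC) (* -))" geronimo-stax-api_1.0_spec-1.0.jar
    The trailing "(* -)" entry tells loadjava not to report errors for references it cannot find in the listed schemas, instead of failing the whole load.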
