Query speed

Hi dear friends,
I have a problem with my Oracle9i Database R2 on Red Hat Linux 2.1. I have a table with 7 million records in it. When I run my query, it shows the answer after 3 minutes. I have another Oracle9i Database R2 on Windows 2000, and when I run the same query there, it answers after 1 minute. The hardware on both is the same: one GB of RAM and one Intel 2.8 GHz CPU with full cache.
The Linux kernel settings are:
kernel.sem = 250 32000 100 128
kernel.shmmax = 2147483647
kernel.shmmni = 4096
kernel.shmall = 2097152
fs.file-max = 65536
and the Oracle PFILE is:
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# Cache and I/O
db_block_size=8192
db_cache_size=25165824
db_file_multiblock_read_count=16
# Cursors and Library Cache
open_cursors=300
# Database Identification
db_domain=""
db_name=redora
# Diagnostics and Statistics
background_dump_dest=/opt/oracle/admin/redora/bdump
core_dump_dest=/opt/oracle/admin/redora/cdump
timed_statistics=TRUE
user_dump_dest=/opt/oracle/admin/redora/udump
# File Configuration
control_files=("/opt/oracle/oradata/redora/control01.ctl",
"/opt/oracle/oradata/redora/control02.ctl",
"/opt/oracle/oradata/redora/control03.ctl")
# Instance Identification
instance_name=redora
# Job Queues
job_queue_processes=10
# MTS
dispatchers="(PROTOCOL=TCP) (SERVICE=redoraXDB)"
# Miscellaneous
aq_tm_processes=1
compatible=9.2.0.0.0
# Optimizer
hash_join_enabled=TRUE
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
# Pools
java_pool_size=83886080
large_pool_size=8388608
shared_pool_size=83886080
# Processes and Sessions
processes=150
# Redo Log and Recovery
fast_start_mttr_target=300
# Security and Auditing
remote_login_passwordfile=EXCLUSIVE
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=25165824
sort_area_size=524288
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_retention=10800
undo_tablespace=UNDOTBS1
and the SGA is:
Total System Global Area 235999352 bytes
Fixed Size 450680 bytes
Variable Size 201326592 bytes
Database Buffers 33554432 bytes
Redo Buffers 667648 bytes
I think the problem is in my Linux configuration, but I don't have much experience with Linux.
Thanks

Easy: just log on with SQL*Plus, run the query, and check the path it's using with autotrace:
set autotrace on
Run your query and check the result. I think the two databases are using different plans. Remember that you must analyze the table first, and I don't think your problem has anything to do with Linux.
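For example, a minimal SQL*Plus session along those lines (my_big_table and owner_id are placeholders for your own table and filter):
-- gather optimizer statistics so both databases can cost the query properly
analyze table my_big_table estimate statistics sample 20 percent;
-- show the execution plan and run statistics alongside the results
set autotrace on
set timing on
select count(*) from my_big_table where owner_id = 42;
If the Explain Plan section differs between the Linux and the Windows server, the gap is in statistics or optimizer settings, not the operating system.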

Similar Messages

  • How to improve this query speed ?....help me

    How can I improve the query speed? Can you suggest any hints or corrections for the query? I am using sample tables here for checking purposes; when I run it against my real data, this type of query takes much longer:
    select ename,sal,comm from emp where (comm is null and &status='ok') or (comm is not null and &status='failed');
    Thanks in advance,
    prasanth a.s.

    What about
    select ename,sal,comm from emp where comm is null and &status='ok'
    union all
    select ename,sal,comm from emp where comm is not null and &status='failed';
    Because &status is substituted as a constant, each branch's condition is either always true or always false, so the optimizer can discard the dead branch instead of evaluating the OR for every row.
    Regards
    Vaishnavi

  • Improve query speed

    Hi there,
    I'm relatively new to PL/SQL and APEX, and I'm developing an application in APEX. I have a report on a page which pulls out the orders according to the type of user. If the type is not 'LOG_FARM', 'MANAGER', or 'MNG_FARM' the code runs in acceptable time, but when the user is one of those three it takes almost 3 minutes to finish the query. What I wanted to know is whether you see any clear change I could make within the code to improve the speed, or whether I could, for example, bring back only the newest results (although this last option is not good, because I have a search engine above and that way I won't have access to older records).
    Declare
      v_restricao varchar2(100);
      v_restricao_dim varchar2(4000);
    Begin
      if :G_APP_USER_EMP_TYPE = 'LOG_FARM' then
        v_restricao := 'and FO.ORDER_LOGISTICS = ''Y''';
      else
        v_restricao := null;
      end if;
      if :G_APP_USER_EMP_TYPE IN ('MNG_FARM','DIM_FARM') then
        v_restricao_dim := '
          WHERE pharmacyid in (SELECT pharm_id
                                 FROM dim_user
                                WHERE UPPER (emp_login) = UPPER ('''||:app_user||''')) ';
      end if;
      if :G_APP_USER_EMP_TYPE = 'CALL_FARM' then
        v_restricao_dim := '
          WHERE pharmacyid IN (SELECT DISTINCT pharmid
                                 FROM dimuser
                                WHERE SEL_PHARM = ''Y'' ) ';
      end if;
      if :G_APP_USER_EMP_TYPE IN ('LOG_FARM','MANAGER','MNG_FARM') then
        return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc,
                       to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
                       fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
                       fo.order_stat
                  FROM fact_order fo,
                       dim_pharm dp,
                       dim_whs ds
                 WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
                   AND fo.pharm_id = dp.pharmacy_id
                   AND fo.whs_id = ds.whs_id ' || v_restricao || '
                 order by fo.order_id desc ';
      ELSE
        return 'SELECT fo.pharm_id, fo.order_id, dp.pharmacy_dsc, ds.whs_dsc,
                       to_CHAR(fo.order_dt,''DD/MM/YYYY'') AS ORDER_DT,
                       fo.order_obs, acronyms (fo.order_stat, ''ORDER.STATUS'') as STATUS,
                       fo.order_stat,
                       (select dkap.emp_name from dim_kap dkap
                         where DKAP.PHARM_ID = FO.PHARM_ID AND DKAP.sel_pharm = ''Y'') AS KAP
                  FROM fact_order fo,
                       dim_pharm dp,
                       dim_whs ds,
                       (SELECT pharmacyid
                          FROM dim_pharm ' || v_restricao_dim ||' ) dim_phar
                 WHERE upper(order_id) LIKE NVL(upper('''||:p501_order_id||'''),''%'')
                   AND fo.pharm_id = dp.pharmacy_id
                   AND fo.whs_id = ds.whs_id
                   and FO.ORDER_LOGISTICS = ''N''
                   AND dim_phar.pharmacy_id = fo.pharm_id
                 order by fo.order_id desc';
      END IF;
    END;
    The two are quite similar; only the second has a condition to show only the pharmacies associated with the current user, while the first one shows all the pharmacies.
    Thanks in advance.
    Bruno

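    One thing worth noting about both branches above: the upper(order_id) LIKE ... predicate defeats a plain index on order_id. A function-based index matching the expression is a common fix (a sketch; the index name is made up, and it only helps when the search term has no leading wildcard):
    create index fact_order_upper_ord_idx on fact_order (upper(order_id));
    After gathering statistics on fact_order, searches such as upper(order_id) LIKE 'ABC%' can use the index instead of scanning the whole fact table.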

  • Improve hierarchical query speed using 'start with' and 'connect by prior'

    Hi
    The query within the 'explain' below runs in about a second, and I need to improve it.
    There are indexes on both the child_id and the parent_id.
    The total number of rows in the PRM_COMPONENTS table is 120000.
    I'm working on 'Oracle Database 10g Release 10.2.0.4.0 - 64bit Production' on Linux.
    I've included the explain plan below.
    Any suggestions would be appreciated.
    Thanks
    EXPLAIN PLAN FOR
    SELECT substr(SYS_CONNECT_BY_PATH(usage_title, '|'), 2, instr( SYS_CONNECT_BY_PATH(usage_title, '|'), '|', -1) -2 )
    FROM prm_components p
    WHERE LEVEL > 1 and usage_id = 10301100
    START WITH parent_usage_id is null
    CONNECT BY PRIOR usage_id = parent_usage_id;
    select * from table(dbms_xplan.display);
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 6 | 174 | 4 (0)|
    |* 1 | FILTER | | | | |
    |* 2 | CONNECT BY WITH FILTERING | | | | |
    |* 3 | TABLE ACCESS FULL | PRM_COMPONENTS | 69526 | 3937K| 2468 (1)|
    | 4 | NESTED LOOPS | | | | |
    | 5 | CONNECT BY PUMP | | | | |
    | 6 | TABLE ACCESS BY INDEX ROWID| PRM_COMPONENTS | 6 | 174 | 4 (0)|
    |* 7 | INDEX RANGE SCAN | PRM_PARENT_USAGE_ID_I | 2 | | 1 (0)|
    Predicate Information (identified by operation id):
    1 - filter(LEVEL>1 AND "USAGE_ID"=10301100)
    2 - access("PARENT_USAGE_ID"=PRIOR "USAGE_ID")
    3 - filter("PARENT_USAGE_ID" IS NULL)
    7 - access("PARENT_USAGE_ID"=PRIOR "USAGE_ID")
    Note
    - 'PLAN_TABLE' is old version

    Hi
    I've resolved the issue by other means but here is the description of the table anyways.
    USAGE_ID     NOT NULL     NUMBER
    PARENT_USAGE_ID          NUMBER
    PRODUCT_CODE     NOT NULL     VARCHAR2(12)
    PRINT_OR_ONLINE     NOT NULL     CHAR(1)
    SLP_ID          VARCHAR2(24)
    RELEASE_NAME          VARCHAR2(80)
    USAGE_TITLE          VARCHAR2(255)
    ENT_USAGE_TITLE          VARCHAR2(255)
    TRANS_TITLE          VARCHAR2(255)
    REVISION_TYPE          VARCHAR2(8)
    ACTIVE          CHAR(1)
    MARKED_FOR_DELETION          CHAR(1)
    CREATED_DT          DATE
    CREATED_BY          VARCHAR2(80)
    UPDATED_DT          DATE
    UPDATED_BY          VARCHAR2(80)
    PLANNING_COMMENTS          VARCHAR2(2000)
    OUTPUT_FILENAME          VARCHAR2(200)
    TRANSFORMER_ID          NUMBER(38)
    START_PAGE          VARCHAR2(8)
    START_PAGE_NUM          NUMBER
    END_PAGE          VARCHAR2(8)
    END_PAGE_NUM          NUMBER
    VOLUME          VARCHAR2(8)
    SORT_ORDER          NUMBER
    PRIORITY          NUMBER
    XREF_BLIND_ENTRY          CHAR(1)
    SPECIAL_CATEGORY          VARCHAR2(20)
    TO_BE_REVISED          CHAR(1)
    EDITOR          VARCHAR2(80)
    DUE_DT          DATE
    POSTED_DT          DATE
    LOGICAL_UOI_ID     NOT NULL     VARCHAR2(40)
    PHYSICAL_UOI_ID     NOT NULL     VARCHAR2(40)
    EDIT_APPROV_UOI_ID          VARCHAR2(40)
    EDIT_APPROV_BY          VARCHAR2(80)
    EDIT_APPROV_DT          DATE
    FINAL_APPROV_UOI_ID          VARCHAR2(40)
    FINAL_APPROV_BY          VARCHAR2(80)
    FINAL_APPROV_DT          DATE
    PHOTO_APPROV_UOI_ID          VARCHAR2(40)
    PHOTO_APPROV_BY          VARCHAR2(80)
    PHOTO_APPROV_DT          DATE
    RIGHTS_APPROV_UOI_ID          VARCHAR2(40)
    RIGHTS_APPROV_BY          VARCHAR2(80)
    RIGHTS_APPROV_DT          DATE
    LAYOUT_APPROV_UOI_ID          VARCHAR2(40)
    LAYOUT_APPROV_BY          VARCHAR2(80)
    LAYOUT_APPROV_DT          DATE
    BLUES_APPROV_UOI_ID          VARCHAR2(40)
    BLUES_APPROV_BY          VARCHAR2(80)
    BLUES_APPROV_DT          DATE
    LAST_PUB_ONLINE_DT          DATE
    LAST_PUB_PRINT_DT          DATE
    BLIND_ENTRY_ON_DT          DATE
    BLIND_ENTRY_OFF_DT          DATE
    DELIVERY_APPROV_UOI_ID          VARCHAR2(40)
    DELIVERY_APPROV_BY          VARCHAR2(80)
    DELIVERY_APPROV_DT          DATE
    APPROVAL_STATUS          VARCHAR2(40)
    CHANGE_SINCE_LAST_DELIVERY          CHAR(1)
    USAGE_COMMENTS          VARCHAR2(2000)
    LEXILE_CODE          VARCHAR2(18)
    SERIES          VARCHAR2(8)
    USAGE_TITLE_TMP          VARCHAR2(255)
    ENT_USAGE_TITLE_TMP          VARCHAR2(255)
    WORD_COUNT          VARCHAR2(10)
    READ_LEV          VARCHAR2(7)
    GRADES          VARCHAR2(80)
    DELIVERY_TYPE     NOT NULL     CHAR(1)
    METADATA_APPROVAL_STATUS          VARCHAR2(40)
    METADATA_APPROVAL_BY          VARCHAR2(80)
    METADATA_APPROVAL_DT          DATE
    RESOURCE_FLAG          CHAR(1)
    STZ_FLAG          CHAR(1)
    RESOURCE_TYPE_CODE          VARCHAR2(16)
    ASSET_DESCRIPTION          VARCHAR2(2000)
    ROLE_CODE          VARCHAR2(16)
    PROGRAMS_DATA          VARCHAR2(256)
    TIME_TO_COMPLETE          VARCHAR2(32)
    ENTITLEMENTS_DATA          VARCHAR2(256)
    ISBN_10          VARCHAR2(32)
    ISBN_13          VARCHAR2(32)
    MFG_ITEM_NO          VARCHAR2(256)
    AR          CHAR(1)
    SRC          CHAR(1)
    SRC_POINTS          NUMBER
    AUTHORS          VARCHAR2(320)
    SEARCH_STRINGS          VARCHAR2(2000)
    PATH_SLP_ID          VARCHAR2(256)
    PATH_GTC          VARCHAR2(256)
    PATH_TITLE          VARCHAR2(2560)
    GRL          VARCHAR2(8)
    COMMON_CORE          CHAR(1)
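    For reference, a common rewrite for this kind of single-value lookup is to walk the tree bottom-up from the target row instead of top-down from every root, so only the handful of ancestor rows are visited (a sketch of the general technique, not necessarily how the issue above was resolved):
    -- start at the target row and follow parent links upward;
    -- each level touches one row via the usage_id index
    SELECT SYS_CONNECT_BY_PATH(usage_title, '|') AS path_leaf_to_root
    FROM prm_components
    START WITH usage_id = 10301100
    CONNECT BY PRIOR parent_usage_id = usage_id;
    The path accumulates leaf-to-root, so the string must be reversed (or assembled in the application) if root-to-leaf order matters.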

  • How to speed up the query time after adding order by

    Hi,
    In a Forms 6.0 front end against an 8i server, I put an "order by date
    column desc" in the block property, so when I hit the query button the
    screen brings up the most recent data instead of the oldest. The problem
    is that this slows down the query, because the "order by" does the sorting
    in the user's temp tablespace. The base table being queried also contains
    two varchar2(1500) columns, and that also slows down the query.
    I am wondering if there is any other way to bring up the most recent
    records first without slowing down the query speed?
    Any thoughts will be helpful and appreciated.
    Thanks!
    Tony

    anthony hsu (guest) wrote:
    : Hi,
    : In a Forms 6.0 front end against an 8i server, I put an "order by date
    : column desc" in the block property, so when I hit the query button the
    : screen brings up the most recent data instead of the oldest. The problem
    : is that this slows down the query, because the "order by" does the sorting
    : in the user's temp tablespace. The base table being queried also contains
    : two varchar2(1500) columns, and that also slows down the query.
    : I am wondering if there is any other way to bring up the most recent
    : records first without slowing down the query speed?
    : Any thoughts will be helpful and appreciated.
    : Thanks!
    : Tony
    What I have done is use hints to force the use of an index in
    descending order. The part of the following example that
    pertains to your question is the index_desc hint. Syntax and
    suggestions for using hints are in the Oracle documentation.
    The column you want to order desc by would have to be in an
    index. The following example creates a view, which you might not
    need to do.
    create or replace view v_sample_result as
    select
    /*+
    first_rows
    index ( sa )
    index_desc ( t c_test_altk )
    index ( r c_result_pk )
    index ( lt )
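    The example above is cut off, so here is a self-contained sketch of the same technique (table, alias, index, and column names are invented for illustration):
    -- walk a date index in descending order so the newest rows
    -- come back first, without a separate sort step
    select /*+ first_rows index_desc(o ord_date_idx) */
           order_id, order_date
      from orders o
     where order_date <= sysdate;
    The hint only applies if ord_date_idx is an index on order_date; the rows are then returned in descending index order, which is what makes the newest records appear first.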

  • Slow query results for simple select statement on Exadata

    I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k rows every 6 seconds or so.
    I ran an execution plan and it returns just this:
    Plan
    SELECT STATEMENT  ALL_ROWS  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
         1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
    I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
    The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?

    k1ng87 wrote:
    I've noticed the same querying speed using Toad (export to CSV)
    That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), letting Toad format the data into CSV (a little processing bottleneck), and then writing the data to another hard disk (more disk I/O = more bottleneck).
    I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
    ...and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
    Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
    Is that even possible, as our system resides off site, so the traffic is going through multiple networks?
    Ouch... yes, that could certainly be responsible.
    I don't think it's the network though, because when I run both at the same time, they both are still querying at about 10k rows every 6 seconds.
    I don't think your performance measuring is accurate. What happens if you actually build the cube on Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?
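    One way to separate server speed from client and network overhead, in the spirit of the reply above, is to time the query in SQL*Plus with the output suppressed and a large fetch size (the table name comes from the post; the settings are the point):
    set timing on
    set arraysize 5000
    set autotrace traceonly statistics
    select * from DMSN.DS3R_FH_1XRTT_FA_LVL_KPI;
    With autotrace traceonly, the rows are fetched but never rendered, so the elapsed time reflects the database and the network rather than the client formatting 33 million rows.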

  • Straight SQL query results are considerably faster than a FUNCTION...

    Hey all,
    First time new poster, long time reader.
    I have a function that calls a specific query with 2 variables. When I execute the query and hardcode the values, it executes much quicker than when the function is called and the values are passed. I'll give some examples below... here's the full function:
    FUNCTION get_golf_gameplay_completed (
       p_cabinet_id_in   NUMBER,
       p_days_back_in    NUMBER)
       RETURN ref_cursor
    IS
       cu_ref_cur   ref_cursor;
    BEGIN
       OPEN cu_ref_cur FOR
          SELECT golf_game_data.cabinet_id, golf_game_data.player_num,
                 TO_CHAR (global_utility_pkg.cnvrt_gmt_to_local_time_by_cab
                             (golf_game_data.cabinet_id,
                              golf_game_data.server_start_time,
                              golf_game_data.server_start_time),
                          'MM/DD/YYYY HH:MI:SS AM') cab_local_event_timestamp,
                 TO_CHAR (global_utility_pkg.cnvrt_gmt_to_local_time_by_cab
                             (golf_game_data.cabinet_id,
                              golf_game_data_end_time.server_end_time,
                              golf_game_data_end_time.server_end_time),
                          'MM/DD/YYYY HH:MI:SS AM') cab_local_session_end,
                 TO_CHAR (global_utility_pkg.cnvrt_gmt_to_local_time_by_cab
                             (golf_game_data.cabinet_id,
                              golf_game_data_end_time.server_end_time,
                              golf_game_data_end_time.server_end_time),
                          'HH:MI:SS AM') cab_local_session_end_time,
                 TO_CHAR (global_utility_pkg.cnvrt_gmt_to_local_time_by_cab
                             (golf_game_data.cabinet_id,
                              golf_game_data_end_time.server_end_time,
                              golf_game_data_end_time.server_end_time),
                          'MM/DD/YYYY') cab_local_server_end_date
            FROM golf_game_data INNER JOIN golf_game_data_end_time
                 ON golf_game_data.cabinet_id = golf_game_data_end_time.cabinet_id
                AND golf_game_data.player_num = golf_game_data_end_time.player_num
                AND golf_game_data.session_id = golf_game_data_end_time.session_id
           WHERE golf_game_data.cabinet_id = p_cabinet_id_in
             AND golf_game_data_end_time.cabinet_id = p_cabinet_id_in
             AND golf_game_data_end_time.server_end_time IS NOT NULL
             AND golf_game_data.server_start_time >= SYSDATE - p_days_back_in
           ORDER BY golf_game_data_end_time.server_end_time;
       RETURN cu_ref_cur;
    END get_golf_gameplay_completed;
    The two bind inputs, p_cabinet_id_in and p_days_back_in, are the variables in question.
    This is how I've been executing the function from the SQL*Plus command line:
    select tech_report_package.get_golf_gameplay_completed(122477,10) from dual;
    The speed differences between running the raw query versus the function as a whole have been driving me nuts. For the cabinet IDs (the first variable; in the case above, 122477) that have a high amount of activity in the GOLF_GAME_DATA table, the query itself returns results within a few seconds (usually under 10 seconds even for the most bloated data). However, the function, when called with the same IDs, takes minutes; as long as 3 or 4 minutes in some cases.
    I've run various tests in a cloned production environment, and we've isolated any possible issues with networking, data contention, and partitioning (the GOLF_GAME_DATA table is not partitioned); the issue is simply query speed versus function speed, with no external influences.
    What baffles me is that the function is very basic: all it does is execute the query and return the results, just like the query itself, the only difference being the cursor that's opened.
    So, getting back to square one, my question was: why would any query run much quicker than a function that does nothing but call the same query?
    Any thoughts?
    Thanks
    -Bob

    No change after using that parameter in the session.
    Any other ideas? I was playing with results with one of my coworkers and my manager, and we were unable to produce any results that were, well, reproducible. Even after shutting the database down and starting it back up, it seems that the results from some of the example queries we were running were being cached on the SAN which hosts the database's files, or cached and restored after the instance was started up.
    FYI - we test on an instance that mimics production. It's a clone that's produced once a day from the production database and has no user contention. We bring it up and are able to test things like we would in production, with no other users on the instance at the same time. We are still seeing slow results.
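    A quick way to confirm or rule out a bind-related plan difference from the SQL*Plus command line is to run a simplified version of the query with real bind variables (a sketch using column names from the function, with the values from the example call earlier in the thread):
    variable cab number
    variable days number
    exec :cab := 122477
    exec :days := 10
    select count(*)
      from golf_game_data
     where cabinet_id = :cab
       and server_start_time >= sysdate - :days;
    If the bind version is slow while the literal version is fast, the culprit is the optimizer's handling of binds (e.g. bind peeking), not the PL/SQL wrapper.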

  • Query on Views is slow

    Hi
    In OC, I understand each question group corresponds to a view, and we can query it using SQL. I have tried to query one record (by pt, patient ID) and it takes quite a while to respond. From the explain plan, it can be seen that it takes a long path to return a row.
    When I create a temp table with all the records from this view, the same record comes back in less than a split second. As such, may I know if there is any way to improve the query speed on such views?
    Looking for some advice, please.
    Thank you
    Boon Yiang

    One thing that you need to do to ensure good query performance in OC is to make sure the database has statistics created for the OC tables via the "analyze table" command. The cost-based query optimizer in Oracle relies on up-to-date table statistics to optimize queries. Table analysis should be performed regularly by the DBA. OC provides scripts to run the table analysis. They are in the $RXC_INSTALL directory and are named anarxctab.sql, anadestab.sql, and analrtab.sql.
    Bob Pierce
    Constella Group
    [email protected]
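    For instance, the two common forms look like this (the table name is illustrative; the OC-supplied ana*.sql scripts cover the real set of tables):
    -- old-style statistics collection, as the scripts do
    analyze table rxc.responses estimate statistics;
    -- or via the DBMS_STATS package, preferred on later releases
    exec dbms_stats.gather_table_stats(ownname => 'RXC', tabname => 'RESPONSES')
    Either way, the point is that the cost-based optimizer only picks good plans against the view once the underlying tables have current statistics.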

  • How does XDB optimize XML Query?

    I found the query speed of XDB is much slower than Berkeley XML DB.
    How does XDB optimize XML queries?
    Are there any documents on this subject?
    And can XDB create indexes on XMLType (e.g. an index on element/attribute values and/or a structural index)? If yes, how do I do that?

    lezhou had a valid question and asked:
    "I found the query speed of XDB is much slower than Berkeley XML DB"
    "How does XDB optimize XML Query?"
    These point to an "XML DB Concepts Guide", which does not yet exist.
    The procedures are explained, the methods are explained. If you enable event tracing as described in the XML DB Developer's Guide 10gR2, you will see statements in your trace file which tell you more about the XML DB architecture (and therefore let you deduce the performance impact) than the manual reveals.
    Another example:
    The xdbconfig.xsd file is neatly explained with regard to http-port and the like,
    but not what the implications are if you alter one of the other (unexplained) parameters.
    If you know the architecture (GROUND LEVEL), you can give a correct answer to the initial question "I found the query speed of XDB is much slower than Berkeley XML DB. How does XDB optimize XML Query?"
    The balanced tree index is constructed the same way (on the same theory) in Oracle and DB2, but apparently X is faster because, with the same buildup/architecture/database structure for both products and the same data, the same X --> value Y is better constructed and delivers better performance.
    apples = apples
    oracle xmldb = berkeley xmldb --> how can I test that o.apples = b.apples, and that under these circumstances o.apples are faster ;-)
    THEREFORE:
    "I have to disagree a little bit..." ("It speaks about all these in detail").
    Still waiting for the XML DB Concepts Guide / Administrators Guide / Performance Guide.
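    On the indexing half of the original question: one supported approach is a function-based index on an extracted scalar value (the table, column, and XPath below are invented for illustration):
    -- index the value of one element inside the XMLType column
    create index po_reference_idx
        on purchase_orders (extractvalue(xml_doc, '/PurchaseOrder/Reference'));
    A query whose WHERE clause uses the same extractvalue expression can then take a normal B-tree index path instead of reparsing every document.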

  • SLOW report performance with bind variable

    Environment: 11.1.0.7.2, Apex 4.01.
    I've got a simplified report page where the report runs slowly compared to running the same query in sqldeveloper. The report region is based on a pl/sql function returning a query. If I use a bind variable in the query inside apex it takes 13 seconds to run, and if I hard code a string it takes only a few hundredths of a second. The query returns one row from a table which has 1.6 million rows. Statistics are up-to-date and the columns in the joins and where clause are indexed.
    I've run traces using p_trace=YES from Apex for both the bind variable and hard coded strings. They are below.
    The sqldeveloper explain plan is identical to the bind variable plan from the trace, yet the query runs in 0.0x seconds in sqldeveloper.
    What is it about bind variable syntax in Apex that is causing the bad execution plan? Apex Bug? 11g bug? Ideas?
    tkprof output from Apex trace with bind variable is below...
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM(:P71_SEARCH_SOURCE1)))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          1         27           0
    Fetch        2     13.15      13.22      67694      72865          0           1
    total        4     13.15      13.23      67694      72866         27           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=72869 pr=67694 pw=0 time=0 us cost=29615 size=14255040 card=178188)
          1   FILTER  (cr=72869 pr=67694 pw=0 time=0 us)
          1    HASH JOIN RIGHT SEMI (cr=72865 pr=67694 pw=0 time=0 us cost=26308 size=14255040 card=178188)
          1     INDEX FAST FULL SCAN IDX$$_0A300001 (cr=18545 pr=13379 pw=0 time=0 us cost=4993 size=2937776 card=183611)(object id 68485)
    1696485     TABLE ACCESS FULL PERSONS (cr=54320 pr=54315 pw=0 time=21965 us cost=14958 size=108575040 card=1696485)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     HASH JOIN (RIGHT SEMI)
          1      INDEX   MODE: ANALYZED (FAST FULL SCAN) OF
                     'IDX$$_0A300001' (INDEX)
    1696485      TABLE ACCESS   MODE: ANALYZED (FULL) OF 'PERSONS' (TABLE)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       1276        0.00          0.16
      db file sequential read                       812        0.00          0.02
      direct path read                             1552        0.00          0.61
    ********************************************************************************
    Here's the tkprof output with a hard coded string:
    select p.master_id link, p.first_name||' '||p.middle_name||' '||p.last_name||' '||p.suffix personname,
    p.gender||' '||p.date_of_birth g_dob, p.master_id||'*****'||substr(p.ssn,-4) ssn, p.status status
    from persons p
    where
       p.person_id in (select ps.person_id from person_systems ps where ps.source_key  like  LTRIM(RTRIM('0b')))
    order by 1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.04          0          0          0           0
    Execute      1      0.00       0.00          0          0         13           0
    Fetch        2      0.00       0.00          0          8          0           1
    total        4      0.02       0.04          0          8         13           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62  (ODPS_PRIVACYVAULT)   (recursive depth: 1)
    Rows     Row Source Operation
          1  SORT ORDER BY (cr=10 pr=0 pw=0 time=0 us cost=9 size=80 card=1)
          1   FILTER  (cr=10 pr=0 pw=0 time=0 us)
          1    NESTED LOOPS  (cr=8 pr=0 pw=0 time=0 us)
          1     NESTED LOOPS  (cr=7 pr=0 pw=0 time=0 us cost=8 size=80 card=1)
          1      SORT UNIQUE (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1       TABLE ACCESS BY INDEX ROWID PERSON_SYSTEMS (cr=4 pr=0 pw=0 time=0 us cost=5 size=16 card=1)
          1        INDEX RANGE SCAN IDX_PERSON_SYSTEMS_SOURCE_KEY (cr=3 pr=0 pw=0 time=0 us cost=3 size=0 card=1)(object id 68561)
          1      INDEX UNIQUE SCAN PK_PERSONS (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=1)(object id 68506)
          1     TABLE ACCESS BY INDEX ROWID PERSONS (cr=1 pr=0 pw=0 time=0 us cost=2 size=64 card=1)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (ORDER BY)
          1    FILTER
          1     NESTED LOOPS
          1      NESTED LOOPS
          1       SORT (UNIQUE)
          1        TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                       'PERSON_SYSTEMS' (TABLE)
          1         INDEX   MODE: ANALYZED (RANGE SCAN) OF
                        'IDX_PERSON_SYSTEMS_SOURCE_KEY' (INDEX)
          1       INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 'PK_PERSONS'
                      (INDEX (UNIQUE))
          1      TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                     'PERSONS' (TABLE)

    Patrick, interesting insight. Thank you.
    The optimizer must be peeking at my bind variables with its eyes closed. I'm the only one testing and I've never passed %anything as a bind value. :)
    Here's what I've learned since my last post:
    I don't think that sqldeveloper is actually using the explain plan it says it is. When I run explain plan in sqldeveloper (with a bind variable) it shows me the exact same plan as Apex with a bind variable. However, when I run autotrace in sqldeveloper, it takes a path that matches the hard coded values, and returns results in half a second. That autotrace run is consistent with actually running the query outside of autotrace. So, I think either sqldeveloper isn't really using bind variables, OR it is using them in some other way that Apex does not, or maybe optimizer peeking works in sqldeveloper?
    Using optimizer hints to tweak the plan helps. I've tried both /*+ FIRST_ROWS */ and /*+ index(ps pk_persons) */ and both drop the query to about a second. However, I'm loath to use hints because of the very dynamic nature of the query (and Tom Kyte doesn't like them either). The hints may end up hurting other variations on the query.
    I also tested the query by wrapping it in a select count(1) from ([long query]) and testing the performance in sqldeveloper and in Apex. The performance in that case is identical with both bind variables and hard coded variables for both Apex and SqlDeveloper. That to me was very interesting and I went so far as to set up two bind variable report regions on the same page. One region wrapped the long query with select count(1) from (...) and the other didn't. The wrapped query ran in 0.01 seconds, the unwrapped took 15ish seconds with no other optimizations. Very strange.
    To get performance up to acceptable levels I have changed my function returning query to:
    1) Set the equality operator to "=" for values without wildcards and "like" for user input with wildcards. This makes a HUGE difference IF no wildcard is used.
    2) Insert a /*+ FIRST_ROWS */ hint when users chose the column that requires the sub-query. This obviously changes the optimizer's plan and improves query speed from 15 seconds to 1.5 seconds even with wildcards.
    I will NOT be hard coding any user supplied values in the query string. As you can probably tell by the query, this is an application where sql injection would be very bad.
    Jeff, regarding your question about "like '%' || :P71_SEARCH_SOURCE1 || '%'": I've found that putting wildcards around values, particularly at the beginning, negates any indexing on the column in question and slows performance even more.
    I'm still left wondering if there isn't something in Apex that is breaking the optimizer "peeking" that Patrick describes. Perhaps something in the way it switches contexts from apex_public_user to the workspace schema?
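    A sketch of the operator switch described in point 1) above, in the same function-returning-query style (the item and table names come from the thread; the rest is illustrative):
    declare
       l_op varchar2(4);
    begin
       -- use equality when the user typed no wildcard, LIKE otherwise
       if instr(:P71_SEARCH_SOURCE1, '%') > 0 then
          l_op := 'like';
       else
          l_op := '=';
       end if;
       return 'select p.master_id from persons p where p.person_id in '
           || '(select ps.person_id from person_systems ps '
           || 'where ps.source_key ' || l_op || ' ltrim(rtrim(:P71_SEARCH_SOURCE1)))';
    end;
    With '=' the optimizer can drive straight through the IDX_PERSON_SYSTEMS_SOURCE_KEY index, which is what makes the difference described above.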

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load got terminated with the following error: Socket read timed out.
    In the Studio properties file, I typed in oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode is going to provide a much faster alternative to the non-streaming mode. What I'd like to know is which Essbase settings I can change (either in Essbase or Studio server) in order to speed up my data load. I am loading into a Block Storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24B. Assuming that in my real application the number of blocks created will be at least 1000 times more than this, I need to make some changes to the settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196,
    (sorry, I no longer remember my system number here)
    Before any optimisation attempts in the Essbase (also Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. creating in your DB source schema a view from your SQL statement (the one behind your data load rule);
    2. querying against this view with any GUI (SQL Developer, TOAD etc.) to fetch all rows and measuring the time it takes to complete. Also count the affected (returned) number of rows, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very fast.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast (enough for you, or your DBA says it is the best you can achieve) at the database should you move your effort over to Essbase.
    One hint in addition:
    We have often had problems when using views for data load (not only performance but also other strange behavior). That's the reason I prefer to set up on (persistent) tables directly.
    Just keep in mind: if nothing helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
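    A minimal sketch of steps 1 and 2 above (the view name and SQL are placeholders for the statement behind your load rule):
    -- 1. wrap the load rule's SQL in a view
    create or replace view v_essbase_load as
    select f.*, d.member_name
      from fact_table f
      join dim_table d on d.dim_key = f.dim_key;
    -- 2. fetch everything and time it
    set timing on
    select count(*) from v_essbase_load;
    Note that count(*) avoids client rendering time but may also read less data than the full select, so for the comparison Andre describes you should also fetch all rows to the end in your GUI.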

  • Reduce time consumption of a bulk insert statement

    Hi All,
    I am trying to insert a large number of records into a Global Temporary Table (GTT) using a query like the one below. The SQL statement is generated as a string and executed as dynamic SQL.
    INSERT INTO
    my_table ( col1, col2, col3, col4)
    SELECT c1, c2, c3, c4
    FROM tab1 t1,
    tab2 t2,
    tab3 t3
    WHERE <<dynamically generated where clause>>
    UNION ALL
    SELECT c1, c2, c3, c4
    FROM tab4 t4,
    tab5 t5,
    tab6 t6
    WHERE <<dynamically generated where clause>>
    UNION ALL
    SELECT c1, c2, c3, c4
    FROM tab4 t4,
    tab5 t5,
    tab6 t6
    WHERE <<dynamically generated where clause>>
    UNION ALL
    SELECT c1, c2, c3, c4
    FROM tab4 t4,
    tab5 t5,
    tab6 t6
    WHERE <<dynamically generated where clause>>
    The problem is that it takes a considerable amount of time (25-30 seconds) to write the resulting data set into the GTT (my_table). I have checked the SELECT statement above (without the INSERT), and it returns the result set in a few seconds. Therefore I assume it's the INSERT that takes the time. (The SELECT statement returns around 75,000+ records.) The GTT consists of 8 columns, 6 of which are part of the primary key.
    Are there any other mechanisms that I can use to efficiently insert a large amount of data into the table? I really appreciate all of your comments and suggestions.
    Best Regards,
    Nipuna

    Hi,
    I'm not sure about your query speed :) Don't hold it against me, but sometimes users just run the SELECT statement and wait until the first rows are returned (e.g. using TOAD, which fetches the first 500 rows by default) and are convinced that this is the response time for the whole set. You have to navigate to the last record to see the real response time. But probably this is not the case here. When you insert a large set of data, consider moving such statements from Forms to the database (you avoid the data round trip between Forms and the DB). If your insert still takes too much time, try to trace the session that runs your statement and (if possible) monitor your database resource usage. If you're not able to use dbms_trace, you can just look at the system V$SESSION views. The following two views can be helpful for investigating the most time-consuming waits, and can help you understand the reason for the slowness.
    select * from v$session_wait
    select * from v$session_wait_history
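    For the insert itself, one mechanism commonly tried in exactly this situation is a direct-path insert, which writes above the high-water mark and bypasses much of the conventional-path overhead (a sketch against the my_table GTT from the post; whether it helps depends on the GTT definition and its 6-column primary key):
    INSERT /*+ APPEND */ INTO my_table (col1, col2, col3, col4)
    SELECT c1, c2, c3, c4
      FROM tab1 t1, tab2 t2, tab3 t3
     WHERE t1.id = t2.id; -- stands in for the dynamically generated where clause
    Since maintaining a 6-column primary key index for 75,000+ rows is itself a large part of the insert cost, it is also worth asking whether the GTT really needs that constraint.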

  • Performance issues when upgrading from dbxml 2.3.8 to 2.4.13

    I have recently upgraded my dbxml distribution from 2.3.8 to 2.4.13 (including the latest patch). I have noticed that many of the queries I issue take longer than they used to. Specifically, if the query is complex, it takes 10 times as long as it did with 2.3.8. Also, I have noted a difference in speed between issuing the query via the dbxml shell (query takes ~45 seconds) vs issuing it through a C++ interface (query takes 4+ minutes). The query I am issuing is:
    for $i in collection('projectDatabase.dbxml')/project
    where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and ((obsblock/reqRaCoverage/@low <= 2.61799 and obsblock/reqRaCoverage/@low >= 0.654498) or (obsblock/reqRaCoverage/@high <= 2.61799 and obsblock/reqRaCoverage/@high >= 0.654498) or (0.654498 <= obsblock/reqRaCoverage/@high and 0.654498 >= obsblock/reqRaCoverage/@low) or ((obsblock/reqRaCoverage/@high < obsblock/reqRaCoverage/@low) and ((0.654498 <= obsblock/reqRaCoverage/@high) or (2.61799 <= obsblock/reqRaCoverage/@high) or (0.654498 >= obsblock/reqRaCoverage/@low) or (2.61799 >= obsblock/reqRaCoverage/@low)))) and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test']
    return $i
    if I simplify the query by removing references to the obsblock/reqRaCoverage/@low and obsblock/reqRaCoverage/@high nodes:
    for $i in collection('projectDatabase.dbxml')/project
    where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test']
    return $i
    it returns much faster. I am wondering if this is an issue with optimization of the complex query?
    The database is fully indexed and was re-indexed when I upgraded to 2.4.13

    Doug,
    I've got your container. It's a wholedoc container with document indexes (a lot of them). Here's what I see for query speeds for your large, slow query using the dbxml shell.
    Query is:
    query "for $i in collection('pdb.dbxml')/project where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and ((obsblock/reqRaCoverage/@low <= 2.61799 and obsblock/reqRaCoverage/@low >= 0.654498) or (obsblock/reqRaCoverage/@high <= 2.61799 and obsblock/reqRaCoverage/@high >= 0.654498) or (0.654498 <= obsblock/reqRaCoverage/@high and 0.654498 >= obsblock/reqRaCoverage/@low) or ((obsblock/reqRaCoverage/@high < obsblock/reqRaCoverage/@low) and ((0.654498 <= obsblock/reqRaCoverage/@high) or (2.61799 <= obsblock/reqRaCoverage/@high) or (0.654498 >= obsblock/reqRaCoverage/@low) or (2.61799 >= obsblock/reqRaCoverage/@low)))) and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test'] return $i"
    2.3.11 -- 2.4 seconds
    2.4.13 -- 60 seconds
    That is a significant slowdown and it needs further investigation but is almost certainly related to the fact that it is wholedoc storage. The optimizer appears to be choosing unwisely. This will take a few days to work out.
    I also changed the container to be node storage with node indexes and the times went to:
    2.3.11 -- 13 sec (slower than wholedoc)
    2.4.13 -- 40 sec (faster than wholedoc)
    I do know why your application is slower than the dbxml shell. There is a new flag that should almost always be used for wholedoc container queries -- DBXML_DOCUMENT_PROJECTION. Add that to your query execution.
    Another thing -- query preparation is a bit slower in 2.4, so you should use prepared queries whenever possible to amortize that cost.
    Regards,
    George

  • Regarding Technical Setting of Tables.

    Hey Hi Experts,
    I have the following issues within my implementation project.
    1. On the PRD server the data is huge. When I look at transaction tables like VEPO or LIPS, I see that the size category specified in the technical settings is very low compared to the amount of data in them (in my case VEPO is set to category 4, i.e. 81,000 to 320,000 records, but it far exceeds that limit; the current count is 6 lakh). Will that hurt my report performance?
    2. There is a radio button group which is set to "No buffering allowed" (Caution..!).
    I need your suggestion on the following: do I need to adjust the technical settings of the tables if my data size exceeds the specified record-count limit?
    If I need to set it, what type of buffering should I choose?
    Please, experts... :)

    1) When you activate a table for the first time, the system needs to know how much space to reserve for it. Your definition tells the system the maximum size in bytes of each record. All the system needs beyond that is a ballpark figure for how many records your table is likely to have, and this is what you enter as the size category. It does not mean that your table cannot exceed that size. When your table is about to exceed the specified size, the system needs to allocate more space; to know how much to allocate, it again looks at this technical setting. In short, the category indicates to the system the amount of disc space to allocate/reallocate for your table.
    Will this affect your query speed? No.
    2) Buffering determines how many records are transferred from the database to memory. If you use full buffering, any query against the table causes all of its records to be transferred into the buffer. This setting is acceptable for smaller tables; you would not want to do this with larger tables because you will not have enough memory to read all the data. Partial buffering transfers all the records that qualify by part of the primary key. For example, with VBAP (the sales order line table), if you set the table to partial buffering and you read one sales order (VBELN), the system automatically transfers the data of all the lines (POSNR) of that sales order from the database to the buffer.
    Buffering helps improve performance, but careful thought needs to go into this, and besides, it is not advisable to repair any SAP tables.
    The best solution for large tables is archiving. The SAP R/3 system should not be used to view reports on historic data; we have the BW system for that. The BW system is designed to be flexible with data and can quickly give you the historical reports you desire. It is a best practice to archive large tables every 2 years and use the BW system for historical data analysis.

  • BUSINESS INTELLIGENCE ENHANCEMENTS

    Product : SQL*Plus
    Date written : 2004-05-17
    ==================================
    BUSINESS INTELLIGENCE ENHANCEMENTS
    ==================================
    PURPOSE
    This note summarizes the Business Intelligence enhancements in Oracle9i.
    Explanation
    The following topics are covered:
    - Enhanced Oracle9i analytical functions
    - Using grouping sets
    - Creating SQL statements with the new WITH clause
    Example
    1. Enhanced Oracle9i analytical functions
    1) Inverse percentile functions
    : Used to find the value corresponding to a given percentile.
    percentile_disc : returns the discrete value closest to the specified percentile.
    percentile_cont : computes a continuous percentile using linear interpolation.
    ex) Find the discrete value closest to the 50th percentile of quantity sold per distribution channel in November 1999.
    select c.channel_desc, avg(s.quantity_sold), percentile_disc(0.5)
    within group
    (order by s.quantity_sold desc) percentile_50
    from sales s, channels c
    where c.channel_id = s.channel_id
    and time_id between '01-NOV-1999' and '30-NOV-1999'
    group by c.channel_desc;
    2) What-if rank and distribution functions
    : Used to find what rank or percentage a value would hold if it were added to the data; a form of what-if analysis.
    RANK : rank within the group
    DENSE_RANK : rank with duplicate values excluded
    PERCENT_RANK
    CUME_DIST
    ex) If a new hire were paid $10,000, where would that salary rank within each department?
    select department_id, round(avg(salary)) avg_salary,
    rank(10000) within group (order by salary desc) rank,
    dense_rank(10000) within group (order by salary desc) dense
    from employees
    group by department_id ;
    3) FIRST/LAST aggregate functions
    : Used to return the first or last value of each group.
    ex) For each manager, find the salaries of the employees receiving the lowest and the highest commission.
    select manager_id,
    min(salary) keep (dense_rank first order by commission_pct)
    as low_comm,
    max(salary) keep (dense_rank last order by commission_pct)
    as high_comm
    from employees
    where commission_pct is not null
    group by manager_id;
    4) WIDTH_BUCKET function
    : WIDTH_BUCKET is similar to NTILE in Oracle8i. The difference is that NTILE simply divides all the values into a given number of buckets, while WIDTH_BUCKET lets you set the low and high boundary values.
    Given a table of exam scores, for example, you can use it to check the distribution of scores:
    WIDTH_BUCKET(expression, 0, 100, 10)
    creates buckets 10 points wide and returns the bucket each score falls into.
    ex) Low boundary value = 3000
    High boundary value = 13000
    Number of buckets = 5
    select last_name, salary ,
    width_bucket(salary,3000,13000,5) as sal_hist
    from employees ;
    5) Grouping sets
    : Used in the GROUP BY clause to compute subtotals, including subtotals at specific levels only.
    2. Use grouping sets
    1) For the three groupings below, compute the values for products 10, 20 and 45 on December 1 and 2, 1999:
    Time, Channel, Product
    Time, Channel
    Channel, Product
    select time_id, channel_id, prod_id, round(sum(quantity_sold)) as cost
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by grouping sets
    ((time_id,channel_id,prod_id),(time_id,channel_id),(channel_id,prod_id)) ;
    2) GROUPING SETS vs. CUBE and ROLLUP (see bul#11914)
    With CUBE or ROLLUP you cannot request only the groups you need, but in 9i, as in 1) above, you can see exactly the groupings you want.
    select time_id, channel_id, prod_id,
    round(sum(quantity_sold)) as quantity_sum
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by cube(time_id, channel_id, prod_id);
    3) GROUPING SETS vs UNION ALL
    Using UNION ALL instead of a GROUPING SETS clause scans the table more times and is more awkward to write.
    The two statements below return the same result.
    select time_id, channel_id, prod_id,
    round(sum(quantity_sold)) as quantity_sum
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by grouping sets (time_id, channel_id, prod_id);
    select to_char(time_id,'DD-MON-YY'),
    round(sum(quantity_sold)) as quantity_sum
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by time_id
    union all
    select channel_id ,
    round(sum(quantity_sold)) as quantity_sum
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by channel_id
    union all
    select to_char(prod_id) ,
    round(sum(quantity_sold)) as quantity_sum
    from sales
    where (time_id = '01-DEC-1999' or time_id = '02-DEC-1999')
    and prod_id in (10,20,45)
    group by prod_id;
    4) Composite Columns
    For the three groupings below, compute the values for products 10, 20 and 45 on December 1 and 2, 1999:
    Product
    Product, Channel,Time
    Grand Total
    select prod_id, channel_id, time_id
    , round(sum(quantity_sold)) as sales
    from sales
    where (time_id='01-DEC-1999' or time_id='02-DEC-1999')
    and prod_id in (10,20,45)
    group by rollup(prod_id,(channel_id, time_id))
    3. Create SQL statements with the new WITH clause
    Used when the same query block is referenced more than once in a complex query. The query block's result is stored in the user's temporary tablespace, so a performance improvement can be expected.
    1) Find the departments whose total salary is greater than 1/8 of the company-wide total salary.
    with
    summary as (
    select department_name, sum(salary) as dept_total
    from employees, departments
    where employees.department_id = departments.department_id
    group by department_name)
    select department_name, dept_total
    from summary
    where dept_total > (select sum(dept_total) * 1/8 from summary)
    order by dept_total desc ;
    2) In earlier versions, 1) was implemented as follows.
    select department_name, sum(salary) as dept_total
    from employees,departments
    where employees.department_id=departments.department_id
    group by department_name
    having sum(salary) > ( select sum(salary) * 1/8
    from employees, departments
    where
    employees.department_id=departments.department_id)
    order by sum(salary) desc;
    4. Benefits
    Improved query speed
    Enhanced developer productivity
    Minimized learning effort
    Standardized syntax
    Reference Document
    Oracle9i New Features for Developers

    Hi Rajasheker,
    Without knowing the exact scenario it is hard to give you an answer, but I hope the following broad guidelines help.
    1) There must be something in common, or it may be a case of wrong architecture/missing business requirements. Identify the relationship between the two.
    2) Or, do they have a 100% parallel relationship by design? In this case you need to create a high-level common object (dummy) to facilitate the consolidation process (for example company code or, worst scenario, sys-id) and enhance both the cube and the ODS.
    3) Or, if it is a complex situation: introduce a new object that can build a bridge between the two. Ask the business about the rules.
    If this doesn't help, please give more details.
    Bala
