Feature request - execution time in SQL Developer

I am looking for a feature that breaks down the execution time of packages, stored procedures, and functions. When a package completes and, say, takes longer than usual, I would like to know which stored procedures or SQL statements inside it took the longest. I could trace the session, but it would be very helpful to have a feature for this in the tool.
Thanks

Look into PL/SQL Profiling. If you are on RDBMS version 11, the profiling command appears in the context menu and on the toolbar.
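If the built-in profiler is not available to you, the same line-level timings can be collected manually with DBMS_PROFILER. A minimal sketch, assuming the profiler tables have been created with $ORACLE_HOME/rdbms/admin/proftab.sql; MY_PKG.MY_PROC is a hypothetical placeholder for the unit you want to profile:

BEGIN
  -- start collecting line-level timings for this session
  DBMS_PROFILER.start_profiler(run_comment => 'my_pkg timing run');
  my_pkg.my_proc;                 -- hypothetical: the code you want to break down
  DBMS_PROFILER.stop_profiler;
END;
/

-- which lines consumed the most time in that run (total_time is reported in nanoseconds)
SELECT u.unit_owner, u.unit_name, d.line#, d.total_occur,
       ROUND(d.total_time / 1e9, 3) AS seconds
FROM   plsql_profiler_runs  r
JOIN   plsql_profiler_units u ON u.runid = r.runid
JOIN   plsql_profiler_data  d ON d.runid = u.runid
                             AND d.unit_number = u.unit_number
WHERE  r.run_comment = 'my_pkg timing run'
ORDER  BY d.total_time DESC;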

Similar Messages

  • How to know query execution time in sql plus

    Hi,
    I want to know the query execution time in SQL*Plus, along with statistics.
    I run:
    set time on;
    set autotrace on;
    select * from view where usr_id='abcd';
    If the result is 300 rows, it scrolls until all the rows are retrieved and finally gives me an execution time of 40 seconds or 1 minute (this is after all the records have scrolled),
    but when I execute it in TOAD it reports 350 milliseconds.
    I want to see the real execution time in SQL*Plus. How do I do this?
    The database server is 11g and the client is 10g.
    regards
    raj

    What is the difference between the statistics gathered in SQL*Plus (shown below) and the ones I get from PLAN_TABLE in TOAD?
    And how do I format the execution plan I got in SQL*Plus so it is easier to read?
    Statistics in SQL*Plus:
    Statistics
             0  recursive calls
             0  db block gets
           164  consistent gets
             0  physical reads
             0  redo size
         29805  bytes sent via SQL*Net to client
           838  bytes received via SQL*Net from client
            25  SQL*Net roundtrips to/from client
             1  sorts (memory)
             0  sorts (disk)
            352  rows processed
    execution plan in sqlplus... how to format this
    Execution Plan
      0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=21 Card=1 Bytes=1003)
      1    0   HASH (UNIQUE) (Cost=21 Card=1 Bytes=1003)
      2    1     MERGE JOIN (CARTESIAN) (Cost=20 Card=1 Bytes=1003)
      3    2       NESTED LOOPS
      4    3         NESTED LOOPS (Cost=18 Card=1 Bytes=976)
      5    4           NESTED LOOPS (Cost=17 Card=1 Bytes=797)
      6    5             NESTED LOOPS (OUTER) (Cost=16 Card=1 Bytes=685)
      7    6               NESTED LOOPS (OUTER) (Cost=15 Card=1 Bytes=556)
      8    7                 NESTED LOOPS (Cost=14 Card=1 Bytes=427)
      9    8                   NESTED LOOPS (Cost=5 Card=1 Bytes=284)
     10    9                     TABLE ACCESS (BY INDEX ROWID) OF 'USR_XREF' (TABLE) (Cost=4 Card=1 Bytes=67)
     11   10                       INDEX (RANGE SCAN) OF 'USR_XREF_PK' (INDEX (UNIQUE)) (Cost=2 Card=1)
     12    9                     TABLE ACCESS (BY INDEX ROWID) OF 'USR_DIM' (TABLE) (Cost=1 Card=1 Bytes=217)
     13   12                       INDEX (UNIQUE SCAN) OF 'USR_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
     14    8                   TABLE ACCESS (BY INDEX ROWID) OF 'HDS_FCT' (TABLE) (Cost=9 Card=1 Bytes=143)
     15   14                     INDEX (RANGE SCAN) OF 'HDS_FCT_IX2' (INDEX) (Cost=1 Card=338)
     16    7                 TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
     17   16                   INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
     18    6               TABLE ACCESS (BY INDEX ROWID) OF 'USR_MEDIA_COMM' (TABLE) (Cost=1 Card=1 Bytes=129)
     19   18                 INDEX (UNIQUE SCAN) OF 'USR_MEDIA_COMM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
     20    5             TABLE ACCESS (BY INDEX ROWID) OF 'PROD_DIM' (TABLE) (Cost=1 Card=1 Bytes=112)
     21   20               INDEX (UNIQUE SCAN) OF 'PROD_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
     22    4           INDEX (UNIQUE SCAN) OF 'CUST_DIM_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)
     23    3         TABLE ACCESS (BY INDEX ROWID) OF 'CUST_DIM' (TABLE) (Cost=1 Card=1 Bytes=179)
     24    2       BUFFER (SORT) (Cost=19 Card=22 Bytes=594)
     25   24         INDEX (FAST FULL SCAN) OF 'PROD_DIM_AK1' (INDEX (UNIQUE)) (Cost=2 Card=22 Bytes=594)
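    The gap between the SQL*Plus and TOAD figures is largely display: SQL*Plus scrolls every row to the terminal before reporting the elapsed time, while TOAD typically fetches and shows only the first page of results. A sketch of how to time the statement itself and get a readable plan in SQL*Plus (your_view is a placeholder for the actual view name):
    -- time the statement without printing every row to the screen
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT * FROM your_view WHERE usr_id = 'abcd';
    -- a formatted plan, easier to read than the wrapped AUTOTRACE output
    EXPLAIN PLAN FOR
    SELECT * FROM your_view WHERE usr_id = 'abcd';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);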

  • Why same query takes different execution time in sql 2008

    Hi!
    With the query below in SQL Server 2008 R2, when I change Book_ID to another value such as '99000349' it takes a very long time to execute, even though both result sets have the same number of records!?
    select Card_Serial,Asset_ID, Field_Name,Field_Value,Asset_Number,Field_ID,Book_ID from dbo.vw_InspectionReport where Book_ID='99000347'
    I've tested it over and over, running the quickest one first, the slowest one first, even restarting Windows, but for some specific Book_ID values (with the same number of result set rows) it is many times slower than the rest of the Book_IDs.
    The displayed result set also differs for these different Book_IDs (the screenshots of the fast and slow cases are not included here); if you look closely, the order of the returned records is different!?
    I'm looking forward to your kind reply!

    Do you see any changes if you add a hint to the query?
    select Card_Serial,Asset_ID, Field_Name,Field_Value,Asset_Number,Field_ID,Book_ID from dbo.vw_InspectionReport where Book_ID='99000347' OPTION (RECOMPILE)
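    When comparing the fast and slow Book_ID values, it can also help to capture per-statement timings alongside the hint. A sketch, reusing the view and column names from the thread:
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    SELECT Card_Serial, Asset_ID, Field_Name, Field_Value, Asset_Number, Field_ID, Book_ID
    FROM dbo.vw_InspectionReport
    WHERE Book_ID = '99000349'
    OPTION (RECOMPILE);   -- compile a fresh plan for this parameter value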
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Execution time of sql query differing a lot between two computer

    hi
    The execution time of a query on my computer and on more than 30 other computers is less than one second, but on one of our
    customers' computers it is more than ten minutes. The databases, data, and queries are the same. I reinstalled SQL Server but the problem remains. My SQL Server is MS SQL 2008 R2.
    Does anyone have an idea about this problem?

    Hi mahdi,
    We don't have enough information to help you troubleshoot this issue, so please describe it in more detail so that the community members can help you more efficiently.
    In addition, here is a good article with a checklist for analyzing slow-running queries:
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    SQL Server Profiler and Performance Monitor are also good tools for troubleshooting performance issues; please see:
    Correlating SQL Server Profiler with Performance Monitor:
    https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
    Regards,
    Elvis Long
    TechNet Community Support

  • Current execution time of sql

    Hi Folks,
    We have Oracle 9iR2.
    How can I find out the current execution time of a running or completed SQL query? Is there a view like v$session_longops? I would like to see the query and its elapsed time.
    Regards

    Maybe this example can give you some ideas
    Example 1:
    SQL> declare
      2  start_time timestamp;
      3  end_time timestamp;
      4  i number;
      5  begin
      6  select current_timestamp into start_time
      7  from dual;
      8  select count(1) into i
      9  from all_objects;
    10  select current_timestamp into end_time
    11  from dual;
    12  dbms_output.put_line('Query Run Time : '||to_char(end_time-start_time,'HH:MM:SSXFF')||'');
    13* end;
    SQL> /
    Query Run Time : +000000 00:00:01.795964000
    PL/SQL procedure successfully completed.
    SQL> set timing on
    SQL> select count(1)
      2  from all_objects;
      COUNT(1)
         50212
    Elapsed: 00:00:01.81
    SQL> set timing off
    SQL>
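    For a statement that is still running, there is no exact per-execution clock in 9iR2, but a rough sketch joining v$session to v$sql shows the active SQL, how long the session has been in its current call, and the cumulative elapsed time recorded for that statement (elapsed_time is in microseconds, summed over all executions):
    -- sessions currently executing SQL and the cumulative time of their active statement
    SELECT s.sid,
           s.username,
           s.last_call_et AS seconds_in_current_call,
           q.executions,
           ROUND(q.elapsed_time / 1e6, 2) AS total_elapsed_seconds,
           q.sql_text
    FROM   v$session s,
           v$sql     q
    WHERE  q.address    = s.sql_address
    AND    q.hash_value = s.sql_hash_value
    AND    s.status     = 'ACTIVE'
    AND    s.username IS NOT NULL;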

  • Last one month jobs execution time in sql server 2008 r2

    Dear Friends,
    We configured replication between three servers: two are publishers and one is a subscriber for both publishers.
    My question: a job runs daily on the subscriber that truncates and reloads the data every night.
    Unfortunately, today the job failed, which I saw in the job's View History. The client requires the job to be run manually so the data is loaded into the table, but I want to know its previous execution time so I can decide whether to run it during production hours. However, View History
    shows only today's failed run. How do I find the last execution time?
    Note: yesterday the job completed successfully.
    Message
    Executed as user: NT AUTHORITY\SYSTEM. Cannot initialize the data source object of OLE DB provider "SQLNCLI10" for linked server "server name". [SQLSTATE 42000] (Error 7303)  OLE DB provider "SQLNCLI10" for linked
    server "server name" returned message "Unable to complete login process due to delay in opening server connection". [SQLSTATE 01000] (Error 7412).  The step failed.
    mastanvali shaik

    But what about that particular job? What is the name of the job? Put that job name into the script below and check. It may be that no history exists for that particular job; for confirmation, please run the script and see.
    Make sure to add the name. Also, when was the last backup taken for your system databases?
    SELECT      [JobName]   = JOB.name,
                [Step]      = HIST.step_id,
                [StepName]  = HIST.step_name,
                [Message]   = HIST.message,
                [Status]    = CASE WHEN HIST.run_status = 0 THEN 'Failed'
                WHEN HIST.run_status = 1 THEN 'Succeeded'
                WHEN HIST.run_status = 2 THEN 'Retry'
                WHEN HIST.run_status = 3 THEN 'Canceled'
                END,
                [RunDate]   = HIST.run_date,
                [RunTime]   = HIST.run_time,
                [Duration]  = HIST.run_duration
    FROM        msdb.dbo.sysjobs JOB
    INNER JOIN  msdb.dbo.sysjobhistory HIST ON HIST.job_id = JOB.job_id
    WHERE       JOB.name = 'Your JobName'   -- add your job name here
    ORDER BY    HIST.run_date, HIST.run_time
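    Note that run_date and run_time are stored as integers (YYYYMMDD and HHMMSS). If you prefer a real datetime, msdb ships a helper function (undocumented, but present since SQL Server 2005) that combines them; a sketch:
    SELECT      JOB.name,
                msdb.dbo.agent_datetime(HIST.run_date, HIST.run_time) AS run_datetime,
                HIST.run_duration            -- HHMMSS, e.g. 130 = 1 minute 30 seconds
    FROM        msdb.dbo.sysjobs JOB
    INNER JOIN  msdb.dbo.sysjobhistory HIST ON HIST.job_id = JOB.job_id
    WHERE       JOB.name = 'Your JobName'
    ORDER BY    run_datetime DESC;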
    Raju Rasagounder Sr MSSQL DBA

  • Execution time of SQL

    Hi,
    I have a SQL statement that is performing poorly. I want to check how much time it took on each execution over the last 10 days.
    Regards

    I just replied in another thread:
    Look at DBA_HIST_SQLSTAT.ELAPSED_TIME_TOTAL/DELTA if you are licensed for it. Be aware that the data is kept for 8 days by default.
    You'll have a hard time with parallel processes, since they show the total time spent by all slaves. That is not very useful for absolute analysis, but OK for relative comparison.
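    A sketch of that approach (AWR data, so it requires the Diagnostics Pack license; &sql_id stands for the SQL_ID of your statement):
    SELECT sn.begin_interval_time,
           st.executions_delta,
           ROUND(st.elapsed_time_delta / 1e6, 2) AS elapsed_seconds,
           ROUND(st.elapsed_time_delta / NULLIF(st.executions_delta, 0) / 1e6, 3) AS seconds_per_exec
    FROM   dba_hist_sqlstat  st
    JOIN   dba_hist_snapshot sn ON sn.snap_id         = st.snap_id
                               AND sn.dbid            = st.dbid
                               AND sn.instance_number = st.instance_number
    WHERE  st.sql_id = '&sql_id'
    AND    sn.begin_interval_time > SYSDATE - 10
    ORDER  BY sn.begin_interval_time;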

  • Execution times of SQL Queries

    Hello everybody,
    I am seeing strange behaviour on my Sun box.
    PHP 4.2.2, Solaris 2.8, Oracle 9iR2.
    I have a bunch of statements; executed in SQL*Plus they take about two seconds.
    Executed via PHP, the OCIExecute calls take more than twelve seconds.
    Does anyone have any suggestions?
    greetings Markus

    Are you connecting the same way in both cases (either locally in both, or using the same TNS entry)?
    I can't think of anything off the top of my head that would cause it.
    Rob

  • How to reduce execution time of SQL Query

    Hi,
    I'm working on an Oracle ERP application and want to create an OAF page that shows some data in tables.
    I've written the query but it takes a long time.
    Can anybody help? Here is the query:
    SELECT *
    FROM (SELECT person_id,
    transaction_id,
    segment1 AS TA_number,
    segment9 AS Travel_Distination,
    SUBSTR (segment5, 0, 10) AS Travel_Date,
    creation_date AS request_date,
    status,
    full_name AS Current_Approver
    FROM ( (SELECT PPF.PERSON_ID,
    ht.TRANSACTION_ID,
    pac.segment1,
    pac.segment2,
    pac.segment3,
    pac.segment4,
    pac.segment5,
    pac.segment6,
    pac.segment7,
    pac.segment8,
    pac.segment9,
    pac.creation_date,
    DECODE (al.approval_status,
    NULL, 'Pending For Approval',
    'APPROVE', 'Finally Approved',
    al.approval_status)
    status,
    al.order_number,
    almin.order_number approver_order,
    ppf2.full_name
    FROM HR_API_TRANSACTION_Values htv,
    HR_API_TRANSACTIONS ht,
    HR_API_TRANSACTION_STEPS hts,
    PER_ANALYSIS_CRITERIA pac,
    hr.Ame_Approvals_History ah,
    per_people_f ppf,
    per_people_f ppf2,
    apps.fnd_user fu,
    apps.AME_TEMP_OLD_APPROVER_LISTS al,
    apps.AME_TEMP_OLD_APPROVER_LISTS almin
    WHERE al.application_id = '-81'
    AND al.transaction_id = ht.TRANSACTION_ID
    AND (al.approval_status NOT LIKE '%REP%'
    OR al.approval_status IS NULL)
    AND al.order_number =
    (SELECT MAX (ao.order_number)
    FROM apps.AME_TEMP_OLD_APPROVER_LISTS ao
    WHERE ao.transaction_id =
    ht.TRANSACTION_ID
    AND (ao.approval_status NOT LIKE
    '%REP%'
    OR ao.approval_status IS NULL))
    AND ht.creator_person_id = PPF.PERSON_ID
    AND ht.TRANSACTION_ID = ah.transaction_id
    AND HT.TRANSACTION_ID = HTS.TRANSACTION_ID
    AND fu.employee_id = PPF2.person_id
    AND almin.order_number =
    (SELECT MIN (aomin.order_number)
    FROM apps.AME_TEMP_OLD_APPROVER_LISTS aomin
    WHERE aomin.transaction_id =
    ht.TRANSACTION_ID
    AND aomin.approval_status IS NULL)
    AND almin.transaction_id = ht.TRANSACTION_ID
    AND almin.name = fu.user_name
    AND hts.TRANSACTION_STEP_ID =
    HTV.TRANSACTION_STEP_ID
    AND HTV.NAME = 'P_ANALYSIS_CRITERIA_ID'
    AND HTV.NUMBER_VALUE =
    PAC.ANALYSIS_CRITERIA_ID
    AND SYSDATE BETWEEN ppf.effective_start_date
    AND ppf.effective_end_date
    AND PROCESS_NAME = 'TA_AEC')
    UNION
    (SELECT PPF.PERSON_ID,
    ht.TRANSACTION_ID,
    pac.segment1,
    pac.segment2,
    pac.segment3,
    pac.segment4,
    pac.segment5,
    pac.segment6,
    pac.segment7,
    pac.segment8,
    pac.segment9,
    pac.creation_date,
    DECODE (al.approval_status,
    NULL, 'Pending For Approval',
    'APPROVE', 'Finally Approved',
    al.approval_status)
    status,
    al.order_number,
    al.order_number AS approver_order,
    '' AS name
    FROM HR_API_TRANSACTION_Values htv,
    HR_API_TRANSACTIONS ht,
    HR_API_TRANSACTION_STEPS hts,
    PER_ANALYSIS_CRITERIA pac,
    hr.Ame_Approvals_History ah,
    per_people_f ppf,
    per_people_f ppf2,
    apps.fnd_user fu,
    apps.AME_TEMP_OLD_APPROVER_LISTS al,
    apps.AME_TEMP_OLD_APPROVER_LISTS almin
    WHERE al.application_id = '-81'
    AND al.approval_status IS NOT NULL
    AND al.transaction_id = ht.TRANSACTION_ID
    AND ht.creator_person_id = PPF.PERSON_ID
    AND ht.TRANSACTION_ID = ah.transaction_id
    AND HT.TRANSACTION_ID = HTS.TRANSACTION_ID
    AND PROCESS_NAME = 'TA_AEC'
    AND fu.employee_id = PPF2.person_id
    AND al.order_number =
    (SELECT MAX (ao.order_number)
    FROM apps.AME_TEMP_OLD_APPROVER_LISTS ao
    WHERE ao.transaction_id =
    ht.TRANSACTION_ID
    AND (ao.approval_status NOT LIKE
    '%REP%'
    OR ao.approval_status IS NULL))
    AND al.name = fu.user_name
    AND almin.transaction_id = ht.TRANSACTION_ID
    AND hts.TRANSACTION_STEP_ID =
    HTV.TRANSACTION_STEP_ID
    AND HTV.NAME = 'P_ANALYSIS_CRITERIA_ID'
    AND HTV.NUMBER_VALUE = PAC.ANALYSIS_CRITERIA_ID
    AND SYSDATE BETWEEN ppf.effective_start_date
    AND ppf.effective_end_date
    AND PROCESS_NAME = 'TA_AEC'))) QRSLT
    WHERE (person_id = 26773)
    ORDER BY request_date DESC

    See also this optimizer environment dump (from a 10053 trace):
    Optimizer Environment (10053)
    #    Is Default    Parameter    Current Value
    1     N     _sort_elimination_cost_ratio     5
    2     N     _pga_max_size     838860 KB
    3     N     _b_tree_bitmap_plans     false
    4     N     _fast_full_scan_enabled     false
    5     N     _like_with_bind_as_equality     true
    6     N     optimizer_secure_view_merging     false
    7     Y     optimizer_mode_hinted     false
    8     Y     optimizer_features_hinted     0.0.0
    9     Y     parallel_execution_enabled     true
    10     Y     parallel_query_forced_dop     0
    11     Y     parallel_dml_forced_dop     0
    12     Y     parallel_ddl_forced_degree     0
    13     Y     parallel_ddl_forced_instances     0
    14     Y     _query_rewrite_fudge     90
    15     Y     optimizer_features_enable     10.2.0.4
    16     Y     _optimizer_search_limit     5
    17     Y     cpu_count     4
    18     Y     active_instance_count     1
    19     Y     parallel_threads_per_cpu     2
    20     Y     hash_area_size     131072
    21     Y     bitmap_merge_area_size     1048576
    22     Y     sort_area_size     65536
    23     Y     sort_area_retained_size     0
    24     Y     _optimizer_block_size     8192
    25     Y     _sort_multiblock_read_count     2
    26     Y     _hash_multiblock_io_count     0
    27     Y     _db_file_optimizer_read_count     8
    28     Y     _optimizer_max_permutations     2000
    29     Y     pga_aggregate_target     4194304 KB
    30     Y     _query_rewrite_maxdisjunct     257
    31     Y     _smm_auto_min_io_size     56 KB
    32     Y     _smm_auto_max_io_size     248 KB
    33     Y     _smm_min_size     1024 KB
    34     Y     _smm_max_size     419430 KB
    35     Y     _smm_px_max_size     2097152 KB
    36     Y     _cpu_to_io     0
    37     Y     _optimizer_undo_cost_change     10.2.0.4
    38     Y     parallel_query_mode     enabled
    39     Y     parallel_dml_mode     disabled
    40     Y     parallel_ddl_mode     enabled
    41     Y     optimizer_mode     all_rows
    42     Y     sqlstat_enabled     false
    43     Y     _optimizer_percent_parallel     101
    44     Y     _always_anti_join     choose
    45     Y     _always_semi_join     choose
    46     Y     _optimizer_mode_force     true
    47     Y     _partition_view_enabled     true
    48     Y     _always_star_transformation     false
    49     Y     _query_rewrite_or_error     false
    50     Y     _hash_join_enabled     true
    51     Y     cursor_sharing     exact
    52     Y     star_transformation_enabled     false
    53     Y     _optimizer_cost_model     choose
    54     Y     _new_sort_cost_estimate     true
    55     Y     _complex_view_merging     true
    56     Y     _unnest_subquery     true
    57     Y     _eliminate_common_subexpr     true
    58     Y     _pred_move_around     true
    59     Y     _convert_set_to_join     false
    60     Y     _push_join_predicate     true
    61     Y     _push_join_union_view     true
    62     Y     _optim_enhance_nnull_detection     true
    63     Y     _parallel_broadcast_enabled     true
    64     Y     _px_broadcast_fudge_factor     100
    65     Y     _ordered_nested_loop     true
    66     Y     _no_or_expansion     false
    67     Y     optimizer_index_cost_adj     100
    68     Y     optimizer_index_caching     0
    69     Y     _system_index_caching     0
    70     Y     _disable_datalayer_sampling     false
    71     Y     query_rewrite_enabled     true
    72     Y     query_rewrite_integrity     enforced
    73     Y     _query_cost_rewrite     true
    74     Y     _query_rewrite_2     true
    75     Y     _query_rewrite_1     true
    76     Y     _query_rewrite_expression     true
    77     Y     _query_rewrite_jgmigrate     true
    78     Y     _query_rewrite_fpc     true
    79     Y     _query_rewrite_drj     true
    80     Y     _full_pwise_join_enabled     true
    81     Y     _partial_pwise_join_enabled     true
    82     Y     _left_nested_loops_random     true
    83     Y     _improved_row_length_enabled     true
    84     Y     _index_join_enabled     true
    85     Y     _enable_type_dep_selectivity     true
    86     Y     _improved_outerjoin_card     true
    87     Y     _optimizer_adjust_for_nulls     true
    88     Y     _optimizer_degree     0
    89     Y     _use_column_stats_for_function     true
    90     Y     _subquery_pruning_enabled     true
    91     Y     _subquery_pruning_mv_enabled     false
    92     Y     _or_expand_nvl_predicate     true
    93     Y     _table_scan_cost_plus_one     true
    94     Y     _cost_equality_semi_join     true
    95     Y     _default_non_equality_sel_check     true
    96     Y     _new_initial_join_orders     true
    97     Y     _oneside_colstat_for_equijoins     true
    98     Y     _optim_peek_user_binds     true
    99     Y     _minimal_stats_aggregation     true
    100     Y     _force_temptables_for_gsets     false
    101     Y     workarea_size_policy     auto
    102     Y     _smm_auto_cost_enabled     true
    103     Y     _gs_anti_semi_join_allowed     true
    104     Y     _optim_new_default_join_sel     true
    105     Y     optimizer_dynamic_sampling     2
    106     Y     _pre_rewrite_push_pred     true
    107     Y     _optimizer_new_join_card_computation     true
    108     Y     _union_rewrite_for_gs     yes_gset_mvs
    109     Y     _generalized_pruning_enabled     true
    110     Y     _optim_adjust_for_part_skews     true
    111     Y     _force_datefold_trunc     false
    112     Y     statistics_level     typical
    113     Y     _optimizer_system_stats_usage     true
    114     Y     skip_unusable_indexes     true
    115     Y     _remove_aggr_subquery     true
    116     Y     _optimizer_push_down_distinct     0
    117     Y     _dml_monitoring_enabled     true
    118     Y     _optimizer_undo_changes     false
    119     Y     _predicate_elimination_enabled     true
    120     Y     _nested_loop_fudge     100
    121     Y     _project_view_columns     true
    122     Y     _local_communication_costing_enabled     true
    123     Y     _local_communication_ratio     50
    124     Y     _query_rewrite_vop_cleanup     true
    125     Y     _slave_mapping_enabled     true
    126     Y     _optimizer_cost_based_transformation     linear
    127     Y     _optimizer_mjc_enabled     true
    128     Y     _right_outer_hash_enable     true
    129     Y     _spr_push_pred_refspr     true
    130     Y     _optimizer_cache_stats     false
    131     Y     _optimizer_cbqt_factor     50
    132     Y     _optimizer_squ_bottomup     true
    133     Y     _fic_area_size     131072
    134     Y     _optimizer_skip_scan_enabled     true
    135     Y     _optimizer_cost_filter_pred     false
    136     Y     _optimizer_sortmerge_join_enabled     true
    137     Y     _optimizer_join_sel_sanity_check     true
    138     Y     _mmv_query_rewrite_enabled     true
    139     Y     _bt_mmv_query_rewrite_enabled     true
    140     Y     _add_stale_mv_to_dependency_list     true
    141     Y     _distinct_view_unnesting     false
    142     Y     _optimizer_dim_subq_join_sel     true
    143     Y     _optimizer_disable_strans_sanity_checks     0
    144     Y     _optimizer_compute_index_stats     true
    145     Y     _push_join_union_view2     true
    146     Y     _optimizer_ignore_hints     false
    147     Y     _optimizer_random_plan     0
    148     Y     _query_rewrite_setopgrw_enable     true
    149     Y     _optimizer_correct_sq_selectivity     true
    150     Y     _disable_function_based_index     false
    151     Y     _optimizer_join_order_control     3
    152     Y     _optimizer_cartesian_enabled     true
    153     Y     _optimizer_starplan_enabled     true
    154     Y     _extended_pruning_enabled     true
    155     Y     _optimizer_push_pred_cost_based     true
    156     Y     _sql_model_unfold_forloops     run_time
    157     Y     _enable_dml_lock_escalation     false
    158     Y     _bloom_filter_enabled     true
    159     Y     _update_bji_ipdml_enabled     0
    160     Y     _optimizer_extended_cursor_sharing     udo
    161     Y     _dm_max_shared_pool_pct     1
    162     Y     _optimizer_cost_hjsmj_multimatch     true
    163     Y     _optimizer_transitivity_retain     true
    164     Y     _px_pwg_enabled     true
    165     Y     _optimizer_join_elimination_enabled     true
    166     Y     flashback_table_rpi     non_fbt
    167     Y     _optimizer_cbqt_no_size_restriction     true
    168     Y     _optimizer_enhanced_filter_push     true
    169     Y     _optimizer_filter_pred_pullup     true
    170     Y     _rowsrc_trace_level     0
    171     Y     _simple_view_merging     true
    172     Y     _optimizer_rownum_pred_based_fkr     true
    173     Y     _optimizer_better_inlist_costing     all
    174     Y     _optimizer_self_induced_cache_cost     false
    175     Y     _optimizer_min_cache_blocks     10
    176     Y     _optimizer_or_expansion     depth
    177     Y     _optimizer_order_by_elimination_enabled     true
    178     Y     _optimizer_outer_to_anti_enabled     true
    179     Y     _selfjoin_mv_duplicates     true
    180     Y     _dimension_skip_null     true
    181     Y     _force_rewrite_enable     false
    182     Y     _optimizer_star_tran_in_with_clause     true
    183     Y     _optimizer_complex_pred_selectivity     true
    184     Y     _optimizer_connect_by_cost_based     true
    185     Y     _gby_hash_aggregation_enabled     true
    186     Y     _globalindex_pnum_filter_enabled     true
    187     Y     _fix_control_key     0
    188     Y     _optimizer_skip_scan_guess     false
    189     Y     _enable_row_shipping     false
    190     Y     _row_shipping_threshold     80
    191     Y     _row_shipping_explain     false
    192     Y     _optimizer_rownum_bind_default     10
    193     Y     _first_k_rows_dynamic_proration     true
    194     Y     _px_ual_serial_input     true
    195     Y     _optimizer_native_full_outer_join     off
    196     Y     _optimizer_star_trans_min_cost     0
    197     Y     _optimizer_star_trans_min_ratio     0
    198     Y     _optimizer_fkr_index_cost_bias     10
    199     Y     _optimizer_connect_by_combine_sw     true
    200     Y     _optimizer_use_subheap     true
    201     Y     _optimizer_or_expansion_subheap     true
    202     Y     _optimizer_sortmerge_join_inequality     true
    203     Y     _optimizer_use_histograms     true
    204     Y     _optimizer_enable_density_improvements     false

  • FEATURE REQUEST: Design Time CSS

    NitroX really needs the ability to attach "Design Time" CSS to a JSP (a la DreamWeaver). If I'm editing a JSP, the only CSS styles it suggests to me are the ones imported directly into the page. What about the styles I define in my header.jsp? I need the ability to say: use this CSS while I'm coding, but don't import it, because that will be done in a different file. If NitroX were smart enough to look at the tiles and JSPs that import the JSP I'm working on and use their style sheets, all the better.

    NitroX can actually look at the tiles and JSPs that import the JSP that you are working on and automatically use the CSS styles (and tag libraries too) declared in them.
    To trigger this behaviour, open the JSP page you want to edit from the including page, either by double-clicking the included page icon in the design editor or by control-clicking in the source (on the page attribute of the jsp:include tag, for example).
    M7 Support

  • Feature request:  info about executed sql in error case

    Hi,
    I'm running into this problem over and over again:
    when an exception happens on the production server, I would really like to see the SQL statement that caused it, or the whole batch of statements generated by unitOfWork.commit. Whatever is possible.
    We cannot let the server run with SQL debugging on for obvious reasons (huge logfiles, performance hit), but when something happens I want to have as much information as I can, and the SQL would really help.
    Is there a way to provide something like that in the next release? I don't think I'm the only one who wants this. Let's take a vote :-)
    Ana

    Thanks, but that's not what I'm looking for.
    I can enable SQL logging for my session and send it to a different appender, but this still doesn't solve the problem that I'm logging every single SQL statement when I only want the ones that provoked an error.
    Why should I log 100,000 statements when I only want to see the one that causes a problem?
    If I were using JDBC (not that I want to; I mostly love using TopLink) then I could do something like this:
    String sql = ...;
    try {
        statement.executeUpdate(sql);
    } catch (SQLException e) {
        throw new MyException("The statement " + sql + " caused an error", e);
    }
    So I would like to be able to do:
    try {
        unitOfWork.commit();
    } catch (ToplinkException e) {
        throw new MyException("Executing " + unitOfWork.getExecutedSQL() + " caused an error", e);
    }
    Or maybe there should be a method on the TopLink DatabaseException to return the SQL. Whatever, as long as I can get to it.

  • SQL Tuning and OPTIMIZER - Execution Time with  " AND col .."

    Hi all,
    I have a question about SQL tuning and the OPTIMIZER.
    There are three samples with EXPLAIN PLAN and execution time.
    This "tw_pkg.getMaxAktion" is a PLSQL Package.
    1.) Execution Time : 0.25 Second
    2.) Execution Time : 0.59 Second
    3.) Execution Time : 1.11 Second
    The only difference is some additional "AND col <> .." predicates.
    Why does the execution time grow so much?
    Many Thanks,
    Thomas
    ----[First example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900 ;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |   220 |   880 |     5  (40)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |   220 |   880 |     5  (40)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900)
    13 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.25 seconds
    ----[/First]---
    ----[Second example]---
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as dbadmin2
    SQL>
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |    11 |    44 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |    11 |    44 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692)
    14 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 0.59 seconds
    ----[/Second]---
    ----[Third example]---
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
      3                    FROM studie
      4                 ) max_aktion
      5  WHERE max_aktion.max_aktion_id < 900
      6    AND max_aktion.max_aktion_id <> 692
      7    AND max_aktion.max_aktion_id <> 392;
    Explained
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3201460684
    | Id  | Operation            | Name        | Rows  | Bytes | Cost (%CPU)| Time
    |   0 | SELECT STATEMENT     |             |     1 |     4 |     6  (50)| 00:00:
    |*  1 |  INDEX FAST FULL SCAN| SYS_C005393 |     1 |     4 |     6  (50)| 00:00:
    Predicate Information (identified by operation id):
       1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692 AND
                  "TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>392)
    15 rows selected
    SQL>
    Execution time (PL/SQL Developer says): 1.11 seconds
    ----[/Third]---
    Edited by: thomas_w on Jul 9, 2010 11:35 AM
    Edited by: thomas_w on Jul 12, 2010 8:29 AM

    Hi,
    this is most likely because SQL Developer fetches and displays only a limited number of rows from query results.
    This number is a parameter called 'SQL array fetch size'; you can find it in the SQL Developer preferences under Tools/Preferences/Database/Advanced, and its default value is 50 rows.
    The query scans the table from the beginning and continues scanning until the first 50 rows are selected.
    If the query conditions are more selective, then more table rows (or index entries) must be scanned to fetch the first 50 results, and the execution time grows.
    This effect is usually unnoticeable when the query uses simple and fast built-in comparison operators (like = and <>) or Oracle built-in functions, but your query uses a PL/SQL function that is much slower than built-in functions/operators.
    Try changing this parameter to 1000 and most likely you will see that the execution time of all 3 queries becomes similar.
    Look at this simple test to figure out how it works:
    CREATE TABLE studie AS
    SELECT row_number() OVER (ORDER BY object_id) studie_id,  o.*
    FROM (
      SELECT * FROM all_objects
      CROSS JOIN
      (SELECT 1 FROM dual CONNECT BY LEVEL <= 100)
    ) o;
    CREATE INDEX studie_ix ON studie(object_name, studie_id);
    ANALYZE TABLE studie COMPUTE STATISTICS;
    CREATE OR REPLACE FUNCTION very_slow_function(action IN NUMBER)
    RETURN NUMBER
    IS
    BEGIN
      RETURN action;
    END;
    /
    The 'SQL array fetch size' parameter in SQL Developer has been set to 50 (the default). We will run 3 different queries on the test table.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      1.22       1.29          0       1310          0          50
    total        3      1.22       1.29          0       1310          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=1310 pr=0 pw=0 time=355838 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.01          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      8.40       8.62          0       9351          0          50
    total        3      8.40       8.64          0       9351          0          50
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         50  INDEX FAST FULL SCAN STUDIE_IX (cr=9351 pr=0 pw=0 time=16988202 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         50   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.72      19.16          0      19315          0           1
    total        3     18.73      19.16          0      19315          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)Query 1 - 1,29 sec, 50 rows fetched, 1310 index entries scanned to find these 50 rows.
    Query 2 - 8,64 sec, 50 rows fetched, 9351 index entries scanned to find these 50 rows.
    Query 3 - 19,16 sec, only 1 row fetched, 19315 index entries scanned (full index).
    Now 'SQL array fetch size' parameter in SQLDeveloper has been set to 1000.
    Query 1:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.35      18.46          0      19315          0         899
    total        3     18.35      18.46          0      19315          0         899
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
        899  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=20571272 us cost=5536 size=827075 card=165415)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
        899   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 2:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id < 900
          AND max_aktion.max_aktion_id > 800
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.79      18.86          0      19315          0          99
    total        3     18.79      18.86          0      19315          0          99
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
         99  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=32805696 us cost=5552 size=41355 card=8271)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
         99   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    Query 3:
    SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
                         FROM studie
                  ) max_aktion
    WHERE max_aktion.max_aktion_id = 600
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1     18.69      18.84          0      19315          0           1
    total        3     18.69      18.84          0      19315          0           1
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93  (TEST)
    Rows     Row Source Operation
          1  INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   INDEX   MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
    And now:
    Query 1 - 18.46 sec, 899 rows fetched, 19315 index entries scanned.
    Query 2 - 18.86 sec, 99 rows fetched, 19315 index entries scanned.
    Query 3 - 18.84 sec, 1 row fetched, 19315 index entries scanned.

  • Custom Toolbar Buttons - built from my SQL - feature request

    I would like to request a feature to be added to SQL Developer. I'm on EA, v 4.0.0.12. I would like to be able to create, edit, and add custom toolbar buttons.
    I do have some of my SQL saved as snippets. As long as I keep the Snippets pane open, I can create and edit snippet categories, add snippets to custom categories, and drag and drop a snippet from the pane onto a SQL Worksheet (which fills it in with the SQL). This all works, although giving up "canvas workspace" for the snippets pane is a bit of an expense.
    It would be cool to be able to create, edit, place, and remove custom toolbar buttons holding SQL I like to reuse often.

    I found the Exchange and filed a request there.
    For anyone wanting the URL for the Exchange feature requests, I used this:
    https://apex.oracle.com/pls/apex/f?p=43135:6:104519160383376::NO
    I do wish the Exchange were compatible with Firefox. I had to go to IE to add the request.

  • SQL Execution Time

    I would like to see the exact execution time of a SQL query (without relying on the TOAD execution time, because that may not be correct if I execute the same query a second time).
    Please let me know.

    If you want to display the information in the front end, you would have to do something like
    start_time := dbms_utility.get_time();
    <<your query>>
    end_time   := dbms_utility.get_time();
    end_time - start_time would give you the wall clock run time of the query in hundredths of a second.
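    A self-contained sketch of that idea (GET_TIME returns hundredths of a second and can wrap around, so only the difference is meaningful; the COUNT(*) query is just a stand-in for your own statement):
    SET SERVEROUTPUT ON
    DECLARE
      start_time PLS_INTEGER;
      end_time   PLS_INTEGER;
      l_count    NUMBER;
    BEGIN
      start_time := DBMS_UTILITY.get_time;
      SELECT COUNT(*) INTO l_count FROM all_objects;   -- placeholder for your query
      end_time := DBMS_UTILITY.get_time;
      DBMS_OUTPUT.put_line('Elapsed: ' || (end_time - start_time) / 100 || ' seconds');
    END;
    /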
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • SQL Developer 3.1 EA - Database Diff Issues

    Hi
    I am using SQL Developer 3.1 EA to generate a database difference script to upgrade my db. It is a really useful tool for comparing two databases and generating the difference script.
    There are a few bugs (?) I identified while using the Database Diff tool.
    1. If there is a column mismatch between the source and destination tables, it simply drops the table and recreates a new one. This is not a good solution for upgrading one database against another (data loss is the major issue). Microsoft handled this situation in VS2010 by creating a temp table with the new structure, moving the data into it, then dropping the original table and renaming the temp table back to the original name. I am expecting the same method in Database Diff as well.
    Sample Code:
    --Changed TABLE
    --F_SYSTEM_2
    DROP TABLE "ED9461001ER09_D1"."F_SYSTEM_2";
    CREATE TABLE "ED9461001ER09_D1"."F_SYSTEM_2"
    (     "SYS2_SEQN" NUMBER(9,0) DEFAULT 0 NOT NULL ENABLE, ....
    2. For new tables, the primary key constraint is created as part of the CREATE TABLE. But later in the script, a CREATE UNIQUE INDEX statement tries to create the same index, which leads to a runtime error.
    Sample Code:
    --New TABLE
    --F_SYSTEM_3
    CREATE TABLE "ED9461001ER09_D1"."F_SYSTEM_3"
    (     "SYS3_CORVU_DB_NAME" VARCHAR2(50),
         "SYS3_CORVU_DB_SCAP" VARCHAR2(8),
         "SYS3_CORVU_DB_VIEWURL" VARCHAR2(2048),
         "SYS3_CORVU_HP_NAME" VARCHAR2(50),
         CONSTRAINT "PK_F_SYSTEM_3" PRIMARY KEY ("SYS3_SEQU") ENABLE
    --New INDEX
    --PK_F_SYSTEM_3
    CREATE UNIQUE INDEX "ED9461001ER09_D1"."PK_F_SYSTEM_3" ON "ED9461001ER09_D1"."F_SYSTEM_3" ("SYS3_SEQU");
    3. If there is a difference in a procedure, the script is generated with two CREATE PROCEDURE statements (source and destination), which is irrelevant.
    --Changed PROCEDURE
    --GET_DI_COUNT
    CREATE OR REPLACE PROCEDURE "ED9461001ER09_D1"."GET_DI_COUNT"      (PP_SEQU int,
    --Changed PROCEDURE
    --GET_DI_COUNT
    CREATE OR REPLACE PROCEDURE EDISAPAC10_D1.GET_DI_COUNT      (PP_SEQU int,
    I am looking forward to your solution / clarification.
    Regards
    Nagarajan Santhan

    1. Makes sense. You can request this at the SQL Developer Exchange, so other users can vote and add weight for possible future implementation.
    2. Long standing bug. As it won't make your computer explode, apparently low priority to fix.
    3. I'm unable to use the comparison feature in EA3. I get the progress dialog comparing the different procedures and functions, but when it finishes it just disappears and no comparison sheet opens, as if the whole operation had been cancelled. No exceptions in the console, just a lot of these in the logging page:
    update returning false for Grid1oracle.ide.Context[{Context.WORKSPACE=Databases.jws, Context.PROJECT=IdeConnections%23DES+-+Reddis.jpr, Context.SELECTION=[Loracle.ide.model.Element;@1353c19, Context.EVENT=null, Context.VIEW=LoggingMessagePane.Logging Page}]
    Can someone from the team suggest something to debug?
    Thanks,
    K.
