Optimizer not using correct execution plan

Hi,
DB version: 11.2.0.3
Last month my SQL query ran in about an hour, but today the same query has been running for four hours. It looks like the optimizer is not using the correct execution plan. I used the tuning advisor and applied the recommended SQL profile, and query execution is back to normal. I can see that statistics are up to date for the tables. What other factors could stop the optimizer from choosing the correct execution plan?
Thanks.

What is the correct plan according to you? Multiple factors can cause the optimizer to choose a different plan. As a rudimentary example: a binary index column turns out to have lower cardinality than expected after new data has been inserted. Never expect a query to keep the same execution plan for its entire lifetime unless the underlying data never changes and nobody changes database settings.
You will have to provide a lot of information if you are looking for performance-tuning help. Please see the following thread:
https://forums.oracle.com/message/9362003#9362003
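When a plan regresses like this, it helps to confirm exactly what changed before (or after) applying a profile. A sketch, assuming the statement's SQL_ID is known and the Diagnostics Pack is licensed (the SQL_ID below is a placeholder):

```sql
-- Compare historical plans and run times for one statement from AWR.
SELECT snap_id,
       plan_hash_value,
       executions_delta,
       ROUND(elapsed_time_delta / 1e6) AS elapsed_secs
FROM   dba_hist_sqlstat
WHERE  sql_id = 'abcd1234efgh5'   -- placeholder SQL_ID
ORDER  BY snap_id;

-- Show every plan AWR has captured for that statement.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('abcd1234efgh5'));
```

A changed plan_hash_value across snapshots confirms the optimizer switched plans; bind peeking, changed histograms, or new data skew are then the usual suspects alongside stale statistics.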

Similar Messages

  • IP address not used correctly at RT Ping Controllers.vi

    It seems to me that the IP address provided to RT Ping Controller.vi and RT Reboot Controller is not used correctly.
    What I do is: set the Subnet flag to true and give a numerical IP address to RT Ping Controller.vi.
    What I get is the controller information for the controllers on the subnet, even if the IP does NOT match.
    Why do I do this? I want to acquire the MAC of the controller specified by the IP to use it for the reset vi.
    What I do second is: Set the subnet flag true and give an IP address to RT Reboot Controller.vi and provide a MAC address. The IP address does not match to the controller with the MAC given. The controller with the MAC reboots, independent of having a different IP.
    What is going on here, and how can I get the correct MAC for an IP easily (i.e., without looking it up in MAX and typing it in somewhere)?

    Try setting the Local Subnet value to FALSE and pass in the IP you want to ping. You should get back details on just the target you specified, including its MAC address. If the IP isn't a valid RT target, you will get an error.
    When Local Subnet is set to TRUE, it will attempt to ping all controllers on the local subnet (regardless of the IP address passed in), requiring you to search the resulting array for the IP you are interested in (and yes, it might not be there).

  • Tkprof not showing the Execution Plan for Statement

    Hi all
    using Oracle 9i Release 2
    I have issued the following statements:
    alter session set sql_trace = true;
    alter session set events '10046 trace name context forever, level 12';
    -- then executed a PL/SQL procedure
    After reading the trace output file, I see the execution plan for statements written directly inside the BEGIN/END block, but it does not display the plan for statements written like this:
    procedure a is
      v_ename emp.ename%type;
      cursor b is
        select ename, dname
        from dept a, emp b
        where a.deptno = b.deptno;
    begin
      for x in b loop -- plan not found, but stats are written
        select ename into v_ename from emp where empno = 300; -- does show the plan + stats
      end loop;
    end;
    What am I missing to get the actual plan in the trace output file?
    thanks in advance

    You have to exit SQL*Plus after running the procedure; an example tkprof output is below:
    declare
    cursor c is
    select ename, dname
    from emp, dept
    where emp.deptno = dept.deptno;
    begin
    for v_x in c
    loop
    dbms_output.put_line(v_x.ename || ' ' ||v_x.dname);
    end loop;
    end;
    call     count   cpu   elapsed   disk   query   current   rows
    Parse        1  0.00      0.00      0       0         0      0
    Execute      1  0.00      0.06      0       0         0      1
    Fetch        0  0.00      0.00      0       0         0      0
    total        2  0.00      0.06      0       0         0      1
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 68
    Elapsed times include waiting on following events:
    Event waited on                        Times Waited   Max. Wait   Total Waited
    SQL*Net message to client                         1        0.00           0.00
    SQL*Net message from client                       1        0.00           0.00
    SELECT ENAME, DNAME
    FROM
    EMP, DEPT WHERE EMP.DEPTNO = DEPT.DEPTNO
    call     count   cpu   elapsed   disk   query   current   rows
    Parse        1  0.00      0.00      0       0         0      0
    Execute      1  0.00      0.00      0       0         0      0
    Fetch       15  0.01      0.00      0      44         0     14
    total       17  0.01      0.00      0      44         0     14
    Misses in library cache during parse: 0
    Optimizer goal: CHOOSE
    Parsing user id: 68 (recursive depth: 1)
    Rows   Row Source Operation
      14   NESTED LOOPS
      14    TABLE ACCESS FULL EMP
      14    TABLE ACCESS BY INDEX ROWID DEPT
      14     INDEX UNIQUE SCAN DEPT_PK (object id 40350)
    Best Regards
    Krystian Zieja / mob
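    For reference, a minimal end-to-end sketch of the tracing steps discussed above (the procedure name and trace file name are placeholders):

    ```sql
    ALTER SESSION SET tracefile_identifier = 'mytrace';
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    EXEC my_proc;   -- hypothetical procedure
    DISCONNECT      -- exiting SQL*Plus closes the cursors and flushes row-source stats
    -- then at the OS prompt:
    --   tkprof orcl_ora_12345_mytrace.trc out.txt sys=no sort=exeela
    ```

    Disconnecting before running tkprof matters because the STAT (row source) lines are only written to the trace file when the cursors close.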

  • Optimizer not using index even after giving the hint

    Hi All,
    I am wondering why the optimizer is not using the index in the query below.
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    SQL> select column_expression
      2  from ALL_IND_EXPRESSIONS
      3  where table_name like 'GTXN_DTL_V1'
      4  and index_name = 'IDX_TXN11_V1';
    COLUMN_EXPRESSION
    TO_DATE("BOOKING_DATE",'YYYYMMDD')
    SQL> select num_rows from all_tables
      2  where table_name like 'GTXN_DTL_V1';
      NUM_ROWS
      29020867
    SQL>  explain plan for select * from gtxn_dtl_v1 where to_date(booking_date,'yyyymmdd') = to_date('030109','DDMMRR');
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3140624094
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 55489 |    15M|   215K  (4)| 00:43:08 |
    |*  1 |  TABLE ACCESS FULL| GTXN_DTL_V1 | 55489 |    15M|   215K  (4)| 00:43:08 |
    Predicate Information (identified by operation id):
       1 - filter(TO_DATE("BOOKING_DATE",'yyyymmdd')=TO_DATE('030109','DDMMRR
    14 rows selected.
    --Giving Hint..
    SQL> explain plan for select /*+ index(gtxn_dtl_v1 IDX_TXN11_V1) */ *
      2  from gtxn_dtl_v1
      3  where to_date(booking_date,'yyyymmdd') = to_date('030109','DDMMRR')
      4  /
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3140624094
    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 55489 |    15M|   215K  (4)| 00:43:08 |
    |*  1 |  TABLE ACCESS FULL| GTXN_DTL_V1 | 55489 |    15M|   215K  (4)| 00:43:08 |
    Predicate Information (identified by operation id):
       1 - filter(TO_DATE("BOOKING_DATE",'yyyymmdd')=TO_DATE('030109','DDMMRR
    14 rows selected.
    Please suggest.
    Thanks in advance,
    Jeneesh

    porzer wrote:
    Hi!
    Why are you using TO_DATE on the booking_date column? Is it a VARCHAR2 column? What type is it?
    Because if it's a VARCHAR2 column you could simply use
    select * from gtxn_dtl_v1 where booking_date = '20090103';
    so you wouldn't even need a function-based index.
    On the other hand, if it's a DATE you shouldn't do a TO_DATE either.
    Best regards,
    PP
    That is not the original query used in production. I am investigating the performance of the query below.
    select  txn.account_number,to_number(txn.amount_lcy) txn_amt,to_date(booking_date,'yyyymmdd') TXN_DATE,
          sal.latest_sal,sal.sal_date,customer_name,employer_name,
           decode(COMMUNICATION_TYPE_1,'MOBILE',COMMUNICATION_NO_1,decode(COMMUNICATION_TYPE_2,'MOBILE',COMMUNICATION_NO_2)) mob,
           txn.CURRENCY, CHEQUE_NUMBER,trans_dets,trans_reference,target,teller_id,acc.category,acc.inactive_marker,acc.posting_restrict,cus.sector,cus.industry
    from coreadmin.Gtxn_dtl_v1 txn,
                   (select account_number,round(to_number(nvl(amount_lcy,0)),2) latest_sal,TXN_DATE sal_date,rr
                    from
                      (select to_date(booking_date,'yyyymmdd') TXN_DATE,batch_id,account_number,amount_lcy
                             ,row_number() over (partition by account_number order by to_date(booking_date,'yyyymmdd') desc NULLS LAST,batch_id desc nulls last) rr,
                             CURRENCY, CHEQUE_NUMBER,trans_dets,trans_reference
                        from coreadmin.Gtxn_dtl_v1
                        where transaction_code = '204'
                    and to_number(amount_lcy) > 0)
                        where rr = 1
                     ) sal,customers_live cus,accounts_live acc
    where to_date(booking_date,'yyyymmdd') between to_date('030109','DDMMRR') and to_date('030209','DDMMRR')
    and txn.account_number = sal.account_number
    and txn.CUSTOMER_ID = cus.CUSTOMER_number(+)
    and acc.id = sal.account_number
    and target in ('30','31','32')
    Edited by: jeneesh on Mar 25, 2009 12:38 PM
    Corrected the query.
    The column is of VARCHAR2 type because the table is loaded through SQL*Loader every day from flat files generated by the GLOBUS banking system. The column is kept as VARCHAR2 to minimize loading issues.
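    Since the index here is function-based, one thing worth checking is sketched below (object names are from the thread; that the hidden-column statistics are missing is an assumption about why the index is being ignored):

    ```sql
    -- A function-based index creates a hidden virtual column; without
    -- statistics on it, the optimizer may misjudge the cost of the
    -- index access path and prefer the full scan.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'COREADMIN',
        tabname    => 'GTXN_DTL_V1',
        method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO',
        cascade    => TRUE);
    END;
    /
    ```

    Note also that the predicate text must match the index expression exactly, and even then, at an estimated 55,489 rows out of 29 million, the optimizer may legitimately decide the full scan is cheaper.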

  • Optimizer not using indexes

    DBAs,
    I have a select query which uses an index scan when run in the production database, executing in 20 seconds, but uses a full table scan in the non-production database, taking 48 seconds. I rebuilt indexes and gathered statistics in the non-production database, but it still takes 47 seconds.
    Please advise.

    Here are the details
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS( -
    ownname => 'TCD_PRD_STG', -
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
    method_opt => 'for all columns size AUTO');
    SQL> EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('JOE','EMPLOYEE');
    EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('TCD_PRD_STG',DBMS_STATS.AUTO_SAMPLE_SIZE);
    1)Oracle versions are 10.2.0.2 in both prod & non-prod.
    2)Explain plan of prod. db
    SQL> SELECT ITEM_REFERENCE_ID FROM (SELECT DISTINCT * FROM ITEMS
         WHERE PUBLICATION_ID=20 AND ITEM_TYPE=16 AND ( ( ( SCHEMA_ID=31 ) ) AND
         ( ( (ITEM_REFERENCE_ID IN (SELECT ITEM_REFERENCE_ID FROM
         ( SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount
           FROM ITEM_CATEGORIES_AND_KEYWORDS WHERE KEYWORD IN ('Africa')
           AND CATEGORY = 'Region' AND PUBLICATION_ID=20
           GROUP BY ITEM_REFERENCE_ID) tempselectholder
         WHERE tempkeywordcount=1))
         OR (ITEM_REFERENCE_ID IN (SELECT ITEM_REFERENCE_ID FROM
         ( SELECT ITEM_REFERENCE_ID, COUNT(KEYWORD) AS tempkeywordcount
           FROM ITEM_CATEGORIES_AND_KEYWORDS WHERE KEYWORD IN ('Aig')
           AND CATEGORY = 'Region' AND PUBLICATION_ID=20
           GROUP BY ITEM_REFERENCE_ID) tempselectholder
         WHERE tempkeywordcount=1)) ) ) )
         ORDER BY LAST_PUBLISHED_DATE DESC) WHERE ROWNUM<51;
    no rows selected
    Elapsed: 00:00:21.74
    Execution Plan
    0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=192 Card=50 Bytes=650)
    1   0    COUNT (STOPKEY)
    2   1      VIEW (Cost=192 Card=79 Bytes=1027)
    3   2        SORT (ORDER BY STOPKEY) (Cost=192 Card=79 Bytes=92272)
    4   3          HASH (UNIQUE) (Cost=191 Card=79 Bytes=92272)
    5   4            FILTER
    6   5              TABLE ACCESS (BY INDEX ROWID) OF 'ITEMS' (TABLE) (Cost=190 Card=808 Bytes=943744)
    7   6                INDEX (RANGE SCAN) OF 'IDX_ITEMS_PUB_URL' (INDEX) (Cost=107 Card=17024)
    8   5            FILTER
    9   8              HASH (GROUP BY) (Cost=42 Card=1 Bytes=540)
    10  9                TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=41 Card=1 Bytes=540)
    11  10                 INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (INDEX) (Cost=35 Card=7403)
    12  5            FILTER
    13  12             HASH (GROUP BY) (Cost=3 Card=1 Bytes=540)
    14  13               TABLE ACCESS (BY INDEX ROWID) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=2 Card=1 Bytes=540)
    15  14                 INDEX (RANGE SCAN) OF 'IX_ITEM_KEYWORDS' (INDEX) (Cost=1 Card=50)
    Statistics
    21 recursive calls
    0 db block gets
    4950582 consistent gets
    4060 physical reads
    13100 redo size
    240 bytes sent via SQL*Net to client
    333 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    explain plan of non-prod db
    1* SELECT ITEM_REFERENCE_ID FROM (SELECT DISTINCT * FROM ITEMS WHERE PUBLICATION_ID=20 AND ITEM_T
    SQL> /
    ITEM_REFERENCE_ID
    96672
    96680
    Elapsed: 00:00:47.74
    Execution Plan
    0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=502 Card=50 Bytes=650)
    1   0    COUNT (STOPKEY)
    2   1      VIEW (Cost=502 Card=255 Bytes=3315)
    3   2        SORT (ORDER BY STOPKEY) (Cost=502 Card=255 Bytes=40035)
    4   3          HASH (UNIQUE) (Cost=501 Card=255 Bytes=40035)
    5   4            FILTER
    6   5              TABLE ACCESS (FULL) OF 'ITEMS' (TABLE) (Cost=500 Card=2618 Bytes=411026)
    7   5            FILTER
    8   7              HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    9   8                TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=880 Card=11 Bytes=319)
    10  5            FILTER
    11  10             HASH (GROUP BY) (Cost=881 Card=1 Bytes=29)
    12  11               TABLE ACCESS (FULL) OF 'ITEM_CATEGORIES_AND_KEYWORDS' (TABLE) (Cost=880 Card=1 Bytes=29)
    Statistics
    0 recursive calls
    0 db block gets
    5912606 consistent gets
    0 physical reads
    0 redo size
    387 bytes sent via SQL*Net to client
    435 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    2 rows processed
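    One way to rule statistics out when prod and non-prod disagree like this is to copy the production statistics across. A sketch using DBMS_STATS (schema name from the thread; the staging table name STATTAB is a placeholder):

    ```sql
    -- On production: stage and export the schema statistics.
    EXEC DBMS_STATS.CREATE_STAT_TABLE('TCD_PRD_STG', 'STATTAB');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('TCD_PRD_STG', 'STATTAB');
    -- Move STATTAB to the non-prod database (exp/imp or a db link), then:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('TCD_PRD_STG', 'STATTAB');
    ```

    If the plans still differ with identical statistics, compare optimizer-related parameters (e.g. OPTIMIZER_INDEX_COST_ADJ) and system statistics between the two databases.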

  • Mail does not use correct outgoing server

    Problem
    Mail in Mavericks (and also iOS 6 and 7) does not use the outgoing SMTP server associated with the account being used. This results in either a failure to send or mail being put into the wrong "Sent" folder.
    Background
    I have most of my non-icloud mail pulled down to a home server from which my laptops and ios devices access various accounts using imap. The local home server also provides smtp and relays all mail to my ISP. Recently I have been switching from an old home server (Mac Mini) to a new one (Raspberry pi running the debian Raspbian version with fetchmail,  Dovecot and postfix setup).
    While making the changeover I had access to both the old server and the new server enabled on my laptop. Naturally the majority of the settings (name, email address, etc.) were the same, although the name and address of the server to be used for both incoming and outgoing mail were different. This seemed to work fine for incoming. However, when I tested outgoing mail, the copy that should appear in the "Sent" box went missing. This usually indicates some mistake in the setting of the sent-mail folder on the accounts, but this all checked out.
    I eventually found the missing outgoing mail intended to go through the new server in the sent box on the old server. Further trials showed that no matter what I did the outgoing would always default to the old (and therefore first set up) outgoing server.
    Deleting the old account and the old outgoing server of course should cure the problem and will eventually be the solution for me when I kill the old server. But i wanted to get to the bottom of the problem in case it reappeared in other contexts. Checking the "use only this server" box produced failure to send rather than the correct result.
    Trialing various alternative settings showed that the problem occurs when there are two outgoing servers with the same email address in the settings and the same user name on the outgoing server. I had assumed that changing the Description field would distinguish between the various settings; however, this did not work. Changing the Name field did work, e.g. by putting (O) after my name.
    It appears therefore that Mail selects the outgoing server on the basis of email address (with the full name included) and user name, irrespective of the actual server or the Description that shows in the SMTP server listing. In some ways this is logical, but it produces problems in the context I have described, and would also be a difficulty if you wanted to use an alternative server when in a different location.
    I have trawled support and elsewhere for anything similar. Lots of mail problems (don't get me started on the way iCloud loses outgoing servers each time you edit the list!), but I have not found any posts on precisely this point.
    IOS devices seem to have similar problems, but a quick attempt at a similar solution does not work, and I cannot be bothered  to test the options. I will simply clean them out and put in the new accounts.
    Advice
    Grateful for any comments or advice from people who have encountered similar problems, and whether my diagnosis is correct. Have I missed any obvious corrections that would clear this up? I do not know whether this is a Mavericks issue or whether it also appears in earlier OS X versions.
    CPE

    Peter,
    Wherever the Sent and Trash folders show in the Sidebar, highlight first one and then the other, then click Mailbox in the Menubar, choose Use This Mailbox For, and choose the function.
    Keep us posted on your progress.
    Ernie

  • Oracle not using correct index

    Hi,
    I have a fact table (a big table) and a dimension table representing dates.
    My query is
    select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
    fdat.dim_date_id = dd.dim_date_id
    and dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS');
    and the corresponding plan is:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 76M| 9173M| 709K (1)| 02:21:51 |
    |* 1 | HASH JOIN | | 76M| 9173M| 709K (1)| 02:21:51 |
    |* 2 | INDEX FAST FULL SCAN| UI_DD_DATES_ID | 6951 | 97314 | 8 (0)| 00:00:01 |
    | 3 | TABLE ACCESS FULL | FDAT_BITMAP | 198M| 20G| 708K (1)| 02:21:39 |
    Predicate Information (identified by operation id):
    1 - access("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
    2 - filter("DD"."DIM_DATE">TO_DATE(' 2011-12-20 00:00:00', 'syyyy-mm-dd
    hh24:mi:ss'))
    17 rows selected
    When I change the query to:
    select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
    fdat.dim_date_id = dd.dim_date_id
    and fdat.dim_date_id > 20111220;
    Explain plan changes to:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 390K| 46M| 43958 (1)| 00:08:48 |
    | 1 | MERGE JOIN | | 390K| 46M| 43958 (1)| 00:08:48 |
    | 2 | TABLE ACCESS BY INDEX ROWID | FDAT_BITMAP | 390K| 41M| 43948 (1)| 00:08:48 |
    | 3 | BITMAP CONVERSION TO ROWIDS| | | | | |
    |* 4 | BITMAP INDEX RANGE SCAN | I_FDATB_DIM_DID | | | | |
    |* 5 | SORT JOIN | | 6991 | 97874 | 9 (12)| 00:00:01 |
    |* 6 | INDEX FAST FULL SCAN | UI_DD_DATES_ID | 6991 | 97874 | 8 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("FDAT"."DIM_DATE_ID">20111220)
    filter("FDAT"."DIM_DATE_ID">20111220)
    5 - access("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
    filter("FDAT"."DIM_DATE_ID"="DD"."DIM_DATE_ID")
    6 - filter("DD"."DIM_DATE_ID">20111220)
    22 rows selected
    My question is why does the first query not result in a plan similar to the second one? How can I make it come up with a plan similar to the second one without changing the query?
    Thanks,
    -Rakesh

    user12257218 wrote:
    My query is
    select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
    fdat.dim_date_id = dd.dim_date_id
    and dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS');
    When I change the query to:
    select fdat.*, dd.dim_date from fdat_bitmap fdat, dim_dates dd where
    fdat.dim_date_id = dd.dim_date_id
    and fdat.dim_date_id > 20111220;
    My question is why does the first query not result in a plan similar to the second one? How can I make it come up with a plan similar to the second one without changing the query?
    To a very large extent this is because the two queries are not logically equivalent - unless you have a constraint in place that enforces the rule that:
    dd.dim_date_id > 20111220 if, and only if, dd.dim_date > TO_DATE('2011-12-20 00:00:00' , 'YYYY-MM-DD HH24:MI:SS')
    A constraint like: (dim_date_id = to_number(to_char(dim_date,'yyyymmdd'))) might help - provided both columns also have NOT NULL declarations (or "is not null" constraints), and provided that that's appropriate for the way your application works.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>
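    A sketch of the constraint Jonathan describes (column and table names from the thread; whether the rule actually holds for your data is something you must verify before adding it):

    ```sql
    -- With both columns NOT NULL and this check constraint in place, the
    -- optimizer can treat the two predicates as equivalent.
    ALTER TABLE dim_dates MODIFY (dim_date NOT NULL, dim_date_id NOT NULL);
    ALTER TABLE dim_dates ADD CONSTRAINT dd_date_id_chk
      CHECK (dim_date_id = TO_NUMBER(TO_CHAR(dim_date, 'yyyymmdd')));
    ```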

  • Windows File Share not using Correct NIC Interface.

    Hi
    I have Windows 2012 R2 server with two 1gb NICs.  I want to use NIC1 for the regular network traffic/Internet and the NIC2 for database traffic.  NIC1 has IP Addresses of 64.141.xxx.xxx (I am not showing the last 6 digits for security reasons)
    and 192.168.10.21.  NIC2 has the IP Address of 192.168.11.2.
    The Workstation accessing this server also has two NICS.  NIC1 has the IP address of 64.141.xxx.xxx and 192.168.10.22.  NIC2 has the IP Address of 192.168.11.4.
    I created a share on the server, myDataShare$. On the workstation I access the share using \\192.168.11.2\myDataShare$ and everything works, except that the data from the share is sent across NIC1 and I want it to go over NIC2. If I add static
    routes to both machines it makes no difference; all network traffic always goes out on NIC1 for both machines.
    If I look at the send and receive bytes in the status of each NIC, only NIC1 shows any traffic; NIC2 is silent. How do I get traffic for the 192.168.11.0/24 subnet to use NIC2?
    The strange thing is that if I ping a 192.168.11.0/24 address it goes over NIC2 but file sharing seems to ignore NIC2 altogether.
    Thanks,
    Simon

    Hi Simon,
    Windows Vista and later are based on the strong host model. In the strong host model, the host can only send packets on an interface if the interface is assigned the source
    IP address of the packet being sent. Also, the concept of a primary IP address does not exist.
    If the program specifies a source IP address, that IP address is used as the source IP address for connections sourced from that socket, and the adapter associated with that
    source IP is used as the source interface. The route table is searched, but only for routes that can be reached from that source interface.
    You can try to set the "skipassource" flag; with this flag, newly added addresses are not used for outgoing packets unless explicitly set for use by outgoing packets.
    The detail information about skipassource flag:
    Source IP address selection on a Multi-Homed Windows Computer
    http://blogs.technet.com/b/networking/archive/2009/04/25/source-ip-address-selection-on-a-multi-homed-windows-computer.aspx
    The related KB:
    How multiple adapters on the same network are expected to behave
    https://support.microsoft.com/en-us/kb/175767/en-us
    The related third party article:
    Exchange 2013 on Windows Server 2012 with multiple IP addresses on a single NIC
    http://blog.enowsoftware.com/solutions-engine/bid/185076/Exchange-2013-on-Windows-Server-2012-with-multiple-IP-addresses-on-a-single-NIC
    I’m glad to be of help to you!
    *** This response contains a reference to a third party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control these
    sites and has not tested any software or information found on these sites; therefore, Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. There are inherent dangers in the use
    of any software found on the Internet, and Microsoft cautions you to make sure that you completely understand the risk before retrieving any software from the Internet. ***
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
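    For reference, a sketch of the "skipassource" approach described above (interface name and address taken from the thread; the /24 mask is an assumption - adjust to your configuration):

    ```
    rem Re-add the secondary address so it is skipped as a source address
    rem for outgoing packets unless a program binds to it explicitly.
    netsh interface ipv4 delete address "NIC1" 192.168.10.21
    netsh interface ipv4 add address "NIC1" 192.168.10.21 255.255.255.0 skipassource=true
    ```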

  • Analogue devices not using assigned dial plan

    Hi,
    We are using Tenor gateways for running analogue devices (fax, cordless phones) through Lync 2013. It works fine on our standard dial plan which all end-users are on. However, for a business reason, we need to do restricted dialing on the cordless phones
    (e.g. dial a 5 digit access code to dial out).
    The cordless phones exist in AD (created using new-csanalogdevice in the Lync Mgmt Shell) and I'm using the following command to assign the dial plan:
    get-csanalogdevice "My test phone" | grant-csdialplan -policyname "restrictedCallDialPlan(77777)" -v
    When I run get-csanalogdevice "My test phone" | Select DialPlan I can see I am using the restricted dialling dial plan, but the calls are not restricted - e.g. I can call my mobile phone as normal without having to precede the number with
    77777.
    My first thought was that there was a configuration issue with the dial plan. Checking it in the Lync Control Panel it all looks fine and, more importantly, it works as expected on Common Area Phones (Polycom CX500).
    If I have to, I can do the restriction on the Tenor gateways but it's not a "nice" solution as the config gets messy. Ideally we'd like to do it in Lync so we can manage it all from one place.
    Is anyone familiar with assigning dial plans to analogue extensions in Lync and know of a reason this wouldn't work?
    Many thanks

    Hello,
    I experienced the same behaviour utilising analogue devices within Lync. During my research it appeared that, while you can set a dial plan against an analogue device, it will never take effect and
    the device will only inherit normalisation rules defined in the Global dial plan. Please see the article below, under the "Call Routing For Analogue Devices" section; around halfway down, it explains in detail the grant-csdialplan behaviour
    for an analogue device. In my case, I utilised a feature on the AudioCodes MediaPack devices which we were using for analogue connectivity in order to restrict dialling on a per-port basis.
    http://www.mylynclab.com/2013/04/microsoft-lync-facts-about-fax.html
    Regards.
    http://www.b4z.co.uk

  • Wine - not using correct fonts under Xorg 7.0

    How can I tell Wine to use fonts from the correct location under Xorg 7.0? There's nothing in winecfg that I can see related to it, and no file in /etc/wine to alter...

    Snowman wrote:wine 0.9.5-1 was out today. Did you try it?
    I did indeed, but sadly it had no effect.

  • MOVED: Graphics cards not using correct generation of PCIe

    This topic has been moved to GAMING Motherboards.
    https://forum-en.msi.com/index.php?topic=253322.0

    Quote from: jdthedj69 on 19-February-15, 09:12:38
    Well, see, that's the weird thing. I've read a little about this, and a lot of people have it switch when the render test runs because of power-saving features in the graphics cards, but mine does the opposite: it starts at 3.0 and within 5 to 10 seconds switches down to 1.1.
    If the frequency drops, the card will also drop the PCIe generation; it's a power-saving feature of the card.
    Stress the card with FurMark or a heavier render, then watch the generation version.

  • SQL Query C# Using Execution Plan Cache Without SP

    I have a situation where I am executing an SQL query through C# code. I cannot use a stored procedure because the database is hosted by another company and I'm not allowed to create any new procedures. If I run my query in SQL Management Studio, the first time
    takes approx 3 secs, then every run after that is instant. My query is looking for date ranges and accounts, so if I loop through accounts, each one takes approx 3 secs in my code. If I close the program and run it again, the accounts that originally took 3 secs
    are now instant in my code. So my conclusion was that it is using a cached execution plan. I cannot find how to make execution-plan reuse work for non-stored-procedure code. I have created a SqlCommand object with my query and 3 params. I loop through each
    account keeping the same command object and only changing the 3 params. It seems that each version with the different params is getting cached in the execution plans, so they are now fast for that particular query. My question is how can I get SQL to not do this,
    either by loading the execution plan or by making SQL think that my query uses the same execution plan as the previous one? I have found multiple questions on this that pertain to stored procedures, but nothing I can find for direct-text query code.
    Bob;
     

    I ran the query with different accounts and different dates with instant results AFTER the very first query, which took the expected 3 secs. I changed all 3 fields that I've got code parameters for, and it still remains instant in Management Studio but
    still remains slow in my code. I'm providing a sample of the base query I'm using.
    select i.Field1, i.Field2,
    d.Field3 'Field3',
    ip.Field4 'Field4',
    k.Field5 'Field5'
    from SampleDataTable1 i,
    SampleDataTable2 k,
    SampleDataTable3 ip,
    SampleDataTable4 d
    where i.Field1 = k.Field1 and i.Field4 = ip.Field4
    and i.FieldDate between '<fromdate>' and '<thrudate>'
    and k.Field6 = <Account>
    Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
    My params setup during the init for the connection and the command.
    sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
    sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
    sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
    Each loop through the code changes these 3 fields.
    sqlCommand.Parameters["@FromDate"].Value = dtFrom;
    sqlCommand.Parameters["@ThruDate"].Value = dtThru;
    sqlCommand.Parameters["@Account"].Value = sAccountNumber;
    SqlDataReader reader = sqlCommand.ExecuteReader();
    while (reader.Read())
    {
        // process the row
    }
    reader.Close();
    One thing I have noticed is that the account field is decimal(20,0), and by default the init I'm using defaults to decimal(10), so I'm going to change the init to
    sqlCmd.Parameters["@Account"].Precision = 20;
    sqlCmd.Parameters["@Account"].Scale = 0;
    I don't believe this would change anything, but at this point I'm ready to try anything to get the query running faster.
    Bob;
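    On the decimal(10) vs decimal(20,0) point: matching the server-side type exactly is worth doing, because a mismatched parameter type can force an implicit conversion that both changes the cached plan and can block an index seek. A sketch against the same objects as above (Prepare() is optional, but makes the parameterised-plan reuse explicit):

    ```csharp
    // Declare the parameter with the exact server-side type.
    sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
    sqlCmd.Parameters["@Account"].Precision = 20;
    sqlCmd.Parameters["@Account"].Scale = 0;
    // With all parameter types and sizes set, the command can be prepared
    // once and re-executed with new values, reusing one cached plan.
    sqlCmd.Prepare();
    ```

    Note also that in the sample query the dates and account appear as inline literals ('<fromdate>', <Account>); each distinct literal compiles as a separate statement, so for the parameters to take effect those literals should be replaced with the @FromDate, @ThruDate and @Account placeholders in the query text itself.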

  • Locked table stats on volatile IOT result in suboptimal execution plan

    Hello,
    since upgrading to 10gR2 we are experiencing weird behaviour in execution plans of queries which join tables with a volatile IOT on which we deleted and locked statistics.
    Execution plan of the example query running ok (SYS_IOT... is the volatile IOT):
       0       SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=12 Card=1 Bytes=169)
       1    0    SORT AGGREGATE (Card=1 Bytes=169)
       2    1      NESTED LOOPS OUTER (Cost=12 Card=1 Bytes=169)
       3    2        NESTED LOOPS OUTER (Cost=10 Card=1 Bytes=145)
       4    3          NESTED LOOPS (Cost=6 Card=1 Bytes=121)
       5    4            NESTED LOOPS OUTER (Cost=5 Card=1 Bytes=100)
       6    5              NESTED LOOPS (Cost=5 Card=1 Bytes=96)
       7    6                INDEX FAST FULL SCAN ...SYS_IOT_TOP_76973 (Cost=2 Card=1 Bytes=28)
       8    6                TABLE ACCESS BY INDEX ROWID ...VSUC (Cost=3 Card=1 Bytes=68)
       9    8                  INDEX RANGE SCAN ...VSUC_VORG (Cost=2 Card=1)
    Since 10gR2 the index on the joined table is not used:
       0       SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=857 Card=1 Bytes=179)
       1    0    SORT AGGREGATE (Card=1 Bytes=179)
       2    1      NESTED LOOPS OUTER (Cost=857 Card=1 Bytes=179)
       3    2        NESTED LOOPS OUTER (Cost=855 Card=1 Bytes=155)
       4    3          NESTED LOOPS (Cost=851 Card=1 Bytes=131)
       5    4            NESTED LOOPS OUTER (Cost=850 Card=1 Bytes=110)
       6    5              NESTED LOOPS (Cost=850 Card=1 Bytes=106)
       7    6                TABLE ACCESS FULL ...VSUC (Cost=847 Card=1 Bytes=68)
       8    6                INDEX RANGE SCAN ...SYS_IOT_TOP_76973 (Cost=3 Card=1 Bytes=38)
    I did an UNLOCK_TABLE_STATS and GATHER_TABLE_STATS on the IOT and everything worked fine - the database used the first execution plan.
    Also, setting OPTIMIZER_FEATURES_ENABLE to 10.1.0.4 results in the correct execution plan, whereas 10.2.0.2 (the default on 10gR2) doesn't use the index - so I suppose it's an optimizer problem/bug/whatever.
    I've also tried forcing the index with a hint - it's scanning the index, but the costs are extremely high.
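    For reference, the session-level switch used for that comparison looks like this (a sketch; the value must be a valid release string):

    ```sql
    -- Revert the optimizer to 10.1.0.4 behaviour for this session only:
    ALTER SESSION SET optimizer_features_enable = '10.1.0.4';
    ```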
    Any help would be greatly appreciated,
    regards
    -sd

    sdeng,
    The first thing you should do is to switch to using the dbms_xplan package for generating execution plans. Amongst other things, this will give you the filter and access predicates as they were when Oracle produced the execution plan. It will also report comments like: 'dynamic sampling used for this statement'.
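    For reference, a typical dbms_xplan call looks like this (a sketch; run the statement first, then display the plan of the last cursor executed in the session):

    ```sql
    -- Show the plan of the statement just executed in this session,
    -- including access/filter predicates and notes such as
    -- "dynamic sampling used for this statement":
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
    ```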
    If you have deleted and locked stats on the IOT, then 10gR2 will (by default) be using dynamic sampling on that object - which means (in theory) it gets a better picture of how many rows really are there, and how well they might join to the next table. This may be enough to explain the change in plan.
    What you might try, if the first plan is guaranteed to be good, is to collect stats on the IOT when there is NO data in the IOT, then lock the stats. (Alternatively, fake some stats that say the table is empty if it never is really empty).
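    A minimal sketch of that approach (owner and table names are placeholders, not from the thread):

    ```sql
    -- Gather stats while the IOT is empty, then lock them so later
    -- gathering jobs cannot overwrite them:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_IOT');
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_IOT');
    END;
    /

    -- Or fake "empty" stats directly instead of gathering:
    BEGIN
      DBMS_STATS.SET_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_IOT',
                                 numrows => 0, numblks => 1);
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'SCOTT', tabname => 'MY_IOT');
    END;
    /
    ```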
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Problems with execution plans of OL queries in MGP

    I'm just facing some strange behavior of the OL MGP process. Its performance is really poor on one of our servers, and I just executed Consperf to figure out that the execution plans look really weird. It looks like OL doesn't use the available indexes at all, even though statistics are ok, and when I execute the same SQL manually I can see that the execution plan looks totally different - there are almost no TABLE ACCESS FULL operations. Is there any OL setup property which could cause this strange behavior?
    Consperf explain plan output for one of the snapshots:
    ********** BASE - Publication item query ***********
    SELECT d.VISITID, d.TASKID, d.QTY FROM HEINAPS.PA_TASKS d
    WHERE d.VisitID IN (SELECT h.VisitID FROM HEINAPS.PA_VISITS_H_LIST h WHERE h.DSM = ?)
    | Operation | Name | Rows | Bytes| Cost | Optimizer
    | SELECT STATEMENT | | 1 | 24 | 0 |ALL_ROWS
    | FILTER | | | | |
    | HASH JOIN RIGHT SEMI | | 2M| 61M| 20743 |
    | TABLE ACCESS FULL |PA_VISITS_H_LIST | 230K| 2M| 445 |ANALYZED
    | TABLE ACCESS FULL |PA_TASKS | 11M| 134M| 6522 |ANALYZED
    explain plan result of the same query executed in Pl/SQL Developer:
    UPDATE STATEMENT, GOAL = ALL_ROWS               Cost=3345     Cardinality=39599     Bytes=2969925
    UPDATE     Object owner=MOBILEADMIN     Object name=CMP$JPHSK_PA_TASKS               
    HASH JOIN ANTI               Cost=3345     Cardinality=39599     Bytes=2969925
    TABLE ACCESS BY INDEX ROWID     Object owner=MOBILEADMIN     Object name=CMP$JPHSK_PA_TASKS     Cost=1798     Cardinality=39599     Bytes=910777
    INDEX RANGE SCAN     Object owner=MOBILEADMIN     Object name=CMP$1527381C     Cost=239     Cardinality=49309     
    VIEW     Object owner=SYS     Object name=VW_SQ_1     Cost=1547     Cardinality=29101     Bytes=1513252
    NESTED LOOPS               Cost=1547     Cardinality=29101     Bytes=640222
    INDEX RANGE SCAN     Object owner=HEINAPS     Object name=IDX_PAVISITSHL_DSM_VISITID     Cost=39     Cardinality=1378     Bytes=16536
    INDEX RANGE SCAN     Object owner=HEINAPS     Object name=PK_PA_TASKS     Cost=2     Cardinality=21     Bytes=210
    This query and also few others run in MGP for few minutes for each user, because of the poor execution plan. Is there any method how to force OL to use "standard" execution plans the DB produces to get MGP back to usable performance?

    The problem is that the MGP process does not run the publication item query as such. What it does is wrap it up inside insert and update statements and then execute it via Java, and this is what can cause problems.
    Set the trace to all for MGPCOMPOSE on a user, wait for the MGP cycle, and you will find a series of trace files for the user. Look through these and you should find the actual wrapped-up query that is executed. This should also be in the consperf file. Consperf should give a few different execution stats for the query (ins_1, ins_2); if these are better, then set them in c$consperf. The automatic setting does not always choose the best one.
    If all else fails, try expressing the query in other ways and test them in the MGP process. I have found that this kind of trial and error is the only approach
    A couple of notes about the query below:
    1) do you specifically need to restrict the columns from HEINAPS.PA_TASKS? If not, use SELECT * in the PI select statement, as it tends to bind better
    2) what is the data type of HEINAPS.PA_VISITS_H_LIST.DSM? If numeric, then do a TO_NUMBER() on the bind variable, as the implicit type casting is not very efficient
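    For the second point, the publication item query with an explicit cast might look like this (a sketch based on the query above; it assumes DSM is stored as a NUMBER):

    ```sql
    -- Explicit TO_NUMBER() on the bind avoids an implicit conversion
    -- on every comparison:
    SELECT d.VISITID, d.TASKID, d.QTY
      FROM HEINAPS.PA_TASKS d
     WHERE d.VisitID IN (SELECT h.VisitID
                           FROM HEINAPS.PA_VISITS_H_LIST h
                          WHERE h.DSM = TO_NUMBER(?));
    ```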

  • Avoid execution plan that resolves unused join

    There are two tables Master and LookUp.
    Master references LookUp by its indexed primary key.
    CREATE TABLE "LookUp" (
    ID_LU NUMBER NOT NULL,
    DATA VARCHAR2(100) );
    CREATE UNIQUE INDEX LOOKUP_PK ON "LookUp"(ID_LU);
    ALTER TABLE "LookUp" ADD (
    CONSTRAINT LOOKUP_PK
    PRIMARY KEY (ID_LU)
    USING INDEX );
    CREATE TABLE "Master" (
    ID NUMBER NOT NULL,
    DATA VARCHAR2(100),
    ID_LU NUMBER );
    CREATE UNIQUE INDEX MASTER_PK ON "Master"(ID);
    ALTER TABLE "Master" ADD (
    CONSTRAINT MASTER_PK
    PRIMARY KEY (ID)
    USING INDEX );
    ALTER TABLE "Master" ADD (
    CONSTRAINT FK_MASTER
    FOREIGN KEY (ID_LU)
    REFERENCES "LookUp" (ID_LU));
    Selecting rows from LookUp with LEFT OUTER JOIN Master produces a query execution plan that does not consider Master as it is not used.
    SELECT t1.ID_LU FROM "LookUp" t1
    LEFT OUTER JOIN "Master" t2
    ON t1.ID_LU = t2.ID_LU;
    PLAN_ID     ID     PARENT_ID     DEPTH     OPERATION     OPTIMIZER     OPTIONS     OBJECT_NAME     OBJECT_ALIAS     OBJECT_TYPE
    2     0          0     SELECT STATEMENT     ALL_ROWS                    
    2     1     0     1     TABLE ACCESS          FULL     Master     T1@SEL$2     TABLE
    But selecting rows from Master with LEFT OUTER JOIN LookUp produces an execution plan that is not the mirror image: it involves the LookUp table even though it is not used.
    SELECT t1.ID_LU FROM "Master" t1
    LEFT OUTER JOIN "LookUp" t2
    ON t1.ID_LU = t2.ID_LU;
    PLAN_ID     ID     PARENT_ID     DEPTH     OPERATION     OPTIMIZER     OPTIONS     OBJECT_NAME     OBJECT_ALIAS     OBJECT_TYPE
    1     0          0     SELECT STATEMENT     ALL_ROWS                    
    1     1     0     1     HASH JOIN          OUTER               
    1     2     1     2     INDEX          FAST FULL SCAN     LOOKUP_PK     T1@SEL$2     INDEX (UNIQUE)
    1     3     1     2     TABLE ACCESS          FULL     Master     T2@SEL$1     TABLE
    For example, SQL Server 2005 does not make a distinction between the two query execution plans.
    I would like to know why the SQL optimizer behaves this way and especially if there is a hint or an option that helps the optimizer avoid involving unused join tables.
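    One way to check whether a given version eliminates the unused table (a generic sketch using the tables above):

    ```sql
    EXPLAIN PLAN FOR
    SELECT t1.ID_LU FROM "Master" t1
    LEFT OUTER JOIN "LookUp" t2
    ON t1.ID_LU = t2.ID_LU;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- If join elimination kicks in, "LookUp" no longer appears in the plan.
    ```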

    Actually, something does not add up. Left outer join selects all rows in left table even if there is no matching row in right table. Left table in first query is Lookup table. So I can not understand how execution plan:
    SELECT t1.ID_LU FROM "LookUp" t1
    LEFT OUTER JOIN "Master" t2
    ON t1.ID_LU = t2.ID_LU;
    PLAN_ID ID PARENT_ID DEPTH OPERATION OPTIMIZER OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_TYPE
    2 0 0 SELECT STATEMENT ALL_ROWS
    2 1 0 1 TABLE ACCESS FULL Master T1@SEL$2 TABLE
    bypasses the Lookup table. On my 10.2.0.4.0 I get:
    SQL> SELECT t1.ID_LU FROM "LookUp" t1
      2  LEFT OUTER JOIN "Master" t2
      3  ON t1.ID_LU = t2.ID_LU;
    no rows selected
    Execution Plan
    Plan hash value: 3482147238
    | Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |        |     1 |    26 |     5  (20)| 00:00:01 |
    |*  1 |  HASH JOIN OUTER   |        |     1 |    26 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| LookUp |     1 |    13 |     2   (0)| 00:00:01 |
    |   3 |   TABLE ACCESS FULL| Master |     1 |    13 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("T1"."ID_LU"="T2"."ID_LU"(+))
    Note
       - dynamic sampling used for this statement
    Statistics
            209  recursive calls
              0  db block gets
             48  consistent gets
              0  physical reads
              0  redo size
            274  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              6  sorts (memory)
              0  sorts (disk)
              0  rows processed
    I do question this plan. I would expect a FULL INDEX SCAN of the LOOKUP_PK index. And for the second query I get the same plan as the OP:
    SQL> SELECT t1.ID_LU FROM "Master" t1
      2  LEFT OUTER JOIN "LookUp" t2
      3  ON t1.ID_LU = t2.ID_LU;
    no rows selected
    Execution Plan
    Plan hash value: 3856835961
    | Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |           |     1 |    26 |     2   (0)| 00:00:01 |
    |   1 |  NESTED LOOPS OUTER|           |     1 |    26 |     2   (0)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| Master    |     1 |    13 |     2   (0)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| LOOKUP_PK |     1 |    13 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("T1"."ID_LU"="T2"."ID_LU"(+))
    Note
       - dynamic sampling used for this statement
    Statistics
              1  recursive calls
              0  db block gets
              7  consistent gets
              0  physical reads
              0  redo size
            274  bytes sent via SQL*Net to client
            385  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    SY.
