Similar query, different behaviour

Hi all,
I have a table with two indexes, on 10g R2 on Windows 2003:
- one is the composite primary key index STOW.PK_SM350_TRANSACTION_AUDIT on columns (SM300_TRANSACTIONID, SM350_TRANSACTIONAUDITID)
- one is the single-column index STOW.SM350_IDX1 on column (SM300_TRANSACTIONID)
The first query is
select count(*) from stow.sm350_transaction_audit where sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';
The second query is
select * from stow.sm350_transaction_audit where sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';
The WHERE clause is the same in both queries, and both queries return zero rows.
The problem is that the first query runs with an index range scan, while the second query runs with a full table scan. The index being used is the single-column non-unique index (and the column still has no nulls).
When I hint the second query to force it to use the single-column index, it runs fast, as expected.
Index and table statistics are up to date (I tried gathering at 10% and at 100% for the index, but the result was the same).
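For reference, a hedged sketch of the hinted query and the statistics gathering that was tried; the hint text and the DBMS_STATS parameters are illustrative assumptions, not copied from the original session.

-- Hedged sketch: force the single-column index on the second query
select /*+ index(a sm350_idx1) */ *
  from stow.sm350_transaction_audit a
 where sm300_transactionid = '9B96428447C64BB682F2F004777F42B815933';

-- Hedged sketch: regather table and index statistics (parameter values are assumptions)
begin
  dbms_stats.gather_table_stats(
    ownname          => 'STOW',
    tabname          => 'SM350_TRANSACTION_AUDIT',
    estimate_percent => 100,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
end;
/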
The 10053 trace output for the first query is below
BASE STATISTICAL INFORMATION
Table Stats::
Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
#Rows: 24600584 #Blks: 699502 AvgRowLen: 185.00
Index Stats::
Index: PK_SM350_TRANSACTION_AUDIT Col#: 1 2
LVLS: 3 #LB: 180135 #DK: 25777456 LB/K: 1.00 DB/K: 1.00 CLUF: 1800277.00
Index: SM350_IDX1 Col#: 1
LVLS: 3 #LB: 157875 #DK: 3779 LB/K: 41.00 DB/K: 194.00 CLUF: 733950.00
SINGLE TABLE ACCESS PATH
Column (#1): SM300_TRANSACTIONID(VARCHAR2)
AvgLen: 34.00 NDV: 7754 Nulls: 0 Density: 1.8727e-004
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 207
Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
Card: Original: 24600584 Rounded: 4648929 Computed: 4648929.26 Non Adjusted: 4648929.26
Access Path: TableScan
Cost: 153817.92 Resp: 153817.92 Degree: 0
Cost_io: 153018.00 Cost_cpu: 9901578323
Resp_io: 153018.00 Resp_cpu: 9901578323
Access Path: index (index (FFS))
Index: PK_SM350_TRANSACTION_AUDIT
resc_io: 39406.00 resc_cpu: 5664988114
ix_sel: 0.0000e+000 ix_sel_with_filters: 1
Access Path: index (FFS)
Cost: 39863.66 Resp: 39863.66 Degree: 1
Cost_io: 39406.00 Cost_cpu: 5664988114
Resp_io: 39406.00 Resp_cpu: 5664988114
Access Path: index (index (FFS))
Index: SM350_IDX1
resc_io: 34537.00 resc_cpu: 5395455540
ix_sel: 0.0000e+000 ix_sel_with_filters: 1
Access Path: index (FFS)
Cost: 34972.89 Resp: 34972.89 Degree: 1
Cost_io: 34537.00 Cost_cpu: 5395455540
Resp_io: 34537.00 Resp_cpu: 5395455540
Access Path: index (skip-scan)
SS sel: 0.18898 ANDV (#skips): 4871330
SS io: 4871330.27 vs. index scan io: 34042.00
Skip Scan rejected
Access Path: index (IndexOnly)
Index: PK_SM350_TRANSACTION_AUDIT
resc_io: 34045.00 resc_cpu: 1216715625
ix_sel: 0.18898 ix_sel_with_filters: 0.18898
Cost: 34143.30 Resp: 34143.30 Degree: 1
Access Path: index (AllEqRange)
Index: SM350_IDX1
resc_io: 29838.00 resc_cpu: 1162075527
ix_sel: 0.18898 ix_sel_with_filters: 0.18898
Cost: 29931.88 Resp: 29931.88 Degree: 1
Best:: AccessPath: IndexRange Index: SM350_IDX1
Cost: 29931.88 Degree: 1 Resp: 29931.88 Card: 4648929.26 Bytes: 0
The 10053 trace output for the second query is below
BASE STATISTICAL INFORMATION
Table Stats::
Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
#Rows: 24600584 #Blks: 699502 AvgRowLen: 185.00
Index Stats::
Index: PK_SM350_TRANSACTION_AUDIT Col#: 1 2
LVLS: 3 #LB: 180135 #DK: 25777456 LB/K: 1.00 DB/K: 1.00 CLUF: 1800277.00
Index: SM350_IDX1 Col#: 1
LVLS: 3 #LB: 157875 #DK: 3779 LB/K: 41.00 DB/K: 194.00 CLUF: 733950.00
SINGLE TABLE ACCESS PATH
Column (#1): SM300_TRANSACTIONID(VARCHAR2)
AvgLen: 34.00 NDV: 7754 Nulls: 0 Density: 1.8727e-004
Histogram: HtBal #Bkts: 254 UncompBkts: 254 EndPtVals: 207
Table: SM350_TRANSACTION_AUDIT Alias: SM350_TRANSACTION_AUDIT
Card: Original: 24600584 Rounded: 4648929 Computed: 4648929.26 Non Adjusted: 4648929.26
Access Path: TableScan
Cost: 153975.66 Resp: 153975.66 Degree: 0
Cost_io: 153018.00 Cost_cpu: 11854128503
Resp_io: 153018.00 Resp_cpu: 11854128503
Access Path: index (skip-scan)
SS sel: 0.18898 ANDV (#skips): 4871330
SS io: 4871330.27 vs. index scan io: 34042.00
Skip Scan rejected
Access Path: index (RangeScan)
Index: PK_SM350_TRANSACTION_AUDIT
resc_io: 374255.00 resc_cpu: 6416159397
ix_sel: 0.18898 ix_sel_with_filters: 0.18898
Cost: 374773.35 Resp: 374773.35 Degree: 1
Access Path: index (AllEqRange)
Index: SM350_IDX1
resc_io: 168538.00 resc_cpu: 4856139355
ix_sel: 0.18898 ix_sel_with_filters: 0.18898
Cost: 168930.32 Resp: 168930.32 Degree: 1
Best:: AccessPath: TableScan
Cost: 153975.66 Degree: 1 Resp: 153975.66 Card: 4648929.26 Bytes: 0
Any idea about the wrong cost calculation for the second query? Or, if I am thinking about this the wrong way, can anyone explain what is actually happening?

Thank you for your comments, Steven.
You are right. I do have the ability to know the data (I am a DBA, not a developer :) ).
These are the min and max values, and the searched value, for sm300_transactionid:
min= 00020978E13B45AEA8556D8AF431CD15
max= FFF617D95A2D4B34AB085FED512EB9E7
whr= 9B96428447C64BB682F2F004777F42B815933
Do you think this can cause the problem? The CBO can see that this value lies outside the range recorded in the column statistics, so it chooses the full table scan to look for it.
And if this is the problem, can I make the assumption below?
My index is on a NOT NULL column, and I search that column for a value outside the recorded min/max range. If the CBO chooses the full table scan for a value that is not in the table, can I say that in these cases the CBO considers the table more reliable than the index?
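A hedged diagnostic sketch (not from the original post) to confirm the out-of-range theory: decode the LOW_VALUE/HIGH_VALUE recorded for the column and compare them with the searched literal. UTL_RAW.CAST_TO_VARCHAR2 is usable here because the column is VARCHAR2.

-- Hedged sketch: what does the optimizer believe the column's value range is?
select column_name,
       num_distinct,
       density,
       num_buckets,
       utl_raw.cast_to_varchar2(low_value)  as low_val,
       utl_raw.cast_to_varchar2(high_value) as high_val
  from dba_tab_col_statistics
 where owner       = 'STOW'
   and table_name  = 'SM350_TRANSACTION_AUDIT'
   and column_name = 'SM300_TRANSACTIONID';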

Similar Messages

  • SQL Server Management Studio and Native Client different behaviour on delete

I have a problem with a transaction containing an insert and a delete on the same table, plus some select/insert/update statements on other tables. The problematic table has a primary key defined as the combination of column1 and column2.
When two different instances using the Native Client execute this code simultaneously and insert into the table, the delete part of the code causes a deadlock. However, this doesn't happen when trying the same situation from an MS SQL Server Management Studio query window.
Is there some option missing from the Native Client connection string that could cause this different behaviour?

    Hello,
I don't think there is a difference in the behaviour. SSMS uses ADO.NET, and that provider is based on the Native Client.
The difference is more likely in when the transaction is committed and the locks are released. I guess your application keeps the transaction open (much) longer; you should commit a transaction as soon as possible to avoid long-held locks and thus deadlocks.
    Olaf Helper
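A minimal T-SQL sketch of the advice above (table, column and variable names are hypothetical): keep the insert and delete inside one short, explicitly committed transaction so the locks are released as quickly as possible.

-- Hypothetical sketch: commit as soon as the work on the problem table is done
DECLARE @c1 int = 1, @c2 int = 2, @payload varchar(50) = 'example';

BEGIN TRANSACTION;

    INSERT INTO dbo.ProblemTable (column1, column2, payload)
    VALUES (@c1, @c2, @payload);

    DELETE FROM dbo.ProblemTable
    WHERE column1 = @c1 AND column2 = @c2;

COMMIT TRANSACTION;  -- keep unrelated selects/updates outside this transaction where possible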

  • In-different behaviour of join condition

My question is about the inconsistent behaviour of a join condition between a VARCHAR2 column and a NUMBER column. I am using the following join condition:
CM.UPDATED_BY=to_char(LM.LOGIN_ID)
where CM.UPDATED_BY is a VARCHAR2 column and LM.LOGIN_ID is a NUMBER column. Now, CM.UPDATED_BY also contains only numbers, but some previous and old data is character data. For that reason I put the to_char around LM.LOGIN_ID; otherwise, only
CM.UPDATED_BY=to_char(LM.LOGIN_ID)
would have been okay. Now, my real question is that the query with the condition
CM.UPDATED_BY=to_char(LM.LOGIN_ID)
works fine as long as there is no character data in CM.UPDATED_BY. If I put the condition
CM.UPDATED_BY=to_char(LM.LOGIN_ID)
then the query takes too long and no output comes back. Please help resolve my doubt, as I need this urgently.

    1) Did you intend for all 4 join conditions to be identical? From the text of your question, it sounds like some of the conditions should be different.
2) How do you compare the running time when CM.UPDATED_BY has non-numeric data and when it has numeric data? If you are altering the data in the table, are you reanalyzing the table between runs? Is there a difference in the query plan between the two cases?
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC
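A hedged sketch of the datatype issue under discussion (table names are made up): keeping TO_CHAR on the NUMBER side avoids the implicit TO_NUMBER on CM.UPDATED_BY that fails, or changes the plan, once character data appears in that column.

-- Hypothetical tables standing in for CM and LM
select cm.updated_by, lm.login_id
  from cm_table cm,
       lm_table lm
 where cm.updated_by = to_char(lm.login_id);   -- conversion stays on the NUMBER side

-- Written without TO_CHAR, Oracle would implicitly rewrite the join as
--   where to_number(cm.updated_by) = lm.login_id
-- which raises ORA-01722 (or disables an index on UPDATED_BY) as soon as
-- non-numeric data exists in CM.UPDATED_BY.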

  • Different behaviour of XMLType storage

    hi,
I have a problem with the different behaviour between storage type "BINARY XML" and the regular storage (a simple CLOB, I guess) of the XMLType datatype.
    Setup
    - Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    - XML file ( "Receipt.xml" ) with a structure like :
    <?xml version="1.0" encoding="UTF-8"?>
    <ESBReceiptMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <ESBMessageHeader>
              <MsgSeqNumber>4713</MsgSeqNumber>
              <MessageType>Receipt</MessageType>
              <MessageVersion>1.1</MessageVersion>
         </ESBMessageHeader>
         <Receipt>
              <ReceiptKey>1234567-03</ReceiptKey>          
              <ReceiptLines>
                   <ReceiptLine><Material><MaterialKey>00011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>00021-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>00031-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
    .....etc....etc.....etc...
                   <ReceiptLine><Material><MaterialKey>09991-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>10001-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>10011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
              </ReceiptLines>
         </Receipt>
    </ESBReceiptMessage>=> 1 Header element : "Receipt" and exactly 1001 "ReceiptLine" elements.
    Problem:
    Test 1 :
    drop table xml_ddb;
    CREATE TABLE xml_ddb (id number,xml_doc XMLType);
    INSERT INTO xml_ddb (id, xml_doc)  VALUES (4716,XMLType(bfilename('XMLDIR', 'Receipt.xml'),nls_charset_id('AL32UTF8')));
    select count(1) from (
    SELECT dd.id,ta.Receiptkey,li.materialkey,li.qty
       FROM xml_ddb dd,
            XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                     COLUMNS ReceiptKey VARCHAR2(28) PATH 'ReceiptKey',
                             ReceiptLine XMLType PATH 'ReceiptLines/ReceiptLine') ta,
            XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                     COLUMNS materialkey VARCHAR2(14)  PATH 'Material/MaterialKey',
                             qty         NUMBER(10)    PATH 'Qty') li
);

  COUNT(1)
      1001

1 row selected.

The storage of the XMLType column has not been specified.
=> All 1001 detailed rows are selected.
=> Everything is fine.
    Test 2 :
    drop table xml_ddb;
    CREATE TABLE xml_ddb (id number,xml_doc XMLType) XMLType xml_doc store AS BINARY XML; -- <---- Different storage type
    INSERT INTO xml_ddb (id, xml_doc)  VALUES (4716,XMLType(bfilename('XMLDIR', 'Receipt.xml'),nls_charset_id('AL32UTF8')));
    select count(1) from (
    SELECT dd.id,ta.Receiptkey,li.materialkey,li.qty
       FROM xml_ddb dd,
            XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                     COLUMNS ReceiptKey VARCHAR2(28) PATH 'ReceiptKey',
                             ReceiptLine XMLType PATH 'ReceiptLines/ReceiptLine') ta,
            XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                     COLUMNS materialkey VARCHAR2(14)  PATH 'Material/MaterialKey',
                             qty         NUMBER(10)    PATH 'Qty') li
);

  COUNT(1)
      1000

1 row selected.

Storage of the XMLType column has been defined as "BINARY XML".
=> Only 1000 rows are selected.
=> One row is missing.
After some tests: there seems to be a hard limit of 1000 rows that comes with this storage type (so if you put 2000 rows into the XML, you still get only 1000 rows back).
    Question
    As I am a newbie in XMLDB :
    - Is the "construction" with the nested tables in the select-statement maybe not recommended/"allowed" ?
    - Are there different ways to get back "Head" + "Line" elements in a relational structure ( even if there are more than 1000 lines) ?
    Thanks in advance
    Bye
    Stefan

    hi,
    General
You are right, I have a predefined XSD structure, and now I am trying to find a way to handle this in Oracle (up to now we have been doing the XML handling in Java, with JAXB, outside the DB).
=> So I will take a look at the "object-relational" storage. Thanks for that hint.
Current thread
The question of whether there is an artificial limit of 1000 rows when joining two XMLTables together is still open...
(although it might not be interesting for me anymore :-), maybe somebody else will need the answer...)
    Bye
    Stefan
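A hedged follow-up sketch (not suggested in the thread itself): one workaround sometimes worth testing is to drive a single XMLTable from the ReceiptLine level and pull the header value via a parent-axis path, so no XMLType fragment has to be passed between two XMLTables. Whether the parent axis behaves and performs well under binary XML storage is an assumption to verify.

-- Hypothetical rewrite to test against the 1000-row behaviour
select dd.id, li.receiptkey, li.materialkey, li.qty
  from xml_ddb dd,
       XMLTable('/ESBReceiptMessage/Receipt/ReceiptLines/ReceiptLine'
                PASSING dd.xml_doc
                COLUMNS receiptkey  VARCHAR2(28) PATH '../../ReceiptKey',
                        materialkey VARCHAR2(14) PATH 'Material/MaterialKey',
                        qty         NUMBER(10)   PATH 'Qty') li;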

  • Different behaviour between 1.4,1.5_05, 1.5_07.

    Hi and thanks in advance for your help,
    Looking for pointers to a solution for a problem I have.
A set of Java classes subscribes to a subscription service and listens for updates on a network. To do this the JVM uses JNI (actually a JIntegra vendor product) to talk to Windows DLL files on the server; it's these files that actually listen for the updates. The information is then passed back up to the JVM (via the JIntegra libraries). The strange behaviour I get is as follows:
1.     No problems with JDK 1.4.*
2.     JDK 1.5.0_05: the program runs fine for a few minutes and then crashes out, BUT with no stack trace or errors at all (it just terminates).
3.     JDK 1.5.0_07: the program runs OK to start with but then consumes all the file handles on the Windows server. After approx 4 hrs the program hangs, as it has consumed all the server's spare file handles (in fact 3,800,000 of them).
The added problem in debugging this is that we use a vendor product for the Java/COM+ interaction and we don't have the source code. But I have tried different versions of JIntegra without any change in the behaviour (so I don't think it's that). It appears to be the JVM version that causes the change in behaviour.
I do not understand why I get such different behaviour from the different versions of Java, although neither JDK 1.5 version works.
I was wondering if someone could point me in the right direction for getting JVM information output when JDK 1.5.0_05 crashes out.
How can I put a hook in my code to output information when the JVM exits? I realise it is exiting unexpectedly, so this type of solution might not work, but I find it strange that no error is thrown.
Also, does anyone know of any bugs that might explain this behaviour?
Any ideas, anyone?

JIntegra recommend using 1.5. I will check the JNI bug database to see if anything like this has happened before.
Got this stack trace recently:
    Got this one stacktrace recently:
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # Internal Error (4D555445583F57494E13120E4350500080), pid=1944, tid=1500
    # Java VM: Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode, sharing)
    # Can not save log file, dump to screen..
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # Internal Error (4D555445583F57494E13120E4350500080), pid=1944, tid=1500
    # Java VM: Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode, sharing)
    --------------- T H R E A D ---------------
    Current thread is native thread
    Stack: [0x045e0000,0x04620000), sp=0x0461fb08, free space=254k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [jvm.dll+0x11f2eb]
    V [jvm.dll+0x62f13]
    V [jvm.dll+0xd1741]
    V [jvm.dll+0xd18f0]
    V [jvm.dll+0x90d16]
    --------------- P R O C E S S ---------------
    Java Threads: ( => current thread )
    0x03029730 JavaThread "J-Integra COM initialization thread (please don't touch)" daemon [_thread_blocked, id=1684]
    0x00237c28 JavaThread "DestroyJavaVM" [_thread_blocked, id=3112]
    0x009fc7c8 JavaThread "Thread-0" [_thread_blocked, id=2996]
    0x009a16a8 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=2860]
    0x00238478 JavaThread "CompilerThread0" daemon [_thread_blocked, id=1464]
    0x0099f730 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=3168]
    0x009998e0 JavaThread "Finalizer" daemon [_thread_blocked, id=2928]
    0x0023fae0 JavaThread "Reference Handler" daemon [_thread_blocked, id=2268]
    Other Threads:
    0x009982d8 VMThread [id=2036]
    0x009a2ce8 WatcherThread [id=532]
    VM state:not at safepoint (normal execution)
    VM Mutex/Monitor currently owned by a thread: None
    Heap
    def new generation total 576K, used 244K [0x22ab0000, 0x22b50000, 0x22f90000)
    eden space 512K, 47% used [0x22ab0000, 0x22aec330, 0x22b30000)
    from space 64K, 6% used [0x22b40000, 0x22b410a8, 0x22b50000)
    to space 64K, 0% used [0x22b30000, 0x22b30000, 0x22b40000)
    tenured generation total 1408K, used 917K [0x22f90000, 0x230f0000, 0x26ab0000)
    the space 1408K, 65% used [0x22f90000, 0x23075430, 0x23075600, 0x230f0000)
    compacting perm gen total 8192K, used 1354K [0x26ab0000, 0x272b0000, 0x2aab0000)
    the space 8192K, 16% used [0x26ab0000, 0x26c02a20, 0x26c02c00, 0x272b0000)
    ro space 8192K, 67% used [0x2aab0000, 0x2b00d9f8, 0x2b00da00, 0x2b2b0000)
    rw space 12288K, 46% used [0x2b2b0000, 0x2b853808, 0x2b853a00, 0x2beb0000)
    Not sure what 'Internal Error (4D555445583F57494E13120E4350500080)' represents.
    Thanks for your advice.

  • Different behaviour of Flash content when on server

    Hi
I have noticed different behaviour of my Flash movie in two cases:
a/ I am checking the offline content (SWF) using the default view (Show All)
b/ I am checking the SWF in an HTML file on a server.
More concretely: depending on a number of conditions I attach a movieclip to a certain object. This works fine in offline mode.
Another example: depending on a certain zoom level of the application, I unload some SWFs.
All works fine offline.
When I check this online, the attach-movieclip function only works in some cases, and the unloading of the SWFs does not work.
What can be the cause?
Best regards
    eG

    Not sure about most of your questions -- this is all new stuff for me, too. But on this one item, I hope this helps:
    For example it tells me that a plugin (which has already been installed) needs to be installed.If you installed the Mozilla browser after initially installing the java plugin in IE, then the plugin needs to be installed within the Mozilla browser's plugins directory. And I have never been able to get Netscape or Firefox to install a Forms plugin properly. It seems like I need to run IE to get the plugin installed automatically. Then I go back to the other browser and all is ok. Looking into the Mozilla-based browser's plugins folders, I can see that running the plugin install through IE also copies the corresponding .dll file into all the Mozilla-based browsers' plugins folders.
    But since you have already installed the plugin in IE, I am not sure how you would get it to work for the other browser. But if you can identify the .dll required, just copy it yourself into your Mozilla browser's folder.

  • Same query different timings , different sessions at the same time

I am running exactly the same query from 2 different sessions almost simultaneously, and in one session it takes 2 seconds while in the other it takes 20 seconds. The explain plans in both sessions (via set autotrace on) are exactly the same. The timing is almost the same for successive runs of the query in the same session: when I run the query again in the "slow" session it is always around 20 seconds, and when I run it again in the "fast" session it is always fast. The queries are being run within a few seconds of each other, so the load on the database is almost the same.
My hunch is that a database parameter needs to be changed to solve this problem. Can someone guide me on which parameters I should ask our DBAs to adjust? Our database is Oracle 10g.
    Regards
    Amitabha
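A hedged diagnostic sketch (assuming access to the V$ views): tag the query with a marker comment, run it in each session, and then compare what each session actually executed; a different child cursor, plan hash value, or buffer-get count would explain the gap better than any parameter change.

-- Run the query in each session with an identifying comment, e.g.
--   select /* session_compare */ ... ;
-- then compare the cursors that were produced:
select sql_id,
       child_number,
       plan_hash_value,
       executions,
       buffer_gets,
       disk_reads,
       elapsed_time
  from v$sql
 where sql_text like '%session_compare%'
   and sql_text not like '%v$sql%';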

    Duplicate thread
    Same query different timings , different sessions at the same time
    Gints Plivna
    http://www.gplivna.eu

  • Different behaviour of j_security_check between 6.1 and 8.1

    Hi,
    we have a web-application that is secured via security-constraints in
    the web.xml. We are authenticating using FORM-based authentification and
    had this app deployed on a wls6.1. Now we have moved to 8.1 and have a
    different behaviour between 6.1 and 8.1 for log-in forms using the
    j_security_check. We have login-boxes on most pages:
    <form method="POST" action="j_security_check">
                                  Username<br><input type="text" name="j_username" size=11
    class="texteingabe"><br>
                                  Password<br><input type="password" name="j_password" size=8
    class="texteingabe"> <input type=image
    src="/dvrWebApp/htdocs/images/pfeil_rechts_hi.gif" border=0><br>
    </form>
which on 6.1 redirected after the login to the page the form was placed on; on 8.1 the login redirects to the root of the web app, regardless of the page the form is placed on. Can I change this back to the old 6.1 behaviour?
    cheers
    stf

This sounds like bollocks; I was doing some form-based security recently under 8.1 and there was no attempt to direct to any page I was not trying to access.
The idea is that you try to access a protected resource, you're not authorized, so you are authenticated, and then you continue on to that resource.
Are you sure that the resource you're accessing does not direct you there because of some built-in business logic in your JSP/servlet?

  • Different behaviour inside a Thread

    Hello ,
I am developing software to help automate certain tasks related to a voice switch; I connect to the switch using SSH. I generally send commands using a button that prints the content of a text box into the output stream. Everything works fine as long as I print the command to the output stream from the code of the button-press event, but this hangs the interface because I have to wait for a specific output format. So I used a thread to execute the same code so that the interface doesn't hang. The strange thing is that after the thread finishes I am not able to receive anything from the switch (although printing into the output stream doesn't produce any exceptions, and this is not the case if I execute the same code without the thread). The question is: why does the program behave differently inside a thread and outside it?
    Best Regards ,

    Sounds like this actually is more related to Swing than concurrency (if I'm correct). I think that everything is working, but you are failing to update the UI with the result. You are only allowed to update the UI from the AWT thread, so you probably need to publish the result using invokeLater or use a SwingWorker.

  • Getting different behaviour when query hits a single row?

    Hi
I have a page where people type in search conditions, and when they see the results (an ordinary tabular report) they can click a certain row to assign the key of that row to a hidden field in my page.
    There's a requirement that if they're clever enough to hit a single matching row, then the key of that row is assigned to my hidden item without them having to go through the stress and hard work related to clicking a link :-)
    So I'll probably make a process that tests to see if their search returns a single row and only do the report if not. But that means performing the resulting SQL twice when there's more than one row.
    Is there a way to have the number of rows returned by the SQL source for a report affect the behaviour of the page? AFAIK the #TOTAL_ROWS# etc. are only applicable in the header & footer of the report region, so I guess there's no way to do what I imagined?
    Jakob

    You could query your rows into a collection and determine the number of records in the collection. Check out the HTMLDB_COLLECTION (APEX_COLLECTION) API section in the HTML DB (APEX) documentation.
    Your multiple record region where they select their row could then be based on:
    SELECT * FROM HTMLDB_COLLECTION WHERE collection_name = 'my collection'
    Mike
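A hedged sketch of the suggestion above using the APEX_COLLECTION API (the collection name and query are made-up placeholders): build the collection once from the search SQL, then branch on the member count and drive the report region from the collection.

-- Hypothetical example: populate a collection from the search query once
begin
  apex_collection.create_collection_from_query(
    p_collection_name => 'SEARCH_RESULTS',
    p_query           => 'select empno from emp where ename like ''S%''');
end;
/

-- If exactly one member exists, assign its key to the hidden item;
-- otherwise base the report region on the collection:
select c001 as pk_value
  from apex_collections
 where collection_name = 'SEARCH_RESULTS';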

  • SQL Server 2012 Physical vs. Hyper-V Same Query Different Results

    I have a database that is on physical hardware (16 CPU's, 32GB Ram).
    I have a copy of the database that was attached to a virtual Hyper-V server (16 CPU's, 32GB Ram).
Both servers and SQL Servers are identical: OS = Windows Server 2008 R2 Standard, SQL Server 2012 Standard, same patch level SP1 CU8.
The same query run on both servers returns the same data set, but the time is very different: 26 seconds on physical, 5 minutes on virtual.
Statistics are identical on both databases, and the query execution plan is identical for both queries.
    Indices are identical on both databases.
    When I use set statistics IO, I get different results between the two servers.
One table in particular (366k rows) shows 15,400 logical reads on physical, but on Hyper-V it reports 418,000,000 logical reads (four hundred eighteen million) for the same table.
    When the query is run on the physical it uses no CPU, when run on the Hyper-V it takes 100% of all 16 processors.
    I have experimented with Maxdop and it does exactly what it should by limiting processors but it doesn't fix the issue.

    A massive difference in logical reads usually hints at differences in the query plan.
    When you compare query plans, it is essential that you look at actual query plans.
Please note that if your server / Hyper-V supports parallelism (which is almost always the case nowadays), then you are likely to have two query plans: a parallel and a serial one. Of course the actual query plan will make clear which one is used in which case.
To say this again, this is by far the most likely reason for your problem.
There are other (unlikely) reasons that could be the case here:
- runaway parallel threads or other bugs in the optimizer or engine; make sure you have installed the latest service pack
- maybe the slow server (Hyper-V) has extreme fragmentation in the relevant tables
As mentioned by Erland, you have much more information about the query and query plan than we do. You already know whether or not parallelism is used, how many threads are being used, and whether you have no, one, or several Loop Joins in the query (my bet is on at least one, possibly more), etc.
    With the limited information you can share (or choose to share), involving PSS is probably your best course of action.
    Gert-Jan
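A minimal T-SQL sketch of the comparison the reply recommends: capture the actual execution plan and the per-table IO on both servers for the same query, then diff the plans (operators, parallelism, estimated vs. actual rows).

-- Run on both the physical and the Hyper-V server around the problem query
SET STATISTICS IO ON;
SET STATISTICS XML ON;    -- returns the actual execution plan alongside the results

-- <run the problem query here>

SET STATISTICS XML OFF;
SET STATISTICS IO OFF;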

  • Execution time of sql query differing a lot between two computer

    hi
the execution time of a query on my computer and on more than 30 other computers is less than one second, but on one of our customers' computers the execution time is more than ten minutes. The databases, data and queries are the same. I re-installed SQL Server but the problem remains. My SQL Server is MS SQL 2008 R2.
Does anyone have any idea about this problem?

    Hi mahdi,
Obviously, we can't get enough information to help you troubleshoot this issue, so please describe your issue in more detail so that the community members can help you more efficiently.
In addition, here is a good article with a checklist for analyzing slow-running queries. Please see:
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    And SQL Server Profiler and Performance Monitor are good tools to troubleshoot performance issue, please see:
    Correlating SQL Server Profiler with Performance Monitor:
    https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
    Regards,
    Elvis Long
    TechNet Community Support
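Alongside those articles, a hedged sketch of a quick check on the slow machine (assumes VIEW SERVER STATE permission): the cached query statistics will show whether the query there is actually reading far more pages or taking far longer per execution than on the fast machines.

-- Top statements by total elapsed time on this instance
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_microsec,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, 1, 200)                  AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;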

  • Same query, different results depending on compute statistics!!!

    This one is really weird, you would think I am making this up but I am not...
We have one really long query that uses several large inline views and usually returns a few rows. All of a sudden the query stopped working, i.e. it returned no rows. We tried rebuilding indexes; this didn't help. We tried computing full statistics and that fixed the problem. Has anyone heard of computing statistics affecting the output of a query?
    About a week later, the problem happened again. Computing estimate statistics didn't help. Only computing full statistics fixed the problem.
    The only thing I can note, is that this database was recently upgraded from 9.2.0.6 to 9.2.0.7, but I checked the install log files and there are no errors.
    Luckily this is just a development database but we are a little worried that it might re-occur in production. We have a few other development databases that have also been upgraded to 9.2.0.7 and none of these have the problem.
    We have compared the init.ora files, no real differences. Any other ideas? Maybe a full export import?

Thanks, will do, but I am a little doubtful it is fixed by 9.2.0.8, because it works on one of our 9.2.0.7 environments...
Although if it is a statistics issue, it's likely a corner case, so you have to have a number of things in alignment. It's quite possible that, for example, two systems have identical structures and identical data but slightly different query plans, because the data in one table is physically ordered differently on one system than on another, which slightly changes the clustering_factor of an index, which causes the CBO to use that index on one system and not on another. You may also end up with slightly different statistics because you have histograms on a column in one system and not in another.
Looks like we are going to 9.2.0.8 anyway, because end-of-life service support is forcing us there.
If it reproduces on 9.2.0.8 (and I'd tend to suspect it will), it's certainly worth raising an issue. Unless you have an extended support contract, though, I wouldn't hold out a lot of hope for a patch if this isn't already fixed, since 9.2 leaves Premier Support at the end of the month...
    Justin
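A hedged sketch of the comparison implied above (the schema and table names are placeholders): dump the index and column statistics on both environments and diff them; a different clustering_factor, or a histogram present on only one side, is exactly the kind of difference that flips a plan.

-- Index statistics, including the clustering factor mentioned above
select index_name, num_rows, distinct_keys, clustering_factor, last_analyzed
  from dba_indexes
 where owner = 'MY_SCHEMA' and table_name = 'MY_TABLE';   -- placeholders

-- Column statistics, including how many histogram buckets exist
select column_name, num_distinct, density, num_buckets, last_analyzed
  from dba_tab_col_statistics
 where owner = 'MY_SCHEMA' and table_name = 'MY_TABLE';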

  • Different behaviour in MAX vs. LabVIEW when writing to IMAQdx GigE attribute

Hi, I am controlling a Dalsa GigE camera in LabVIEW RT using IMAQdx. Apart from a couple of quirks with the interface, we are acquiring images without many problems at the moment.
However, there are one or two issues that are confusing us. In this case, it is possible to set an attribute in MAX (a command attribute that instructs the camera to perform internal calibration), but when setting the same attribute in LabVIEW the error 0xBFF69010 (-1074360304) Unable to set attribute is thrown. See the attached images.
I check whether the attribute is writable before attempting a write. It is; however, the write is unsuccessful, and reading the is-writable attribute then returns false. In MAX I can write to this attribute without any issues.
Is there anything that I need to configure/read/write in my LabVIEW code that MAX does? Does MAX write to all attributes (based on the values in the XML file) when it opens the camera, or does it simply read all the values from the camera? When LabVIEW opens a camera reference, does it perform the same steps as MAX does? I'm trying to figure out what the difference between MAX and LabVIEW could be that is causing this behaviour.
    Any help will be appreciated.
    Attachments:
Diagram.png (15 KB)
FrontPanel.png (8 KB)
It works in MAX.png (20 KB)

    Hi AnthonV,
    "Quirky" is a good way to describe the Spyder3 when it comes to the GigE Vision/GenICam interface (as opposed to Dalsa's driver which communicates using custom serial commands to the camera over ethernet)....
The Spyder3 has a lot of timing-dependent issues. It is possible that the delay between opening the camera and setting that feature is different via MAX vs. your code in LabVIEW. Also, there are certain cases where MAX will suppress the error from being displayed. Ignoring whether the error is shown or not, do you see the feature take effect in either of the two cases?
    The basic behavior between MAX and LabVIEW is the same. In both cases when you open the camera all the settings are loaded from our camera file which has the saved camera settings. This file is created the first time you open the camera and is updated whenever you click Save in MAX or call an API function to save the settings. In any case, I do know that the Spyder3 has various issues saving/restoring settings to our camera files.
    I suggest talking with Dalsa about the issues you are having. They might be able to set you up with newer firmware that addresses some of these problems (we have worked with them in the past to identify many of them).
    Eric

  • Different behaviour ABAP vs JAVA Runtime for mixed value in total row

    Hello to all
    (BW 7.01 Support package 08)
I hope you may have a good idea about my issue, as I couldn't find anything in OSS notes or other sources to solve it.
    Description:
I have a query where we calculate sales variance (current versus prior year) over several countries. Country is shown in the rows, while the key figures (3x: current sales, prior sales and sales variance) are displayed in columns.
    Issue:
Current and prior sales are not displayed in the 'overall result' row, as the sum over all countries is a mix of different currencies and is therefore displayed as a * value. This is totally correct.
However, the sales variance is displayed differently in the ABAP Web runtime vs. the Java Web runtime:
ABAP Web runtime -> the mixed value is displayed as a * value
Java Web runtime -> the mixed value is calculated and displayed regardless of the mixed currencies
So please let me know if anyone has had the same experience and has a nice solution or hint for this issue.
    Best regards
    Christian

    Hi Christian,
Did you check the display options in your Query Designer?
If you don't find anything there, please use T-code RSRT; you may find a helpful option.
Concerning the display of currencies, I think you need to configure a parameter in T-code SPRO.
Finally, support package 08 of your BW version is old; maybe you need to upgrade it.
In my company, we resolved many issues by updating our support packages.
But ask your administrators about the consequences before doing so.
    Hope it helps,
    Best Regards,
    Amine
