Different result comparing AWR to TKPROF

Hi,
I ran event 10046 on my database last night using the following commands:
ALTER SYSTEM SET statistics_level = ALL;
ALTER SYSTEM SET events '10046 trace name context forever, level 12';
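(Side note: system-wide 10046 tracing affects every session, so it is usually switched off as soon as the capture window ends. This is the system-level analogue of the session command shown later in this thread:)
ALTER SYSTEM SET events '10046 trace name context off';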
Today, I compared a single statement from the TKPROF output to the AWR report and found
very different results.
TKPROF shows:
Executions: 1
Elapsed time: 51.39 seconds
CPU time: 0.23 seconds
Gets per Exec: 72
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
call     count    cpu    elapsed   disk   query   current   rows
Parse        1   0.01       0.03      0      22         0      0
Execute      1   0.02       1.79      0      32         0      0
Fetch       13   0.19      49.56      0      18         0    661
total       15   0.23      51.39      0      72         0    661
The AWR report shows:
Executions: 1
Elapsed time: 697.91 seconds
CPU time: 41.89 seconds
Gets per Exec: 351,105.00
Executions   Gets per Exec   CPU Time (s)   Elapsed Time (s)   SQL Id
         1      351,105.00          41.89             697.91   6hh4jdx9dvjzw
[SQL text as reported by AWR - identical to the statement quoted above]
Can anyone explain the difference between these results?
Thank you
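(One thing worth checking, since the row source plan later in this thread shows a PX COORDINATOR: whether the statement ran in parallel. A 10046 trace file covers only the traced session, while AWR aggregates the work of all parallel server processes, which can easily account for a gap this large. A minimal sketch against the cursor cache, using the SQL_ID from the AWR excerpt above:)
SELECT sql_id,
       executions,
       px_servers_executions,
       elapsed_time / 1e6 AS elapsed_s,
       cpu_time / 1e6     AS cpu_s
FROM   v$sql
WHERE  sql_id = '6hh4jdx9dvjzw';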

Hi Virag,
I ran the statement from SQL*Plus and then generated an ADDM report.
As you can see below, TKPROF shows that the elapsed time was 50.76 seconds,
while ADDM shows:
"was executed 1 times and had an average elapsed time of 751 seconds."
ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = '10046';
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET events '10046 trace name context forever, level 12';
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
ALTER SESSION SET EVENTS '10046 trace name context off';
EXIT
call     count    cpu    elapsed   disk   query   current   rows
Parse        1   0.05       0.05      0       0         0      0
Execute      1   0.02       1.96     24      32         0      0
Fetch       46   0.19      48.74      6      18         0    661
total       48   0.26      50.76     30      50         0    661
Rows  Row Source Operation
 661  PX COORDINATOR (cr=50 pr=30 pw=0 time=50699289 us)
   0   PX SEND QC (ORDER) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
   0    SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us)
   0     PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
   0      PX SEND RANGE :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
   0       FILTER (cr=0 pr=0 pw=0 time=0 us)
   0        HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us)
   0         BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
   0          PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
   0           PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
3366            INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=6 pw=0 time=47132 us)(object id 1547887)
   0         HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
   0          BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
   0           PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
   0            PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
3366             INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=0 pw=0 time=20340 us)(object id 1547887)
   0          PX BLOCK ITERATOR PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)
   0           TABLE ACCESS FULL CM_CUST_DIM_INST_PROD PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)
RECOMMENDATION 1: SQL Tuning, 56% benefit (615 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"6wd7sw8adqaxv".
RELEVANT OBJECT: SQL statement with SQL_ID 6wd7sw8adqaxv and
PLAN_HASH 2594021963
[SQL text as reported by ADDM - identical to the statement quoted above]
RATIONALE: SQL statement with SQL_ID "6wd7sw8adqaxv" was executed 1
times and had an average elapsed time of 751 seconds.
RATIONALE: At least one execution of the statement ran in parallel.
Thanks.
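(Worth noting, given ADDM's rationale above that the statement ran in parallel: each parallel execution server writes its own trace file, so running TKPROF on the coordinator's trace alone misses the slaves' work - which would explain 50.76 seconds in TKPROF versus 751 seconds in ADDM. A sketch for locating the other trace files; which dump destination the PX servers write to varies by release:)
SELECT name, value
FROM   v$parameter
WHERE  name IN ('user_dump_dest', 'background_dump_dest');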

Similar Messages

  • Oracle function and query return different results

    Hi, I am using oracle 10g database.
    Function is :
    create or replace FUNCTION FUNC_FAAL(myCode number, firstDate date, secondDate date)
    RETURN INTEGER as
    rtr integer;
    BEGIN
    select count(*) into rtr
    from myschema.my_table tbl
    where tbl.myDateColumn between firstDate and secondDate
    and tbl.kkct is null and tbl.myNumberColumn = myCode;
    return (rtr);
    END FUNC_FAAL;
    This function returns 117177 as the result.
    But if I run the same query from the function separately:
    select count(*)
    from myschema.my_table tbl
    where tbl.myDateColumn between firstDate and secondDate
    and tbl.kkct is null and tbl.myNumberColumn = myCode;
    I get a different result, 11344 (which is the right one).
    Table and function are in the same schema.
    What can be the problem?
    Thanks.

    1. I think your parameter names and column names are the same (firstDate and secondDate); try choosing different names.
    2. Try using the TRUNC function around your dates:
    where trunc(tbl.myDateColumn) between trunc(firstDate) and trunc(secondDate)
    then compare the result... sometimes the time element comes into play.
    Baig
    My Oracle Blog: http://baigsorcl.blogspot.com/
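    (To illustrate Baig's first point: inside a SQL statement in PL/SQL, a column name takes precedence over a PL/SQL identifier of the same name, so if my_table also had columns called firstDate/secondDate, the BETWEEN would compare each column with itself and match far too many rows. A hedged sketch of the usual fix - qualifying the parameters with the function name:)
    create or replace FUNCTION FUNC_FAAL(myCode number, firstDate date, secondDate date)
    RETURN INTEGER as
      rtr integer;
    BEGIN
      select count(*) into rtr
      from myschema.my_table tbl
      where tbl.myDateColumn between FUNC_FAAL.firstDate and FUNC_FAAL.secondDate
        and tbl.kkct is null
        and tbl.myNumberColumn = FUNC_FAAL.myCode;
      return rtr;
    END FUNC_FAAL;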

  • ODS and Infocube showing different results

    Hi,
    I loaded the same data into an InfoCube and an ODS, but they show different results in BEx.
    ODS
    The ODS report shows summarised values, e.g.
    vendor   order value   invoice value   date
    00001             20              20   01.01.2007
    Infocube
    The InfoCube result shows unsummarised values:
    vendor   order value   invoice value   date
    00001             20               0   0
                       0              20   01.01.2007
    I would like to make the InfoCube result match the ODS, because I am remodelling so that we start reporting from the InfoCube rather than the ODS.
    thanks

    Hi,
    This is the scenario:
    currently we report from an ODS object which fetches data from 3 ODSes. Now I am asked to remodel so that we can report from an InfoCube, not the ODS.
    Steps I have taken so far:
    - created a new InfoCube
    - created 3 update rules (the same update rules that give the reporting ODS its data).
    My target is to make sure the current ODS report matches the InfoCube report, but so far the reports are not consistent.
    I compared the data in the ODS and the InfoCube and they are exactly the same, but the reports show different results.

  • Same Effects Panel parameters-different results on Tiff & Psd file format?

    Just a quick question to all the users and the Adobe Lightroom team & see if anybody has the same problem.
    I have one file saved in TIFF format and an exact duplicate saved as a PSD; however, when I sync the Effects Panel settings between the two, I get different results. The difference isn't black and white, but it is clearly visible. The grain in the PSD file is less coarse and pronounced, and the post-crop vignetting is much more subtle. The color balance is also different. In other words, every adjustment seems to be more subtle and less pronounced on the PSD file compared to the TIFF file.
    Not the end of the world, but clearly something to keep in mind when syncing settings between the two formats, and something that shouldn't be there in the first place.
    Thanks

    Now you are providing more detail.
    "... but lesser quality at least in terms of natural movement."
    It sounds very much like 11 is exporting 25p/30p, which is why you see non-smooth motion.
    25p/30p is the result of deinterlacing 50i/60i. When you take two FIELDS that are 1/50th or 1/60th of a second apart and create one image, those images will be 1/25th or 1/30th of a second apart.
    You can create one frame through several methods.
    With iM09, deinterlacing occurred only if you imported as LARGE, OPTIMIZED video, or used a function or FX that scales video. Perhaps with 11, Apple has decided that it's time to FORCE any kind of interlaced video to be deinterlaced.
    In other words, LARGE and FULL only relate to size.
    One possible advantage is that Apple has finally decided to deinterlace using BLEND during import. This would look better, but it creates 25p or 30p video.
    They may be hinting it's time to buy a progressive camera.

  • SQL Server 2012 Physical vs. Hyper-V Same Query Different Results

    I have a database that is on physical hardware (16 CPU's, 32GB Ram).
    I have a copy of the database that was attached to a virtual Hyper-V server (16 CPU's, 32GB Ram).
    Both servers and SQL Servers are identical: OS = 2008R2 Standard, SQL Server 2012R2 Standard, same patch level SP1 CU8.
    The same query run on both servers returns the same data set, but the time is much different: 26 seconds on physical, 5 minutes on virtual.
    Statistics are identical on both databases, and the query execution plan is identical for both queries.
    Indexes are identical on both databases.
    When I use SET STATISTICS IO, I get different results between the two servers.
    One table in particular (366k rows) shows 15,400 logical reads on physical, while Hyper-V reports 418,000,000 logical reads (four hundred eighteen million) for the same table.
    When the query is run on the physical server it uses no CPU; when run on Hyper-V it takes 100% of all 16 processors.
    I have experimented with MAXDOP and it does exactly what it should by limiting processors, but it doesn't fix the issue.

    A massive difference in logical reads usually hints at differences in the query plan.
    When you compare query plans, it is essential that you look at actual query plans.
    Please note that if your server / Hyper-V supports parallelism (which is almost always the case nowadays), then you are likely to have two query plans: a parallel and a serial one. The actual query plan will make clear which is used in which case.
    To say this again, this is by far the most likely reason for your problem.
    There are other (unlikely) reasons that could be the case here: runaway parallel threads or other bugs in the optimizer or engine (make sure you have installed the latest service pack), or extreme fragmentation in the relevant tables on the slow server (Hyper-V).
    As mentioned by Erland, you have much, much more information about the query and query plan than we do. You already know whether or not parallelism is used, how many threads are being used, and whether you have no, one, or several Loop Joins in the query (my bet is on at least one, possibly more), etc.
    With the limited information you can share (or choose to share), involving PSS is probably your best course of action.
    Gert-Jan
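    (A minimal way to act on the advice above - capture the actual plan and I/O statistics on each server, then diff the two XML plans; the table name is a placeholder:)
    SET STATISTICS IO ON;
    SET STATISTICS XML ON;   -- returns the actual, post-execution plan as XML

    SELECT COUNT(*) FROM dbo.YourLargeTable;   -- placeholder for the problem query

    SET STATISTICS XML OFF;
    SET STATISTICS IO OFF;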

  • Different results on consecutive runs of OFT for Web Applications

    I am using Oracle Functional Testing for Web Applications to test a SharePoint site [using: OS = WinXP SP3, default browser = IE 8.0.6].
    I've recorded a simple scenario - browsing through two pages (home page > menu link > page 1;
    there is no user input involved - just browsing).
    Problem:
    I am running the same test multiple times and I get different results - sometimes (less often) I get clear 'Passed' results, but more often I get warnings about missing items and severe content differences.
    Upon comparing the Master and Tested HTML, I see that the test fails because no content is being logged under the Tested:HTML node (the content displays properly in the right browser pane).
    Any hints?

    You probably need a 'close, no save' step at the end of your action.
    In the Save for Web dialog, when you record the action, save to the folder you want the Batch dialog to put the images in. Then set the Batch dialog as below.
    MTSTUNER

  • Why do I get two different results from the same coefficients?

    I am getting two different results from the Polynomial Evaluation function.
    For the first one, I am getting the coefficients from a Polynomial Fit function. I feed the coefficients from the Fit function into the Poly Eval function and get the correct result of 12.8582 when I evaluate 49940.
    For the second one, I create a constant array of the SAME values that were returned from the Polynomial Fit function (I typed them in). However, I am getting an incorrect result of -120.7913 when I feed the constant array into the Poly Eval function and evaluate 49940.
    How can this happen when I am using the same array values?
    Attached is an image of what I am explaining.
    Attachments:
    polynomial_evaluation.jpg ‏213 KB

    Hi Altran,
    are you sure you are using the "same" coefficients?
    Did you compare them? Did you (at least) set the display properties to 17 significant digits?
    Please attach a VI instead of a picture...
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
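    (A rough sensitivity estimate - an editorial sketch, assuming a low-order polynomial fit - shows why coefficients retyped from a truncated display can diverge this badly at x = 49940:
    if a typed-in coefficient a_k differs from the fitted one by delta_k, the evaluated polynomial shifts by roughly delta_k * x^k. At x = 49940, about 5e4, even delta_3 = 1e-12 gives a shift of about 1e-12 * (5e4)^3 = 125 - the same order as the 12.8582 versus -120.7913 discrepancy above. Hence GerdW's suggestion to display all 17 significant digits.)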

  • Comparing AWR data with baselines.

    Hello,
    I am studying AWR, which will be used in our project for performance analysis.
    My requirement is to create a baseline of load between 2 snapshots during normal system load. We perform load/stress testing before final deployment. We are planning to take snapshots before and after load/stress tests. I want to compare the performance of instance during load/stress test with that of baseline performance.
    Oracle provides the package DBMS_WORKLOAD_REPOSITORY for the different AWR functions. It provides the CREATE_BASELINE function for creating a baseline, but the report generation functions AWR_REPORT_HTML and AWR_REPORT_TXT have no parameter for comparing against a baseline.
    Is it possible to compare AWR data with a baseline during report generation? Or does Oracle currently support only creating and dropping baselines, so that they cannot be used for performance comparison?
    Thanks,
    Shailesh
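    (For reference, a minimal baseline-creation sketch; the snapshot IDs are hypothetical. Comparing two periods is then done outside these report functions - e.g. via the AWR Compare Periods report, awrddrpt.sql, where available:)
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
        start_snap_id => 100,   -- hypothetical: snapshot before the load test
        end_snap_id   => 110,   -- hypothetical: snapshot after the load test
        baseline_name => 'normal_load');
    END;
    /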

    Hi,
    Take a look on the note: 543188
    With rgds,
    Anil Kumar Sharma .P
    Kindly Assign the points to the helpful answers.

  • Different results of an store procedure

    Hi everyone,
    I have a table partitioned by day of the week, and a stored procedure that selects some records from a specific partition (the one that represents "today").
    In the Oracle client I get the result I'm expecting, but when I call the procedure from a Java client I get a different result (it searches another partition).
    Any ideas? If you need some code, let me know.
    Thanks in advance,
    Charlie Garcia.

    Well, "partitioned by day of the week" means the column is declared as... what?
    I'm assuming there's a bind mismatch between the defined value in the table and what you are passing in to the routine.
    So, for example, your table is partitioned on a date column and you pass a string to your stored routine. Then Oracle has to do a conversion for you (if it can) and compare the value you passed in to the converted value.
    For example, if you pass in '01-05-01' as a string, what date would you say that is?
    ME_XE?select to_date('01-05-03', 'dd-mm-yy') from dual;
    TO_DATE('01-05-03','DD-MM-
    01-MAY-2003 12 00:00
    1 row selected.
    Elapsed: 00:00:00.00
    ME_XE?select to_date('01-05-03', 'mm-dd-yy') from dual;
    TO_DATE('01-05-03','MM-DD-
    05-JAN-2003 12 00:00
    1 row selected.
    Elapsed: 00:00:00.01
    ME_XE?select to_date('01-05-03', 'yy-dd-mm') from dual;
    TO_DATE('01-05-03','YY-DD-
    05-MAR-2001 12 00:00
    1 row selected.
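    (A hedged sketch of the usual defence, with all names hypothetical: convert the incoming string inside the routine with an explicit format mask, so partition pruning cannot depend on the caller's NLS settings or on implicit conversion:)
    CREATE OR REPLACE FUNCTION rows_for_day(p_day IN VARCHAR2)
    RETURN NUMBER IS
      n NUMBER;
    BEGIN
      SELECT COUNT(*) INTO n
      FROM   my_partitioned_table                        -- hypothetical table
      WHERE  day_col = TO_DATE(p_day, 'DD-MM-YYYY');     -- explicit mask, no guessing
      RETURN n;
    END;
    /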

  • Different results in harmonic analysis

    I am analysing the same data at the same time with:
    1. the Harmonic Analyser VI, and
    2. the RealFFT VI (to get the 1-sided FFT, then graphing that data to see the harmonics, and afterwards using the Peak Detector VI).
    The frequencies of the main harmonics and their peaks are quite different between the two, and so are the peaks and amplitudes. Which method is more precise? Thank you.

    My guess is they both do precisely what they are supposed to. But your question was, "Which method is more precise?"
    Can you generate a calibrated waveform? If you had a function generator you could mess around with that and compare the results.
    There are lots of tools for comparing a couple of datasets. But you need to have exactly the same inputs to compare the results.
    You can simulate an input with an array of data. I just used the Basic Harmonic Analyzer Example VI to create an array of data for a sinusoidal signal. You could do likewise and write that out as a spreadsheet file and use it to test and compare other VI's.
    I have a VI that compares many of the Filter VIs. I save a dataset and pass it through my favorite ones and view them overlaid on top of the raw data. This tells me how much they are distorting the signal. There are various coefficients that can be tweaked to get different results. I guess I can attach that so you can look at it.
    I couldn't find any VI's in the examples for LV6.1 with the names you gave. Where exactly did your VI's come from?
    Mike
    Attachments:
    PC30_Linear_A3135_5_&_6_raw_1a_3.28.02.dat ‏174 KB
    Data_Compare.llb ‏263 KB

  • Same circuit twice with different results

    I've been having some trouble getting Multisim to match a circuit I have modeled in MATLAB. During the troubleshooting process I put voltage probes on each wire used. The circuit portion in question involves an AC voltage source connected to a resistor, which is then connected to an inductor. I made a copy of the circuit and started from scratch. One of the circuits gives a voltage drop across the resistor, while the other one doesn't drop any voltage at all. I've checked all of the connections and they are all good. The only difference I can see is that one has the pins labeled 1 - 2 while the other is 2 - 1. Does anyone know what the problem is here? I shouldn't be getting two different results for the same circuit.

    Hi Jmerc,
    Maybe you can post the circuits so that we can compare.
    Tien P.
    National Instruments

  • How to Create a new column from two different result sets

    How to create a new column from two different result sets, where both result sets use different date dimensions.

    I found the solution for this: apply filters in the column formula itself, based on the requirement.

  • Filter expression producing different results after upgrade to 11.1.1.7

    Hello,
    We recently did an upgrade and noticed that on a number of reports where we're using the FILTER expression, the numbers are very inflated. Where we are not using the FILTER expression, the numbers are as expected. In the example below we ran the 'Bookings' report in 10g and got one number, then ran the same report in 11g (11.1.1.7.0) after the upgrade and got a different result. The data source is the same database for each environment. Also, when running the physical SQL generated by the 10g and 11g versions of the report, we get the inflated numbers from the 11g SQL. Any ideas on what might be happening or causing the issue?
    10g report: 2016-Q3......Bookings..........72,017
    11g report: 2016-Q3......Bookings..........239,659
    This is the simple FILTER expression that is being used in the column formula on the report itself for this particular scenario which produces different results in 10g and 11g.
    FILTER("Fact - Opportunities"."Won Opportunity Amount" USING ("Opportunity Attributes"."Business Type" = 'New Business'))
    -------------- Physical SQL created by 10g report -------- results as expected --------------------------------------------
    WITH
    SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33231.USD_LINE_AMOUNT else 0 end ) as c1,
    T28761.QUARTER_YEAR_NAME as c2,
    T28761.QUARTER_RANK as c3
    from
    XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
    XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
    XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
    where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
    group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK)
    select distinct SAWITH0.c2 as c1,
    'Bookings10g' as c2,
    SAWITH0.c1 as c3,
    SAWITH0.c3 as c5,
    SAWITH0.c1 as c7
    from
    SAWITH0
    order by c1, c5
    -------------- Physical SQL created by the same report as above but in 11g (11.1.1.7.0) -------- results much higher --------------------------------------------
    WITH
    SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33142.TOTAL_OPPORTUNITY_AMOUNT_USD else 0 end ) as c1,
    T28761.QUARTER_YEAR_NAME as c2,
    T28761.QUARTER_RANK as c3
    from
    XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
    XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
    XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
    where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
    group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK),
    SAWITH1 AS (select distinct 0 as c1,
    D1.c2 as c2,
    'Bookings2' as c3,
    D1.c3 as c4,
    D1.c1 as c5
    from
    SAWITH0 D1),
    SAWITH2 AS (select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    sum(D1.c5) as c6
    from
    SAWITH1 D1
    group by D1.c1, D1.c2, D1.c3, D1.c4, D1.c5)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6 from ( select D1.c1 as c1,
    D1.c2 as c2,
    D1.c3 as c3,
    D1.c4 as c4,
    D1.c5 as c5,
    sum(D1.c6) over () as c6
    from
    SAWITH2 D1
    order by c1, c4, c3 ) D1 where rownum <= 2000001
    Thank you,
    Mike

    Thank you for the info. They are definitely different values, since one is on the header and the other is on the lines. As the "Won Opportunity" logical column is mapped to multiple logical table sources, it appears OBI 11g uses a different algorithm than 10g to determine the most efficient table to use in query generation. I'll need to spend some time researching the impact of adding a 'sort' to the LTS. I'm hoping there's a way to get OBI to use logic similar to 10g when it sets the table priority.
    Thx again,
    Mike
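    (The inflation pattern fits a join fan-out: the 10g SQL sums the line-level USD_LINE_AMOUNT, while the 11g SQL sums the header-level TOTAL_OPPORTUNITY_AMOUNT_USD after joining to the lines table, so each header amount is counted once per matching line. A self-contained sketch of the effect, with hypothetical data:)
    -- One header row (amount 100) joined to three line rows:
    -- the header amount repeats per line, so SUM returns 300, not 100.
    WITH hdr AS (SELECT 1 AS lead_id, 100 AS hdr_amt FROM dual),
         lin AS (SELECT 1 AS lead_id FROM dual CONNECT BY level <= 3)
    SELECT SUM(h.hdr_amt) AS inflated_total
    FROM   hdr h JOIN lin l ON h.lead_id = l.lead_id;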

  • PreparedStatement & regular Statement - different results for same select

    I was wondering if someone could either
    i) try this out for me to confirm my results or
    ii) let me know what I am doing wrong
    I'm one of the developers on a product and am currently investigating localization for the Thai language...just checking to see that Java and Swing have no problems with it. The only bewildering thing which has happened is noticing that some values which are fetched from the database display in Thai perfectly and other values display as a garble. Sometimes the exact same column is displayed correctly in one part of the program but is not OK in another part. I think I've figured out what it going on and suspect a bug in Oracle's JDBC:
    Some selects were configured as PreparedStatements and those return the Thai properly. The more common case however was for programmers to use a simple Statement object for their select and it is in those that the multi-byte strings don't get returned properly.
    The following code shows the problem I am experiencing. I am basically executing the exact same select in two different ways, and they give different results whenever the column being queried contains a Thai character. If someone could grab it and check it out and let me know if they see the same thing, I'd appreciate it. Just change the column/table name and the username/password/database IP to get it to run.
    <code>
    import java.sql.*;

    public class SelectTest {
        public static void main(String[] args) {
            try {
                // Register the Oracle JDBC driver and connect
                DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@10.4.31.168:1524:ora8", "dms_girouard", "girouard");
                String sqlCommand = "select C0620_Title from T0620_SwSheet";

                // 1) Plain Statement
                Statement statement = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                                           ResultSet.CONCUR_READ_ONLY);
                ResultSet resultSet1 = statement.executeQuery(sqlCommand);
                while (resultSet1.next()) {
                    if (resultSet1.getString("C0620_Title") != null) {
                        System.out.println("resultSet1 Title = " + resultSet1.getString("C0620_Title"));
                    }
                }

                // 2) The same select as a PreparedStatement
                PreparedStatement preparedStatement = conn.prepareStatement(sqlCommand);
                ResultSet resultSet2 = preparedStatement.executeQuery();
                while (resultSet2.next()) {
                    if (resultSet2.getString("C0620_Title") != null) {
                        System.out.println("resultSet2 Title = " + resultSet2.getString("C0620_Title"));
                    }
                }
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }
    </code>

    Hi Peter,
    Are you using an NCHAR column for Thai, or is your database character set set for Thai?
    If you are using an NCHAR column for holding Thai data, then you have to call
    OraclePreparedStatement.setFormOfUse(...) before executing the select.
    Regards,
    Elango.

    Hi Elangovan,
    Thank you for answering.
    The datatype on the column is VARCHAR2.
    I did my initial tests without doing anything special to make sure the database was localized for Thai, and I was happy to find that almost everything still worked fine - I was able to save and retrieve Thai strings to the database almost perfectly.
    The only problem I discovered was the difference between Statement and PreparedStatement selects on a column containing Thai. Colleagues of mine have said they see the same thing when testing on an Oracle database which has been configured specifically for the Thai customer.
    I read somewhere that the current JDBC drivers use an older version of the Unicode standard than the most current version of the Java SDK, and that this was causing some problems with Korean. I'm wondering if maybe it's the same problem with Thai.
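    (Following Elango's question, a quick check of which character sets the database actually uses - Thai in a VARCHAR2 column requires the database character set itself to cover Thai:)
    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');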

  • Mobile account creation has different result...

    Created managed preference group:
    - Finder: Show connected servers on Desktop
    - Mobility: Set to create Mobile account, synching off, HomeFolder on startup volume.
    - Login: Maps three SMB paths to Windows Server folders
    - Active Directory: UNC path set in user's Active Directory profile to Home Folder on Windows Server
    There are two different conditions that give different results:
    (1) User logs on to a particular Mac once (using AD network account), prior to applying managed preferences. User is not member of preference group. Mac creates full set of User Account folders in user's home. Then user's name is applied to preference group, and user logs on.
    (2) User's name applied to preference group, prior to logging on to a particular Mac. User logs in using AD network account.
    Results: With condition (1), User gets a full set of typical folders in local home folder, UNC network home folder is mapped to dock - the two Home folders (local and network) are kept separated, all works as anticipated.
    With condition (2), user does not consistently get full set of folders in local home folder, UNC network home folder is mapped to dock - and local "Library/Preferences" and "Downloads" is copied into network home. Occasionally, user's AD account gets locked out due to too many failures while attempting to access folders.
    I would greatly appreciate anyone who can lead me to understand this.
    Thank you...David

    The usual approach with Open Directory is to either use Workgroup Manager to define a managed login preference for a computer group to define that those member computers should cause the use of mobile accounts on those computers, or to do the same thing via Profile Manager.
    Note: If you are using Mavericks you must use Profile Manager as it does not support this via Workgroup Manager managed preferences.
    This will not require users to need admin authorisation.
