Two different plans in TEST & Prod

Hi,
We have a problem in our production database: some SQL statements run very slow there, but the same SQL statement runs in under 2 seconds in TEST.
The production schema Prod.SchemaA is exported into the TEST database as Test.SchemaA.
When we study the explain plans, we find the Prod plan is different from the TEST plan. If we create a SQL profile by copying the TEST plan, the statement runs faster in Production.
Our question is: why does the optimizer choose two different plans when the schema structure is the same and the data is almost the same in the two databases?
Note that we have two almost identical schemas in Production: Prod.SchemaA and Prod.SchemaB have the same object names, but Prod.SchemaB may have small differences in indexes/constraints.
Users run the same SQL statements against both Prod.SchemaA and Prod.SchemaB from time to time.
Thanks,
Neal

You can get a clear picture of the accuracy of your statistics by pulling the execution plan from memory in both the TEST and PROD environments. Proceed as follows:
PROD> alter session set statistics_level=ALL;
PROD> execute your query;
PROD> select * from table (dbms_xplan.display_cursor(null,null, 'ALLSTATS LAST'));
TEST> alter session set statistics_level=ALL;
TEST> execute your query;
TEST> select * from table (dbms_xplan.display_cursor(null,null, 'ALLSTATS LAST'));
This will give an execution plan showing the estimates made by the CBO (E-Rows) and the actual rows generated (A-Rows), allowing you to judge the accuracy of your statistics.
The predicate part of the execution plan can also show important information.
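If you would rather not change statistics_level for the whole session, a statement-level alternative (a sketch, using a hypothetical EMP/DEPT query) is the GATHER_PLAN_STATISTICS hint, which collects the same row-source statistics for that one execution:

```sql
-- Collect row-source statistics for this execution only
select /*+ gather_plan_statistics */ e.ename, d.dname
from   emp e
join   dept d on d.deptno = e.deptno;

-- Then fetch the last plan with estimates (E-Rows) vs. actuals (A-Rows)
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Run the same pair of statements in PROD and TEST and compare the two outputs side by side.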
Best regards
Mohamed Houri
www.hourim.wordpress.com

Similar Messages

  • Need two different reports_path in AS 10g

    Hi,
    I have AS 10g with one Reports Server and
    I need two different REPORTS_PATH values for Testing and Production.
    I set a different REPORTS_PATH=xxx in each of the different xxx.env files,
    but it doesn't work;
    the Reports_Path is taken from the registry REPORTS_PATH.
    What is a way to set REPORTS_PATH dynamically?
    Do I need a REPORTS_CLASSPATH in the xxx.env,
    or can I assign the REPORTS_PATH via rp2rro.pll?
    Norbert

    Thank you,
    setting the different environments in reports-config works.
    But the next problem is:
    I use the RP2RRO library and a param_list to start the report with rwservlet:
    RP2RRO.RP2RRO_RUN_PRODUCT (REPORTS,p_ReportName, SYNCHRONOUS, RUNTIME, FILESYSTEM, pl_id, NULL);
    and I would like to pass the report its environment with the RP2RRO service rp2rroReportOther:
    RP2RRO.setOthers('ENVID=TEST');
    But it doesn't work, and I think the reason is that RP2RRO.setOthers doesn't work
    with a param_list: "You cannot use rp2rroReportOther with parameter lists"
    (the rp2rro.pll version is from 2002).
    What can I do?
    Can I change RP2RRO so that RP2RRO.setOthers('ENVID=TEST') works WITH a param_list?
    Regards
    Norbert
    I will create a new thread.
    Message was edited by:
    astramare

  • Extracting unique records from two different tables record

    Hello,
    In the following output from two different tables, www.testing.com exists in both tables. I want to compare two different columns of the two different tables to get the records they have in common.
    SQL> select unique(videoLinks) from saVideos where sa_id=21;
    VIDEOLINKS
    www.testing.com
    SQL> ed
    Wrote file afiedt.buf
      1* select unique(picLinks) from saImages where sa_id=21
    SQL> /
    PICLINKS
    test
    test14
    www.hello.com
    www.testing.com
    Thanks & best regards

    Unfortunately you didn't mention the expected output. I guess it would be the single line
    "www.testing.com"
    in that case simply join the two tables.
    select *
    from saVideos v
    join saImages i on i.sa_id = v.sa_id and i.picLinks = v.videoLinks
    where v.sa_id=21;
    If needed, you could change the select list to retrieve only distinct values.
    select unique v.sa_id, v.videolinks
    from saVideos v
    join saImages i on i.sa_id = v.sa_id and i.picLinks = v.videoLinks
    where v.sa_id=21;
    I usually avoid distinct/unique wherever possible. It requires the database to do a sort and can make the query slow.
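    Another way to avoid DISTINCT altogether (a sketch against the same tables) is a semi-join with EXISTS, which stops probing saImages after the first match for each row:

    ```sql
    select v.sa_id, v.videoLinks
    from   saVideos v
    where  v.sa_id = 21
    and    exists (select null
                   from   saImages i
                   where  i.sa_id    = v.sa_id
                   and    i.picLinks = v.videoLinks);
    ```

    Note this still returns one row per matching saVideos row, so it only removes the need for DISTINCT if videoLinks does not repeat within an sa_id.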
    Edited by: Sven W. on Feb 10, 2011 1:55 PM

  • Query to compare different Plan versions in CCA

    My requirement is to create a Cost Center query that compares two different plan versions, selected by the user, over different time periods. We do not use any SEM functionality and are on version 3.0B.
    There is a standard R/3 report, but it gives a total figure and does not split the figures by period.
    Does anyone know a Business Content query that can resolve this issue?

    Hi,
    Instead of going for a Business Content query, you can use a formula variable with replacement path and assign the required values for the output, implementing the customer exit with the help of your ABAP developers.
    I hope this is a clear solution to your problem.
    Please assign full points if the answer is satisfactory.
    Regards ,
    Subash Balakrishann

  • Read Only/ Plan Mode Changes for a layout with two different queries

    Hi,
    I have a situation where a layout opens in read-only mode, and when
    the users click the PLAN button it changes to plan mode. The catch, however, is that the queries
    for read mode and plan mode are slightly different: in read-only mode the query has subtotals
    and other calculations that are not part of the plan mode (where users input forecasts).
    I know that if the read and plan queries are the same, we can achieve this through the command
    SET_DATA_ENTRY_MODE. In case they are different, as above, how can I achieve this?
    Thanks
    Rashmi.

    Say you are using two different queries as data providers DP1 & DP2 for Display and Plan mode respectively. You also have one Analysis Grid item in the Web template which initially points to DP1.
    On the PLAN button, call the command SET_ITEM_PARAMETER to set the data provider of the Analysis item to DP2.
    OR
    Have only one data provider, DP1, in your Web template, initially pointing to Query 1, which you want to show in Display mode. Then, on the PLAN button, call the command SET_DATA_PROVIDER_PARAMETERS to point DP1 to the 2nd query instead of the 1st. You can find this command under Commands for Data Provider --> Basic Data Provider Commands.
    Edited by: Deepti Maru on Nov 27, 2009 9:52 PM

  • Why two different explain plan for same objects?

    Believe it or not, there are two different databases, one for processing and one for reporting, and the plan shown is different for the same query. Table structures and indexes are the same. It's 11g.
    Thanks
    Good explain plan (works fine):
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 12,775  Bytes: 184  Cardinality: 1                                                                        
         27 SORT UNIQUE  Cost: 12,775  Bytes: 184  Cardinality: 1                                                                   
              26 NESTED LOOPS                                                              
                   24 NESTED LOOPS  Cost: 12,774  Bytes: 184  Cardinality: 1                                                         
                        22 HASH JOIN  Cost: 12,772  Bytes: 178  Cardinality: 1                                                    
                             20 NESTED LOOPS SEMI  Cost: 30  Bytes: 166  Cardinality: 1                                               
                                  17 NESTED LOOPS  Cost: 19  Bytes: 140  Cardinality: 1                                          
                                       14 NESTED LOOPS OUTER  Cost: 16  Bytes: 84  Cardinality: 1                                     
                                            11 VIEW DSSADM. Cost: 14  Bytes: 37  Cardinality: 1                                
                                                 10 NESTED LOOPS                           
                                                      8 NESTED LOOPS  Cost: 14  Bytes: 103  Cardinality: 1                      
                                                           6 NESTED LOOPS  Cost: 13  Bytes: 87  Cardinality: 1                 
                                                                3 INLIST ITERATOR            
                                                                     2 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1       
                                                                          1 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1 
                                                                5 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 36  Cardinality: 1            
                                                                     4 INDEX RANGE SCAN INDEX DSSADM.STAN_JB_FN_IDX Cost: 2  Cardinality: 1       
                                                           7 INDEX UNIQUE SCAN INDEX (UNIQUE) DSSODS.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                 
                                                      9 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 16  Cardinality: 1                      
                                            13 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                                
                                                 12 INDEX RANGE SCAN INDEX DSSODS.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                           
                                       16 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 56  Cardinality: 1                                     
                                            15 INDEX RANGE SCAN INDEX DSSADM.DIM_JOBCODE_EXPDT1 Cost: 2  Cardinality: 1                                
                                  19 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_RPT Cost: 11  Bytes: 438,906  Cardinality: 16,881                                          
                                       18 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                     
                             21 INDEX FAST FULL SCAN INDEX (UNIQUE) DSSADM.Z_PK_JOBCODE_PROMPT_TBL Cost: 12,699  Bytes: 66,790,236  Cardinality: 5,565,853                                               
                        23 INDEX RANGE SCAN INDEX DSSADM.DIM_PERSON_EMPL_RCD_SEQ_KEY Cost: 1  Cardinality: 1                                                    
                    25 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_PERSON_EMPL_RCD Cost: 2  Bytes: 6  Cardinality: 1
    This bad plan shows a MERGE JOIN CARTESIAN and a full table scan:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 3,585  Bytes: 237  Cardinality: 1                                                              
         26 SORT UNIQUE  Cost: 3,585  Bytes: 237  Cardinality: 1                                                         
              25 NESTED LOOPS SEMI  Cost: 3,584  Bytes: 237  Cardinality: 1                                                    
                   22 NESTED LOOPS  Cost: 3,573  Bytes: 211  Cardinality: 1                                               
                        20 MERGE JOIN CARTESIAN  Cost: 2,864  Bytes: 70,446  Cardinality: 354                                          
                             17 NESTED LOOPS                                     
                                  15 NESTED LOOPS  Cost: 51  Bytes: 191  Cardinality: 1                                
                                       13 NESTED LOOPS OUTER  Cost: 50  Bytes: 180  Cardinality: 1                           
                                            10 HASH JOIN  Cost: 48  Bytes: 133  Cardinality: 1                      
                                                 6 NESTED LOOPS                 
                                                      4 NESTED LOOPS  Cost: 38  Bytes: 656  Cardinality: 8            
                                                           2 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 14  Bytes: 448  Cardinality: 8       
                                                                1 INDEX RANGE SCAN INDEX REPORT2.STAN_PROM_JB_IDX Cost: 6  Cardinality: 95 
                                                           3 INDEX RANGE SCAN INDEX REPORT2.SETID_JC_IDX Cost: 2  Cardinality: 1       
                                                      5 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 3  Bytes: 26  Cardinality: 1            
                                                 9 INLIST ITERATOR                 
                                                      8 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1            
                                                           7 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1       
                                            12 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                      
                                                 11 INDEX RANGE SCAN INDEX REPORT2.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                 
                                       14 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORT2.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                           
                                  16 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 11  Cardinality: 1                                
                             19 BUFFER SORT  Cost: 2,863  Bytes: 4,295,552  Cardinality: 536,944                                     
                                  18 TABLE ACCESS FULL TABLE REPORT2.DIM_PERSON_EMPL_RCD Cost: 2,813  Bytes: 4,295,552  Cardinality: 536,944                                
                        21 INDEX RANGE SCAN INDEX (UNIQUE) REPORT2.Z_PK_JOBCODE_PROMPT_TBL Cost: 2  Bytes: 12  Cardinality: 1                                          
                   24 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_RPT Cost: 11  Bytes: 1,349,920  Cardinality: 51,920                                               
                        23 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                          

    user550024 wrote:
    I am really surprised that the stats for the good SQL are a little old. I just computed the stats for the bad SQL, so they are up to date...
    There is something terribly wrong...
    Not necessarily. Just using the default stats collection, I've seen a few cases of things suddenly going wrong. As the data increases, it gets closer to an edge case where the inadequacy of the statistics convinces the optimizer to choose a wrong plan. To fix it, I could just go into dbconsole, set the stats back to a time when they worked, and lock them. In most cases it's definitely better to figure out what is really going on, though, to give the optimizer better information to work with. Aside from the value of learning how to do it, in some cases it's not so simple. Also, many think the default settings of database statistics collection may be wrong in general (in 10.2.x, at least). So much depends on your application and data that you can't make too many generalizations. You have to look at the evidence and figure it out. There is still a steep learning curve for the tools used to look at the evidence. People are here to help with that.
    Most of the time it works better than a dumb rule-based optimizer, but at the cost of a few situations where people are smarter than computers. It's taken a lot of years to get to this point.
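    The "set the stats back to a time when they worked, and lock them" step can also be done from SQL*Plus rather than dbconsole, via DBMS_STATS; a sketch with a hypothetical SCOTT.EMP table and timestamp:

    ```sql
    -- How far back can statistics be restored? (governed by the stats history retention)
    select dbms_stats.get_stats_history_availability from dual;

    begin
      -- Restore the table's statistics as they were at a given point in time
      dbms_stats.restore_table_stats(
        ownname         => 'SCOTT',
        tabname         => 'EMP',
        as_of_timestamp => to_timestamp('2011-02-01 08:00', 'yyyy-mm-dd hh24:mi'));
      -- Lock them so the automatic stats job does not overwrite them
      dbms_stats.lock_table_stats(ownname => 'SCOTT', tabname => 'EMP');
    end;
    /
    ```

    Remember to unlock (dbms_stats.unlock_table_stats) once the real cause has been found, or the table will never be re-analyzed.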

  • Two different HASH GROUP BY in execution plan

    Hi ALL;
    Oracle version
    select *From v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL:
    select company_code, account_number, transaction_id,
    decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
    (last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
    from (
         select /*+ PARALLEL(use, 2) */    company_code,substr(account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
         from financials.unbalanced_subledger_entries use
         where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
    UNION ALL
         select /*+ PARALLEL(se, 2) */  company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
         from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC')
    ) z
    group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
    having abs(sum(z.amount)) >= 0.01
    Explain plan:
    Plan hash value: 1993777817
    | Id  | Operation                      | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |                              |       |       | 76718 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR                |                              |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10002                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    FILTER                      |                              |       |       |            |          |  Q1,02 | PCWC |            |
    |   4 |     HASH GROUP BY              |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH             | :TQ10001                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | P->P | HASH       |
    |   7 |        HASH GROUP BY           |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | PCWP |            |
    |   8 |         VIEW                   |                              |    15M|  2055M| 76116   (1)| 00:15:14 |  Q1,01 | PCWP |            |
    |   9 |          UNION-ALL             |                              |       |       |            |          |  Q1,01 | PCWP |            |
    |  10 |           PX BLOCK ITERATOR    |                              |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWC |            |
    |* 11 |            TABLE ACCESS FULL   | UNBALANCED_SUBLEDGER_ENTRIES |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWP |            |
    |* 12 |           HASH JOIN            |                              |    15M|   928M| 74270   (1)| 00:14:52 |  Q1,01 | PCWP |            |
    |  13 |            BUFFER SORT         |                              |       |       |            |          |  Q1,01 | PCWC |            |
    |  14 |             PX RECEIVE         |                              |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  15 |              PX SEND BROADCAST | :TQ10000                     |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |* 16 |               TABLE ACCESS FULL| ACCOUNT_NUMBERS              |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |  17 |            PX BLOCK ITERATOR   |                              |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWC |            |
    |* 18 |             TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES    |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
      11 - access(:Z>=:Z AND :Z<=:Z)
           filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
      18 - access(:Z>=:Z AND :Z<=:Z)
    I have a few doubts regarding this execution plan, and I am sure my questions will get answered here.
    Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two row sources, with HASH GROUP BY applied to each of them individually?
    Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I don't. For some other queries, I have observed around 10 GB of TEMP space and a high cost against this operation. So I am curious whether it is really helpful; if not, how do I avoid it?
    Q-3: Under the Predicate section, what does step 18 suggest? I am not using any filter like access(:Z>=:Z AND :Z<=:Z).

    aychin wrote:
    Hi,
    About BUFFER SORT, first of all it is not specific for Parallel Executions. This step in the plan indicates that internal sorting have a place. It doesn't mean that rows will be returned sorted, in other words it doesn't guaranty that rows will be sorted in resulting row set, because it is not the main purpose of this operation. I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort. (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
    In this case, I think the buffer sort may be a consequence of the broadcast distribution, and tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the more recent of the two plans with a buffer sort, the second (probe) table in the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    +"Science is more than a body of knowledge; it is a way of thinking"+
    +Carl Sagan+

  • Data loading from planning forms into two different essbase application

    Hi All,
    I have 2 different Planning applications on 2 different Planning and Essbase servers, but the application repository for these applications is the same and lies on a 3rd server. Because of this, when I add a member to either application it is automatically reflected in the other application, since the application repository is shared. When I refresh the applications, the metadata is loaded into their corresponding Essbase servers.
    My problem is:
    When I enter a data figure into a Planning form, it goes into the corresponding Essbase cube. But I want data entered in one Planning application's form to be reflected in both Essbase applications, LIKE THE METADATA, which is not happening. We are doing this for failover support.

    Hi,
    I am sorry, but that is not how I look at it; the points of failure for a Planning application would be the web server, the repository, the Essbase database...
    If it is the web server element, then surely you would have two Planning web servers pointing to one Planning application and one repository... then if one web server went down you switch to the other web server?
    You won't get a Planning application to write to two separate Essbase applications; if you want failover on Essbase then you will need to look at alternative methods.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Create two different iCloud accounts with 2 iPhones (family plan)?

    How do I create two different iCloud accounts with 2 iPhones (family plan)? Both phones are registered under my name, and when my wife set up her phone with iOS 5, it would only let her log in using my account. I thought it used your iTunes account info; is this not the case? I really want her to have her own separate account on her phone.
    --Brian

    I resolved it by using our separate Apple IDs. Since we already had our own iTunes accounts, I canceled iCloud on her phone where it was using my account and logged in with hers. I did read somewhere that you can have multiple iCloud accounts under the same user ID, but I'm not sure how that works, because that is exactly how ours were set up initially, and all of a sudden all of her contacts and calendar items were on my phone and iPad and vice versa. It was a pain to remove them one at a time after I broke the accounts apart. I know that doesn't really answer your question, but that was my workaround. Of course, that also means no app sharing.

  • Running two different tests on separate monitors

    Can I run two different LabVIEW executables from one PC on separate monitors
    simultaneously? I have a 2-station test facility where I would like to use 1 PC and 2 touchscreen monitors so that 2 operators can run different tests on each station simultaneously.
    Solved!
    Go to Solution.

    As long as you are careful about not using the same resources (files, DAQ devices, etc), you can have as many executables running as you want.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • How can I Generate two different reports from single execution of Test cases in NI teststand

    Hi,
    My requirement is to generate two different reports from NI TestStand: one for logging error descriptions, and the other the report generated by TestStand by default. How can I generate a txt file that contains error descriptions other than those mentioned in the default report?
    Solved!
    Go to Solution.

    Do you need to do that just for these two sequences, but not for other sequences? I don't see a problem with using SequenceFilePostStepRuntimeError. Create this callback in both sequence files and configure them to log into the same file. The SequenceFilePostStepRuntimeError callback is called after each step of the sequence file that has a runtime error. You can access the calling step's error information via the RunState.Caller.Step.Result.Error property. Take a look at the attached example.
    The "other way" is useful if you need to log errors not for every step of the sequence file, but only for some of them. This is more complex, because you need to create custom step types for these steps. For the custom step you can create substeps (a post-step in your case) which will be executed every time after a step of this type executes. Then it is your job to determine whether an error happened in the step; access to the step's error information is via the Step.Result.Error property.
    Also, be aware that a step's post-expression is not executed in case of an error in the step.
    Sergey Kolbunov
    CLA, CTD
    Attachments:
    SequenceFilePostStepRuntimeError_Demo.seq ‏7 KB

  • Different plan for same SQL in different version of DB

    Hello,
    At the client site I have upgraded the database from 11.2.0.1.0 to 11.2.0.2.0, and since then I am facing a performance issue.
    Could this be caused by the upgrade?
    For testing I created two databases of different versions (11.2.0.1.0 and 11.2.0.2.0) with the same data volume and the same parameters. When I execute the same query against each, I get different plans.
    Please help me resolve this issue.

    Which issue? The CBO is updated with every new release, so yes, execution plans may change because of that.
    And sensible DBAs upgrade a test database first.
    Also, you have not posted the statement or the plans.
    Secondly, you can always set optimizer_features_enable to the lower release.
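    As a minimal sketch (assuming an 11.2.0.2 database; the statement is whatever query regressed for you), you can test whether the new optimizer features are driving the plan change by reverting the CBO to the previous release's behavior for your session only:

```sql
-- Revert optimizer behavior to the pre-upgrade release, for this session only
ALTER SESSION SET optimizer_features_enable = '11.2.0.1';

-- Re-run the slow statement here, then pull its actual plan from memory
-- (with statistics_level=ALL, as shown earlier in the thread)
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

    If the old plan (and the old run time) comes back, the regression is caused by the 11.2.0.2 optimizer changes rather than by stale statistics.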
    Sybrand Bakker
    Senior Oracle DBA

  • What is the best way to port complete applications from DEV - Test - PROD

    Hi,
    One of my customers recently asked me: supposing I do the complete integration and modelling in SOA Suite in the DEV environment, then
    what is the best way to port complete applications from DEV -> Test -> PROD?
    Also, since the URLs in use in the DEV environment will be very different from the other environments, what is the easiest way to maintain them, and to build in access control mechanisms?
    Best Regards

    It has been discussed here in detail:
    SOA 11g  Composite Deployment across multiple Instances: Best Practice
    > since the URLs in use in the DEV environment would be very different from other environments, what is the easiest way to maintain them, and to build in Access Control mechanisms?
    You may use a deployment plan for this purpose. For access control, you may use the role-based access of WebLogic and EM. Please refer to:
    http://download.oracle.com/docs/cd/E17904_01/integration.1111/e10226/appx_roles_privs.htm#BABIHDFJ
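    As a rough sketch of what such a deployment (config) plan looks like — the hostnames below are hypothetical placeholders, not values from this thread — the plan rewrites environment-specific URLs when the composite is deployed:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical SOA 11g config plan: rewrites DEV endpoint URLs to PROD ones at deployment time -->
<SOAConfigPlan xmlns="http://xmlns.oracle.com/soa/configplan">
  <composite name="*">
    <!-- Fix up URLs in imported WSDL/XSD locations -->
    <import>
      <searchReplace>
        <search>http://dev-host:8001</search>
        <replace>http://prod-host:8001</replace>
      </searchReplace>
    </import>
    <!-- Fix up endpoint URLs on web service references -->
    <reference name="*">
      <binding type="ws">
        <searchReplace>
          <search>http://dev-host:8001</search>
          <replace>http://prod-host:8001</replace>
        </searchReplace>
      </binding>
    </reference>
  </composite>
</SOAConfigPlan>
```

    Keeping one plan per target environment (test, prod) lets the same SAR archive be deployed unchanged to each instance.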
    Regards,
    Anuj

  • Clubbing two planned orders into one production order

    Hi,
    In an MTO scenario we have two sales orders from two different customers for the same finished good, and two planned orders are generated after MRP. Now we want a single production order for these two, as the production people are not concerned with customers while producing, and after confirmation of this single production order we need to allocate the stock to the customers according to their sales order quantities.
    How do we map this in SAP, given that we want to use the individual ("02") indicator in the MRP views for the availability check?
    Please suggest…
    pavan

    Dear,
    As you have mentioned, two planned orders are generated from different sales orders; this means the finished goods will be held in two individual sales order stocks once production is complete, and subsequently invoicing and delivery will be done.
    In that case, why should the two planned orders be clubbed into one production order, and how will you do the cost analysis at the individual level, since it is an MTO order?
    I think you can achieve this if you follow an MTS business process with planning strategy 20, or gross requirements planning, 11 (mixed MRP). In that case, you need to sell the FG as a traded item (TAN item category) and the stock will lie in unrestricted stock.
    Just analyse, and revert to us if you need further help.
    Regards
    JH

  • Uprezzing film timeline which uses two different codecs

    So I have my final film edit in FCP 6 which uses two different codecs: DVCPROHD 720p60 at 23.98fps and HDV 1080p24 at 23.98fps. There are only about 15 clips which are the HDV codec.
    My next step is to color correct in Apple Color. So I did a quick test where I sent a couple of clips of each codec, on the same timeline, into Color and back out into FCP, which seemed to roundtrip without any hitches. The only difference is that the DVCPRO clips came back as ProRes 422 HQ 1280x720 and the HDV clips came back as 1920x1080, which is all fine.
    However, I do want 1080p (1920x1080) as my final output, for Blu-ray as well as DCP if my film gets picked up by a festival. So what I plan to do for my final output is export my completed Colored timeline to Compressor and use ProRes 422 HQ at 1080p. My question is: since there isn't a specific 1080p ProRes HQ codec (like there is for DVCPROHD), but simply the "Geometry" tab in Compressor where you can change the 'Dimensions' to 1920x1080, is this how I "uprez" my DVCPRO 720p clips to match my HDV 1080p clips?
    I just want to make sure NOW, that my workflow will be good all the way up to finishing, before I start sending sections of my timeline to Color...
    Thanks!
    David

    OK...you wanted a 1080p final, right? So why shrink the HDV down to 720p, and then blow it back up to 1080p? That's the point of doing both formats separately, and upscaling only the 720p...keeping the HDV at full size.  How noticeable will it be? Test to find out.  Personally, I'd do it my way. 
    And yes, upscale AFTER you color correct.  As for re-color correcting down the line...why? Color correcting is the final process...do it until it's right...why would you go back? And if you did adjust a couple shots, then just render those back out, and only upscale those and put into that final sequence again.  It's a small patch.
    >why is it better to send the DVCPRO 720p stuff for upscale BEFORE marrying it with the HDV 1080p stuff.
    because the HDV is already 1080. Scaling it down to 720, then back up will be TWO generation losses. Why do that?
    >my method of upscaling the 720p with the 1080p stuff at the same time seemed logical to me (the 720p is upscaled while the 1080p stays the same)
    But it doesn't...you first will be shrinking it to 720p...then blowing up to 1080p. And HDV is a GOP format without real frames, so the compression really pops when you upscale.
    >just want to know WHY your method is better??
    Because I tested this out...mixing 720p DVCPRO HD and HDV...and my method gave the best results.  Call it "experience."
