Query parse and execution order

Hi,
In the SQL below, my understanding is that parsing proceeds from right to left: the filter condition's field name is checked first, the syntax is verified, and then the join column names are resolved.
From the execution plan below, I believe execution also runs from right to left: the data is filtered first and then the two tables are joined. Is this correct?
Is this order fixed, or does it keep changing based on statistics or other parameters?
I would like to know how this SELECT statement is executed at run time: is the data joined first and then the filter condition applied, or the other way round? Please give me more details, thank you.
SELECT * FROM EMP E, DEPT D
WHERE E.DEPTID = D.DEPTID AND
D.DEPTNAME = 'DEPT1';
Below is the execution plan:
SQL> SELECT * FROM EMP E, DEPT D
  2  WHERE E.DEPTID = D.DEPTID AND
  3  D.DEPTNAME = 'DEPT1';
Execution Plan
----------------------------------------------------------
Plan hash value: 1123238657

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |   143 |     5  (20)| 00:00:01 |
|*  1 |  HASH JOIN         |      |     1 |   143 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMP  |     1 |    78 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DEPT |     1 |    65 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("E"."DEPTID"="D"."DEPTID")
   3 - filter("D"."DEPTNAME"='DEPT1')

Note
-----
   - dynamic sampling used for this statement

>
- Oracle does a full table scan of the EMP table and builds an in-memory hash table with DEPTID as the hash key
- the DEPT table is then read; as Oracle reads each DEPT row, it applies the filter predicate ("D"."DEPTNAME"='DEPT1'), applies the hash function to the join key (DEPTID),
and uses it to locate the matching rows from EMP
- matching rows are returned to the client
>
I believe that is correct for this particular query and plan only because there is only one row in each table. If the tables had many more records, the smaller of the two tables would be chosen to build the hash table, and the following should apply.
The DEPT table is the smaller of the two, so Oracle would do a full table scan of DEPT to build the in-memory hash table with DEPTID as the hash key. Note that the filter predicate ("D"."DEPTNAME"='DEPT1') is applied during that scan, so only rows for the department of interest ever enter the hash table.
Then the EMP table (the larger table) is scanned, and each DEPTID value is used to probe the hash table to find the matching rows.
See section 11.6.4 Hash Joins in the Performance Tuning Guide
>
Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
>
This example uses copies of emp and dept (EMP1 and DEPT1) with no primary keys or constraints:
SQL> SELECT * FROM EMP1 E, DEPT1 D
  2  WHERE E.DEPTNO = D.DEPTNO AND
  3  D.DNAME = 'RESEARCH';
Execution Plan
----------------------------------------------------------
Plan hash value: 619452140

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     5 |   380 |     7  (15)| 00:00:01 |
|*  1 |  HASH JOIN         |       |     5 |   380 |     7  (15)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| DEPT1 |     1 |    30 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMP1  |    14 |   644 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("E"."DEPTNO"="D"."DEPTNO")
   2 - filter("D"."DNAME"='RESEARCH')

Note
-----
   - dynamic sampling used for this statement (level=2)
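To see for yourself which table the optimizer picks as the build table, you can generate the plan and experiment with hints. A minimal sketch against the EMP1/DEPT1 copies above; LEADING and USE_HASH are standard Oracle hints, and forcing a plan this way is for experimentation rather than production use:

EXPLAIN PLAN FOR
SELECT /*+ LEADING(d) USE_HASH(e) */ *
FROM   EMP1 e, DEPT1 d
WHERE  e.DEPTNO = d.DEPTNO
AND    d.DNAME  = 'RESEARCH';

-- Display the chosen plan; the first child of the HASH JOIN is the
-- build (hash) table, the second is the probe table.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);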

Similar Messages

  • How are query cost and execution time related?

    hi experts,
    I am curious to know how query cost and execution time are related. One query takes less time but its query cost is 65%, while another query takes more time but its query cost is 0%.
    How do I connect the two and improve query performance?
    Thanks

    I think you are referring to "cost (relative to the batch): 65%". When there is more than one statement, it compares the cost of each statement within the batch.
    I assume it mainly takes the subtree cost and I/O statistics as the cost, but in some cases this may be off, for instance when there is a multi-line function; many other factors influence the cost, and I would say it depends on the query.
    Cost is unit-less.
    The reason these costs exist is because of the query optimization SQL Server does: it performs cost-based optimization, which means that the optimizer formulates a lot of different ways to execute the query, assigns a cost to each of these alternatives, and chooses the one with the least cost. The cost tagged on each alternative is heuristically calculated, and is supposed to roughly reflect the amount of processing and I/O this alternative is going to take.
    refer :
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2006/10/11/what-is-this-cost.aspx
    Thanks
    Saravana kumar C
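    To relate an estimated cost to actual behaviour, it can help to put measured time and I/O next to the optimizer's estimate. A minimal T-SQL sketch; the dbo.Orders table and its columns are placeholders, not from the original post:

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT OrderID, OrderDate      -- placeholder query
    FROM dbo.Orders                -- placeholder table
    WHERE OrderDate >= '20120101';

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    -- The Messages tab now shows elapsed/CPU time and logical reads;
    -- compare these with the estimated subtree cost shown in the
    -- graphical execution plan (Include Actual Execution Plan in SSMS).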

  • Expression parsing and execution

    Hi All,
    I need ideas/code/suggestions from you all. I am working on a Formula Builder UI in which a user can create their own expressions, choosing some variable values as well. Expressions can contain IF, AND, OR and NOT. For example, one may look like this:
    IF my_variable > 25 THEN my_variable * 20 ELSE my_variable / 2 END
    Lines like these should first be parsed for completeness and then executed for their values.
    Hope to get a supportive help from all of you.
    Regards,
    Sandeep

    Hi,
    I am trying to evaluate an expression like =IF(A1<0.125,0.0005,(IF(A1<0.25,0.00066667,(IF(A1<0.5,0.00075,(IF(A1<0.75,0.000916667,0.0010833))))))) in Java. In Excel it is easy, but I cannot use that, and I cannot use any free downloads. Is there any sample code I can use to start off? I need to parse and evaluate the above expression.

  • Query text and execution plan collection from prod DB Oracle 11g

    Hi all,
    I would like to collect the query text, execution plan, and other statistics for all queries (mainly SELECT queries) from a production database.
    I am doing this in OEM by clicking the Top Activity link under the Performance tab, but that gives only the recent top SQL.
    This approach is helpful only when I need to debug recent queries. If I need to know the slow-running queries and their execution plans at the end of the day or some time later, it is not helpful.
    If anybody has a better idea of how to do this, it would be really helpful.

    We did the following:
    1. Used awrextr.sql to export a dump file from the production database (exported snapshot ids 331 to 560).
    2. Transferred the file to the test database.
    3. Used awrload.sql to import it into the test database.
    But when we use OEM and go to the Automatic Workload Repository link under the Server tab,
    it does not show the snapshots of the production database (which we imported into the test database)
    and shows only the snapshots that were already there in the test database.
    We did not find any errors in the import/export.
    Do we need to do something else to display the production database's snapshots in the test database?
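    If the snapshots are in the repository, the AWR history views can also be queried directly, without OEM (note that querying AWR data requires the Diagnostics Pack license). A minimal sketch using the snapshot range from above; the sql_id value is a placeholder:

    -- Top SQL by elapsed time in the imported snapshot range
    SELECT st.sql_id,
           SUM(st.elapsed_time_delta) AS elapsed_us,
           SUM(st.executions_delta)   AS execs
    FROM   dba_hist_sqlstat st
    WHERE  st.snap_id BETWEEN 331 AND 560
    GROUP  BY st.sql_id
    ORDER  BY elapsed_us DESC;

    -- Text and historical plan for one statement
    SELECT sql_text FROM dba_hist_sqltext WHERE sql_id = 'abcd1234efgh';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('abcd1234efgh'));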

  • Execution order - group by and order by

    Is there any execution order when we use GROUP BY and ORDER BY together in a single query?

    BOL: "Logical Processing Order of the SELECT statement
    The following steps show the logical processing order, or binding order, for a SELECT statement. This order determines when the objects defined in one step are made available to the clauses in subsequent steps. For example, if the query processor can bind to
    (access) the tables or views defined in the FROM clause, these objects and their columns are made available to all subsequent steps. Conversely, because the SELECT clause is step 8, any column aliases or derived columns defined in that clause cannot be referenced
    by preceding clauses. However, they can be referenced by subsequent clauses such as the ORDER BY clause. Note that the actual physical execution of the statement is determined by the query processor and the order may vary from this list.
    1. FROM
    2. ON
    3. JOIN
    4. WHERE
    5. GROUP BY
    6. WITH CUBE or WITH ROLLUP
    7. HAVING
    8. SELECT
    9. DISTINCT
    10. ORDER BY
    11. TOP"
    http://msdn.microsoft.com/en-us/library/ms189499.aspx
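    As a quick illustration of the binding order, an alias defined in the SELECT clause (step 8) is visible to ORDER BY (step 10) but not to WHERE (step 4) or GROUP BY (step 5). A minimal sketch; dbo.Sales is a placeholder table:

    SELECT Region,
           SUM(Amount) AS TotalAmount   -- defined in step 8 (SELECT)
    FROM   dbo.Sales
    WHERE  Amount > 0                   -- step 4: TotalAmount is not visible here
    GROUP  BY Region                    -- step 5: the TotalAmount alias cannot be used here
    ORDER  BY TotalAmount DESC;         -- step 10: the alias is visible here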
    Kalman Toth Database & OLAP Architect

  • Query: Module Execution Order

    Hi Experts,
    I have a question about the execution order of adapter modules. Say my requirement is File (with FCC) to File (with FCC), and I need to write a module at both the sender and receiver end due to some business requirement. My questions are:
    1. On the sender side, does the sender adapter's user-defined module get executed before the sender's standard adapter module? If so, what is the format of the data available to the user-defined module (XML data after file content conversion)?
    2. A similar query, but for the receiver channel (receiver adapter module execution order).
    Please explain the nature and order of execution of the adapter modules and the format of the data available to the user-defined modules.
    Thanks,
    Hussain.

    Hi Hussain,
    So, does that mean the input data to the sender custom module is in text format, i.e. the whole text file as plain text (not XML)? - Yes.
    Also, can the order of the modules be changed, i.e. can the custom module be specified and executed after the standard module? I suppose no, but still wanted a confirmation. - Only in synchronous communication: when you have a sender communication channel with a receiver custom module, it is specified after the standard module in the sender channel's module tab. For asynchronous communication, where you have a receiver communication channel, you specify your custom module before the standard module.
    Hi Prateek,
    I think that the input to the custom module would be the XML created after content conversion. - No, the file will be input as a plain text file, and in your custom module you create an XML document for your sender message. The same applies when you have an Excel sheet as input: you get the Excel file data as input and create an XML document in your custom module.
    Regards,
    Rajeev Gupta

  • Parse and output XML document while preserving attribute order

    QUESTION: How can I take in an element with attributes from an XML document and output the same element and attributes while preserving the order of those attributes?
    The following code will parse an XML document and generate (practically) unchanged output. However, all attributes are ordered a-z.
    Example: The following element
    <work_item_type work_item_db_site="0000000000000000" work_item_db_id="0" work_item_type_code="3" user_tag_ident="Step" name="Work Step" gmt_last_updated="2008-12-31T18:00:00.000000000" last_upd_db_site="0000000000000000" last_upd_db_id="0" rstat_type_code="1">
    </work_item_type>
    is output as:
    <work_item_type gmt_last_updated="2008-12-31T18:00:00.000000000" last_upd_db_id="0" last_upd_db_site="0000000000000000" name="Work Step" rstat_type_code="1" user_tag_ident="Step" work_item_db_id="0" work_item_db_site="0000000000000000" work_item_type_code="3">
    </work_item_type>
    As you may notice, there is no difference between these besides the order of the attributes!
    I am convinced that the problem is not in the stylesheet.xslt, but if you are not, it is posted below.
    Please, someone help me out with this! I have a feeling the solution is simple.
    The following takes the XML from source.xml and outputs it to DEST_filename with attributes in a-z order:
    Code:
    // requires: java.io.File, javax.xml.transform.*, javax.xml.transform.stream.*
    private void OutputFile(String DEST_filename, String style_filename){
         //StreamSource stylesheet = new StreamSource(style_filename);
         try{
              File dest_file = new File(DEST_filename);
              if(!dest_file.exists())
                  dest_file.createNewFile();
              TransformerFactory tranFactory = TransformerFactory.newInstance();
              Transformer aTransformer = tranFactory.newTransformer();
              aTransformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");
              // StreamSource, not DOMSource: DOMSource expects a DOM Node, not a file name
              Source src = new StreamSource("source.xml");
              Result dest = new StreamResult(dest_file);
              aTransformer.transform(src, dest);
              System.out.println("Finished");
         } catch(Exception e){
              System.err.print(e);
              System.exit(-1);
         }
    }

    You can't. The reason is, the XML Recommendation explicitly says the order of attributes is not significant. Therefore conforming XML serializers won't treat it as if it were significant.
    If you have an environment where you think that the order of attributes is significant, your first step should be to reconsider. Possibly it isn't really significant and you are over-reaching in some way. Or possibly someone writing requirements is ignorant of this fact and the requirement can be discarded.
    Or possibly your output is being given to somebody else who has a defective parser which expects the attributes to be in a particular order. You could quote the XML Recommendation to those people but often XML bozos are resistant to change. If you're stuck writing for that parser then you'll have to apply some non-XML processing to your output to fix it up on their behalf.

  • Combined Purchase Order and Sales Order Query. (Including Stand Alone Docs)

    Hi,
    I need some help please!
    I am looking for a query that will show all the Purchase Orders with their base-document Sales Orders. However, I also need to show the stand-alone Purchase Orders and Sales Orders.
    What I have is two queries, one for the PO fields and one for the SO fields. I would like a way to combine these two so that I have one query with the relevant PO and SO info next to each other.
    The final query will have a top heading structure like this:
    Status, Purchase No., Supplier No., Supplier Name, Week, Month, Del Method, Country, Method, Rep. | Status, Rep., Sales No., Customer No., Customer Name, Customer Order no., Cust. Del date, Doc total, Del Method
    (The first part is the purchase order section; the second part, starting at the second 'Status', is the sales order section.)
    Purchase Order Query:
    SELECT
      T0.DocStatus 'Status',
      T0.DocNum'Purchase Order No.',
      T0.CardCode 'Supplier No.',
      T0.CardName'Supplier Name',
      DATEPART(ww,T0.DocDuedate)'Week',
      DATEPART(mm,T0.DocDuedate) 'Month',
      T1.TrnspName 'Delivery Method',
      T2.Country 'Country',
      T0.JrnlMemo 'Method',
      T3.SlpName 'Rep.'
    FROM OPOR T0 
    INNER JOIN OSHP T1 ON T0.TrnspCode = T1.TrnspCode
    INNER JOIN OCRD T2 ON T0.CardCode = T2.CardCode
    INNER JOIN OSLP T3 ON T0.SlpCode = T3.SlpCode
    Sales Order Query:
    SELECT
      T0.DocStatus 'Status',
      T2.SlpName 'Rep.',
      T0.DocNum 'Sales Order No.',
      T0.CardCode 'Customer No.',
      T0.CardName 'Customer Name',
      T0.NumAtCard 'Customer Order no.',
      T0.DocDueDate 'Cust. Delivery date',
      T0.DocTotal ' Doc total',
      T1.TrnspName ' Delivery Method'
    FROM ORDR T0 
    INNER JOIN OSHP T1 ON T0.TrnspCode = T1.TrnspCode
    INNER JOIN OSLP T2 ON T0.SlpCode = T2.SlpCode
    The problem I am having is that if I combine these queries it excludes all those records with blank/empty data fields.
    I do not mind having to go to crystal reports to get what I want.
    Any suggestions?
    Thanks!

    Hi, thanks GordonDu,
    I have tried a union before, and yours works great. However, my problem is displaying the PO and SO columns next to each other rather than underneath each other.
    Something like this:
    Po No          Supplier          SO No           Customer
    1234           Sup 1             9876              Cust 1
    1235           Sup 2             no SO             Cust 2
    1236           Sup 3             9877              Cust 3
    1237           Sup 4             9878              Cust 4
    1238           Sup 5             no SO             Cust 4
    I have this query, but it drops all the Purchase Orders created stand-alone, without an SO number.
    SELECT
      T2.DocNum AS 'Purchase Order No.',
      T0.DocNum AS 'Sales Order No.',
      T2.CardCode AS 'Supplier No.',
      T2.CardName AS 'Supplier Name'
    FROM ORDR T0 
       INNER JOIN RDR1 T1 ON T0.DocEntry = T1.DocEntry
       INNER JOIN OPOR T2 ON T2.DocEntry = T1.PoTrgEntry
    Though using your union gives me all the information I want and the ability to expand on that, my problem is the "display" part. I am sure I am doing something stupid somewhere.
    Edited by: Desmond Moll on Mar 7, 2012 8:03 AM
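    One way to keep the stand-alone documents on both sides while still pairing each PO with its base SO is to replace the INNER JOINs with FULL OUTER JOINs. A minimal sketch using the same tables and the RDR1.PoTrgEntry link from the query above; this is untested, and whether that link is populated for your documents is an assumption:

    SELECT
      T2.DocNum   AS 'Purchase Order No.',
      T2.CardCode AS 'Supplier No.',
      T2.CardName AS 'Supplier Name',
      ISNULL(CAST(T0.DocNum AS VARCHAR(20)), 'no SO') AS 'Sales Order No.',
      T0.CardName AS 'Customer Name'
    FROM OPOR T2
      FULL OUTER JOIN RDR1 T1 ON T1.PoTrgEntry = T2.DocEntry
      FULL OUTER JOIN ORDR T0 ON T0.DocEntry = T1.DocEntry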

  • I have a result after execution of OperationBinding. How can I parse and store it in a list?

    I have a result after execution of an operation binding. How can I parse it and store it in a list?

    Hi,
    I thought you had a custom method in the AM that returns a List data type. What is the requirement exactly? The commit operation has a void return type.
    Ex: create a custom method in the AMImpl class:
    public List sampleData(){
        // code here to set the values in the List object
    }
    Now expose the above method in the client interface, then add the method action binding in the pageDef file and execute the below code in a managed bean:
    OperationBinding operationBinding = (OperationBinding) bindings.getOperationBinding("sampleData");
    List result = (List) operationBinding.execute();
    Thanks
    Nitesh

  • Suggestions for query formulation and parsing

    I'm trying to design a query format that can be used to perform searches across my objects. The query would be allowed to contain predicates with nested binary logical operators (e.g., (((a > 4) AND (b < 5)) OR (c == "hello world")) ).
    I need to design a query language, and need to have a way to parse an incoming expression and generate a parse tree from it that I can execute a search against.
    What is the elegant/correct solution to do this? Are there Java libraries that are equivalents of lex and yacc? What are suggestions for the simplest way to do this, presuming limitations can be put on the query language to make it simple?

    > What is the elegant/correct solution to do this? Are there Java libraries that are equivalents of lex and yacc?
    Sure. Google JavaCC.

  • Low Execute to Parse % and high soft parse %

    Hello folks,
    I am working on Oracle 10g Release 2 on HP-UX.
    Going through AWR reports, I observed a low Execute to Parse % but a high Soft Parse % (Instance Efficiency Percentages), so I cannot say the issue is poor use of bind variables. What, then, causes a low Execute to Parse %?
    I searched sites like Ask Tom, Burleson Consulting, etc.; as usual they give generic, diplomatic (evasive) replies, like a problem in the application code, inefficient sharing, or a problem in the database parameters, without any clear indication of the cause and solution. If some database parameters are not set properly, they should say which parameters to check for a cause of more parsing and less execution.
    Please share if you have faced such an issue and any suggestions to solve it, with examples of why this could happen, e.g. possibilities in the application code.
    Thanks

      Load Profile
                                              Per Second       Per Transaction
                   Redo size:                  11,685.79              3,660.98
                   Logical reads:              71,445.74             22,382.86
                   Block changes:                  70.89                 22.21
                   Physical reads:                 58.63                 18.37
                   Physical writes:                 2.80                  0.88
                   User calls:                    652.93                204.55
                   Parses:                         48.39                 15.16
                   Hard parses:                     0.33                  0.10
                   Sorts:                           6.90                  2.16
                   Logons:                          0.23                  0.07
                   Executes:                       52.71                 16.51
                   Transactions:                    3.19

                % Blocks changed per Read:   0.10    Recursive Call %:  30.48
                Rollback per transaction %:  2.57       Rows per Sort:  29.66

      Instance Efficiency Percentages (Target 100%)
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                   Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                   Buffer  Hit   %:   99.92    In-memory Sort %:  100.00
                   Library Hit   %:   98.47        Soft Parse %:   99.32
                Execute to Parse %:    8.19         Latch Hit %:   99.63
       Parse CPU to Parse Elapsd %:   89.90     % Non-Parse CPU:   99.62

    The RDBMS performs approximately 48 soft parses per second. Soft Parse % and Library Hit % are both close to 99, which means the main part of the SQL is shared. User calls per second are also high while executes are comparatively low, so you should still try to minimize soft parsing. I do not know for which interval you got this report, but the Execute to Parse % indicates that when executing a query Oracle cannot find an earlier cursor handle (open or closed), although it can find the SQL text and plan information in the shared pool by their hash values; in that case Oracle performs a soft parse again. You should also investigate the shared pool size/fragmentation. To avoid a low Execute to Parse % you can increase SESSION_CACHED_CURSORS or implement CURSOR_SPACE_FOR_TIME. Refer to the documentation for how to use these parameters.
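    A minimal sketch of the cursor-cache checks mentioned above; the value 50 is illustrative, not a recommendation:

    -- How often are cursors being found in the session cursor cache?
    SELECT name, value
    FROM   v$sysstat
    WHERE  name IN ('session cursor cache hits', 'parse count (total)');

    -- Current parameter setting
    SELECT value FROM v$parameter WHERE name = 'session_cached_cursors';

    -- Raise it for one session to test the effect
    ALTER SESSION SET session_cached_cursors = 50;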

  • Sql parse and shared pool

    hi friends, I have a procedure and it has
    (AD IS NULL OR NVAD LIKE AD||'%') AND (SOYAD IS NULL OR NVSOYAD LIKE SOYAD||'%')
    If I use this query and pass different things for AD, does it become a brand-new query for Oracle because of the ||'%', or is it parsed once, put in the shared pool, and for some time afterwards taken from the shared pool without parsing?
       PROCEDURE P_YENI_TALEP_LISTELE(RC_CURSOR OUT SYS_REFCURSOR,TOPLAM_TALEP OUT NUMBER,SAYFA_INDEX IN NUMBER,SAYFA_BUYUKLUK  IN NUMBER,TC_NO IN NVARCHAR2,AD IN NVARCHAR2,SOYAD IN NVARCHAR2,ONAY IN NUMBER,H_TIP_ID IN NUMBER)
        AS 
        BEGIN
      OPEN RC_CURSOR FOR SELECT TA.NT_ID, TA.NTC_NO, HI.NVHIZMET_TUR AS NVARM_KONU, TA.NVOPRTR_CVP, TA.NVGRSM_SURE, TA.NVGRSM_DRM,
                   TA.NHIZMET, TA.BONAY, TA.DTLP_TRH, TH.NVAD, TH.NVSOYAD, TH.NVILCE, TA.DALINAN_TRH,
                   TA.DBRKLAN_TRH, TA.NG_ID, GU.NVAD1||' - '||GU.NVAD2 AS GUZERGAH, HI.NH_TIP_ID, HT.NVHIZMET_TIP
            FROM H_TALEP TA, TNM_HASTA_BILGI TH, TNM_HIZMET HI, SBT_HIZMET_TIP HT, TNM_GUZERGAH GU
      WHERE TA.NTC_NO=TH.NTC_NO AND TA.NH_ID=HI.NH_ID AND HI.NH_TIP_ID=HT.NH_TIP_ID AND (TC_NO IS NULL OR TA.NTC_NO=TO_NUMBER(TC_NO)) AND TA.NG_ID=GU.NG_ID AND
                      (AD IS NULL OR NVAD LIKE AD||'%') AND (SOYAD IS NULL OR NVSOYAD LIKE SOYAD||'%')...............

    The code you have posted has no DYNAMIC SQL in it.... Static SQL inside PL/SQL will bind all the variables for you.
    So what you are saying does not compute.
    What is making you think a 'brand new query' is being parsed for each execution?
    Edited by: Tubby on Nov 8, 2008 4:19 PM
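    To verify this empirically, check what the statement accumulates in the shared pool. A minimal sketch; the LIKE pattern is a placeholder for any distinctive fragment of the query text:

    SELECT sql_id, child_number, parse_calls, executions
    FROM   v$sql
    WHERE  sql_text LIKE '%NVSOYAD LIKE SOYAD%';
    -- One sql_id for all AD values means the statement is shared;
    -- a different sql_id per value would mean a brand-new query each time.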

  • What is the faster execution order of the different mapping types?

    Is the order XSLT, Java (SAX), Java (DOM), graphical (SAX), and then ABAP mappings?
    Is there any graphical mapping with a DOM parser?
    Please send the faster execution order.

    Hi,
    I am not getting exactly what you are looking for; I think I already answered your questions in your previous thread:
    Mapping types performance
    Is there any graphical mapping with a DOM parser?
    --> Graphical mapping with a DOM parser is not possible, so you have to go for Java mapping.
    JAVA mapping
    /people/amjad-ali.khoja/blog/2006/02/07/using-dom4j-in-xi--a-more-sophisticated-option-for-xml-processing-than-sap-xml-toolkit
    Mapping Techniques
    Please let me know if there is anything specific you are looking for, so we can narrow down the analysis and answers accordingly.
    Thanks
    Swarup

  • Higher parse and execute than I expected

    I am trying to diagnose a problem, but I'm a bit confused about how the following could happen. How could I get 25 parses of the same SQL statement? I'd expect 1 parse and 25 executes (which is what I get when I run that statement 25 times during a separate trace).
    SELECT PROJ_ID 
    FROM TIMEX.TD_PROJECTS 
    WHERE PROJ_TYPE = 'LVE'
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       25      0.03       0.02          0          0          0           0
    Execute     25      0.00       0.00          0          0          0           0
    Fetch      265      0.16       0.19         31        935          0         250
    total      315      0.19       0.21         31        935          0         250
    Misses in library cache during parse: 1

    "only slightly more amusing than using DBMS_RANDOM"? Seriously Dan? I've got Cary's book & I've been to several of the Hotsos conferences. Method R doesn't necessarily apply here, or if it does, then please give me your targeted approach to implementing it. In my mind, I have to find out where the problem is before I can implement it. If the latch contention in the library cache is causing every session in the db to slow down, and I know that 4 new applications were introduced, then I need to figure out which of those 4 apps is the biggest contributor to the latching problem. Right?
    Initial Goal:
    Load the system with 1/3 of the upcoming user load, to see if it can handle cutover.
    Implementation:
    30 people entered application A, performing the work I captured which started my questions in this thread.
    12 people in application B, 18 in application C, 5 in application D.
    Result: Due to excessive latches in the library cache, the entire system became virtually unavailable.
    Next step: without involving 65 people for each test, capture an approximation of their work in their respective applications, and use a testing harness to find out which one is causing the majority of the latch contention. Then focus on fixing that app.
    The concern I have here is that I want to capture that 1:1 parse-to-execution ratio for that one query, due specifically to the latch contention in the library cache and its excessive (and unnecessary) number of executions per user.
    If I can't replicate the problem, how can I find the problem areas and fix them?
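    One way to apportion the parse load between the four applications is to group shared-pool statistics by MODULE. A minimal sketch; it assumes each application sets a distinct module name (e.g. via DBMS_APPLICATION_INFO), otherwise the grouping tells you little:

    SELECT module,
           SUM(parse_calls) AS parse_calls,
           SUM(executions)  AS executions,
           ROUND(SUM(parse_calls) / NULLIF(SUM(executions), 0), 2) AS parses_per_exec
    FROM   v$sql
    GROUP  BY module
    ORDER  BY parse_calls DESC;
    -- A module whose parses_per_exec approaches 1 parses on nearly every
    -- execution and is a likely contributor to library cache latch contention.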

  • Query using Table named "Order" is failing

    I'm writing a query to look at orders in a table called Order (I did not name it that). Unfortunately, this is also a reserved keyword in SQL. In the query in Visual Studio I put the table name in brackets: [Order]. The query is simple:
    SELECT DISTINCT O.ProjectNo, O.ReportType
    FROM            [Order] AS O
    The query runs correctly while in VS, but once I upload the report to SSRS server, I get an error
    Query execution failed for dataset 'EDROrders'.  (rsErrorExecutingCommand)
    Invalid object name 'Order'
    Any ideas?  I'm on SQL 2008 R2
    Milissa Hartwell

    Are you sure Order is in the same database the data source is trying to connect to? Also, are you using an expression-based connection string?
    Another scenario where this can occur is the Order table not being in the dbo schema. It may be in your default schema (say xxx), so when you just refer to it as Order it works fine, whereas when it runs from the Report Server it uses another account whose default schema is not the same as yours. As such, it won't find the table, since it is neither in that account's default schema nor in dbo.
    To check that, give the account access to your schema (if different from dbo) and run the select as below:
    SELECT DISTINCT O.ProjectNo, O.ReportType
    FROM [schemaname].[Order] AS O
    and try
    Visakh
