Simple Query Caching Question

I have a .cfm template that is used to render a handful of
pages of my website. Each page is a department, for example, like
"Arts & Entertainment", "Health", "Finance", and so on. On each
of these pages (rendered by this same template) there is a common
element... a list of our top ten articles. I have used the
"cachedwithin" feature to cache the query for a 3-hour period.
My question is this...
Since it is a single template generating these department
pages, the "top articles" query is exactly the same in terms of
query name, datasource... only the SQL statement (which uses a
"WHERE department_id = X" statement) is different. Let's say I have
ten departments rendered by this template... should it be caching
all ten queries, regardless of the names being the same?

Hopefully you have only one cfquery tag, located in the shared .cfm template, and you are
using a variable in your WHERE clause; if not, you are not being as efficient with your
code as you could be.
If you do have just one cfquery tag with a cachedwithin attribute, ColdFusion will cache
a separate copy of the query for each distinct value of your variable. Cached queries are
keyed on the query name, the datasource, and the full SQL text, so yes: with ten departments
rendered by the same template you end up with ten cached result sets, even though the query
name is the same.
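For illustration (the table and column names below are hypothetical, not from the original
post), these are the kinds of distinct SQL statements the shared template ends up sending;
ColdFusion treats each distinct statement as its own cache entry for the 3-hour window:

-- Hypothetical SQL generated by the shared template. Because the literal
-- department_id differs, the query cache keeps a separate 3-hour entry
-- for each department page, even though the cfquery name never changes.
SELECT article_id, title
FROM   articles
WHERE  department_id = 3     -- e.g. the "Health" page
ORDER BY view_count DESC;

SELECT article_id, title
FROM   articles
WHERE  department_id = 7     -- e.g. the "Finance" page
ORDER BY view_count DESC;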

Similar Messages

  • Simple Query working on 10G and not working on 11gR2 after upgrade

    Hi Folks,
    This is the first time I am posting a question on this forum.
    I have a small issue which is preventing the UAT sign-off.
    A simple query works fine on 10.2.0.4 and, after the upgrade to 11.2.0.1, it errors out.
    10.2.0.4:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    COUNT(*)
    1
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001;
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1;
    ATTRIBUTE1
    00001
    11.2.0.1:
    =====
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=1
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1=00001
    ERROR at line 1:
    ORA-01722: invalid number
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='1';
    no rows selected
    SQL> SELECT COUNT(*) FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    COUNT(*)
    1
    SQL> select ATTRIBUTE1 FROM APPS.HZ_PARTIES HP WHERE ATTRIBUTE_CATEGORY= 'PROPERTY' AND ATTRIBUTE1='00001';
    ATTRIBUTE1
    00001
    ++++++++++++++++++++++++++++++++++++++++++++++
    SQL > desc APPS.HZ_PARTIES
    Name Type
    ======== ======
    ATTRIBUTE1 VARCHAR2(150)
    ++++++++++++++++++++++++++++++++++++++++++++++
    Changes:
    Recently I upgraded the DB from 10.2.0.4 to 11.2.0.1.
    Questions:
    1. If the type of that column is VARCHAR2, why does the query work in 10.2.0.4 and not in 11.2.0.1?
    2. After the upgrade I analyzed the tables with "analyze table" for all AP, AR, GL, HR, BEN and APPS schemas -- could running "analyze table" have had an impact?
    Please provide answers to the above two questions, or point me to a document that explains this. Based on the answer the client will sign off today.
    Thanks,
    P Kumar

    WhiteHat wrote:
    the issue has already been identified: in oracle versions prior to 11, there was an implicit conversion of numbers to characters. your database has a character field which you are attempting to compare to a number.
    i.e. the string '000001' is not in any way equivalent to the number 1. but Oracle 10 converts '000001' to a number because you are asking it to compare to the number you have provided.
    version 11 doesn't do this anymore (and rightly so).
    the issue is with the bad code design. you can either: use characters in the predicate (where field = 'parameter') or you can do a conversion of the field prior to comparing (where to_number(field) = parameter).
    I would suggest that you should fix your code and don't assume that '000001' = 1.
    I don't think that the above is completely correct, and a simple demonstration will show why. First, a simple table on Oracle Database 10.2.0.4:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    A select from the above table, relying on implicit data type conversion:
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    Technically, the second row should not have been returned as an exact match. Why was it returned? Let's take a look at the actual execution plan:
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    Notice that the VARCHAR2 column was converted to a NUMBER, so if there was any data in that column that could not be converted to a number (or NULL), we should receive an error (unless the bad rows are already removed due to another predicate in the WHERE clause). For example:
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    Now the same test on Oracle Database 11.1.0.7:
    CREATE TABLE T1(C1 VARCHAR2(20));
    INSERT INTO T1 VALUES ('1');
    INSERT INTO T1 VALUES ('0001');
    COMMIT;
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    C1
    1
    0001
    SELECT
      *
    FROM
      TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    SQL_ID  g6gvbpsgj1dvf, child number 0
    SELECT   * FROM   T1 WHERE   C1=1
    Plan hash value: 3617692013
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
    |*  1 |  TABLE ACCESS FULL| T1   |     2 |    24 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(TO_NUMBER("C1")=1)
    Note
       - dynamic sampling used for this statement
    INSERT INTO T1 VALUES ('.0001.');
    SELECT
      *
    FROM
      T1
    WHERE
      C1=1;
    SQL> SELECT
      2    *
      3  FROM
      4    T1
      5  WHERE
      6    C1=1;
    ERROR:
    ORA-01722: invalid number
    As you can see, exactly the same actual execution plan, and the same end result.
    The OP needs to determine if non-numeric data now exists in the column. Was the database characterset possibly changed during/after the upgrade?
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
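    A hedged way to follow up on that last suggestion (the column names come from the post; the regular expression is an assumption about what "numeric" should mean here): list the ATTRIBUTE1 values in the 'PROPERTY' category that cannot be treated as whole numbers, since any such row will raise ORA-01722 once the predicate forces a TO_NUMBER conversion.
    -- Sketch: find candidate bad rows; loosen the pattern if signs or
    -- decimal points are legitimate in this attribute.
    SELECT HP.ROWID, HP.ATTRIBUTE1
    FROM   APPS.HZ_PARTIES HP
    WHERE  HP.ATTRIBUTE_CATEGORY = 'PROPERTY'
    AND    NOT REGEXP_LIKE(TRIM(HP.ATTRIBUTE1), '^[0-9]+$');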

  • Simple query takes time to run

    Hi,
    I have a simple query which takes about 20 minutes to run. Here is the TKPROF for it:
      SELECT
        SY2.QBAC0,
        sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
      FROM
        JDE.F5542SY2  SY2,
        JDE.F42119  SALES_ORDER,
        JDE.F0116  SHIP_TO,
        JDE.F5542SY1  SY1,
       JDE.F4101  PRODUCT_INFO
    WHERE
        ( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN  )
        AND  ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD  )
        AND  ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM  )
        AND  ( SY2.QBSHAN=SALES_ORDER.SDSHAN  )
        AND  ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ')  )
        AND  ( PRODUCT_INFO.IMSRP1 Not In ('   ','000','689')  )
        AND  ( SALES_ORDER.SDDCTO IN  ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR')  )
        AND  (
        ( SY1.QACTR=SHIP_TO.ALCTR  )
        AND  ( PRODUCT_INFO.IMSRP1=SY1.QASRP1  )
        )
      GROUP BY
      SY2.QBAC0
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       12     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 62 
    Rows     Row Source Operation
        131  SORT GROUP BY
    3535506   HASH JOIN 
    4026100    HASH JOIN 
        922     TABLE ACCESS FULL OBJ#(187309)
    3454198     HASH JOIN 
      80065      INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
    3489670      HASH JOIN 
      65192       INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
    3489936       PARTITION RANGE ALL PARTITION: 1 9
    3489936        TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
      97152    TABLE ACCESS FULL OBJ#(187308)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       13     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Kindly suggest how to resolve this...
    The OS is Windows and it is a 9i DB.
    Thanks
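    A hedged side note on reading the row source operations above: the OBJ#(n) references can be mapped back to actual table and index names with a data dictionary lookup (dba_objects needs DBA-level access; all_objects works similarly):
    SELECT object_id, owner, object_name, object_type
    FROM   dba_objects
    WHERE  object_id IN (187309, 187308, 30530, 30492, 30457);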

    > ... you want to get rid of the IN statements. They prevent Oracle from using the index.
    That is not quite true, as this demonstration shows:
    SQL> create table mytable (id,num,description)
      2  as
      3   select level
      4        , case level
      5          when 0 then 0
      6          when 1 then 1
      7          else 2
      8          end
      9        , 'description ' || to_char(level)
    10     from dual
    11  connect by level <= 10000
    12  /
    Table created.
    SQL> create index i1 on mytable(num)
      2  /
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'mytable')
    PL/SQL procedure successfully completed.
    SQL> set autotrace on explain
    SQL> select id
      2       , num
      3       , description
      4    from mytable
      5   where num in (0,1)
      6  /
                                        ID                                    NUM DESCRIPTION
                                         1                                      1 description 1
    1 row selected.
    Execution Plan
    Plan hash value: 2172953059
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |  5001 |   112K|     2   (0)| 00:00:01 |
    |   1 |  INLIST ITERATOR             |         |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| MYTABLE |  5001 |   112K|     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN | I1      |  5001 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("NUM"=0 OR "NUM"=1)
    Regards,
    Rob.

  • How I can delete a row using a simple query?

    SZSLIFE_SPRIDEN_PIDM     SZSLIFE_SGBSTDN_TERM_CODE_EFF     SZSLIFE_SLRRASG_BLDG_CODE     SZSLIFE_SLRRASG_ROOM_NUMBER     SZSLIFE_SLRRASG_BEGIN_DATE     SZSLIFE_SLRRASG_END_DATE
    48547     199890                    
    48547     199990                    
    48547     199990     BLU     205     09/03/1999     12/23/1999
    48547     200010                    
    48547     200010     BLU     205     01/25/2000     05/25/2000
    48547     200090                    
    48547     200090     MOR     406     09/03/2000     12/23/2000
    48547     200110                    
    48547     200110     MOR     406     01/25/2001     05/25/2001
    48547     200190                    
    48547     200210                    
    48547     200290                    
    48547     200310                    
    48547     200390                    
    48547     200410                    
    48547     200610                    
    Here is what is probably a simple question for some of you; I cannot get this to work. I need to delete all the rows that are duplicates like row #2: rows with the same SZSLIFE_SGBSTDN_TERM_CODE_EFF but with no
    SZSLIFE_SLRRASG_BLDG_CODE and SZSLIFE_SLRRASG_ROOM_NUMBER.
    I need to write code that counts the SZSLIFE_SGBSTDN_TERM_CODE_EFF values and, where the same value occurs twice,
    deletes the row that has no SZSLIFE_SLRRASG_BLDG_CODE and SZSLIFE_SLRRASG_ROOM_NUMBER.
    The SZSLIFE_SLRRASG_BLDG_CODE must not be declared NOT NULL, because I also insert into this table and need to be able to insert null values.
    How can I use a simple query to delete all the duplicate records that have no bldg_code and room number?
    Here is the table description
    SZSLIFE_SPRIDEN_PIDM NUMBER(8)
    SZSLIFE_SPRIDEN_ID VARCHAR2(10)
    SZSLIFE_SPRIDEN_LAST_NAME VARCHAR2(60)
    SZSLIFE_SPRIDEN_FIRST_NAME VARCHAR2(60)
    SZSLIFE_SPRIDEN_MI VARCHAR2(15)
    SZSLIFE_SGBSTDN_TERM_CODE_EFF VARCHAR2(6)
    SZSLIFE_SGBSTDN_STST_CODE VARCHAR2(2)
    SZSLIFE_STVSTST_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_STYP_CODE VARCHAR2(2)
    SZSLIFE_STVSTYP_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_LEVL_CODE VARCHAR2(2)
    SZSLIFE_STVLEVL_DESC VARCHAR2(30)
    SZSLIFE_SGBSTDN_RESD_CODE VARCHAR2(10)
    SZSLIFE_STVRESD_DESC VARCHAR2(40)
    SZSLIFE_SLRRASG_BLDG_CODE VARCHAR2(10)
    SZSLIFE_SLRRASG_ROOM_NUMBER VARCHAR2(10)
    SZSLIFE_SLRRASG_BEGIN_DATE VARCHAR2(12)
    SZSLIFE_SLRRASG_END_DATE VARCHAR2(12)
    SLRRASG_ASCD_CODE VARCHAR2(2)
    SLRRASG_ROLL_IND VARCHAR2(2)
    I will appreciate any help!

    Thank you very much Sandeep, this works!
    DELETE SZSLIFE_TEMP2
    WHERE SZSLIFE_SGBSTDN_TERM_CODE_EFF IN
          (SELECT SZSLIFE_SGBSTDN_TERM_CODE_EFF
           FROM SZSLIFE_TEMP2
           GROUP BY SZSLIFE_SGBSTDN_TERM_CODE_EFF
           HAVING COUNT(*) > 1)
    AND SZSLIFE_SLRRASG_BLDG_CODE = ' '
    /
    4 rows deleted.
    The only thing here is that SZSLIFE_SLRRASG_BLDG_CODE is not actually NULL in those rows, so I cannot use
    where SZSLIFE_SLRRASG_BLDG_CODE is null
    Here is how those two columns are defined:
    SZSLIFE_SLRRASG_BLDG_CODE VARCHAR2(10)
    SZSLIFE_SLRRASG_ROOM_NUMBER VARCHAR2(10)
    So, my question is: will it be safe to use SZSLIFE_SLRRASG_BLDG_CODE = ' ' ?
    Again, it works; it deleted the rows that I wanted...
    Thank you very much!!!
    Rogelio
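    A hedged alternative, in case some of those rows hold NULL and others hold a blank space (table and column names are from the post; the grouping mirrors the accepted answer): TRIM(' ') evaluates to NULL, so TRIM(col) IS NULL matches both cases and the delete no longer depends on the column containing exactly one space.
    DELETE SZSLIFE_TEMP2 T
    WHERE  T.SZSLIFE_SGBSTDN_TERM_CODE_EFF IN
           (SELECT SZSLIFE_SGBSTDN_TERM_CODE_EFF
            FROM   SZSLIFE_TEMP2
            GROUP BY SZSLIFE_SGBSTDN_TERM_CODE_EFF
            HAVING COUNT(*) > 1)
    AND    TRIM(T.SZSLIFE_SLRRASG_BLDG_CODE) IS NULL;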

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation but I still cannot find a reasonable answer to the following question : How expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of Continuous query caching (does it scale?)
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have - you mention that this is a Web Application but you also mention an Extend client. Is your Web App an Extend client of the main cluster? Is there a reason why you did this? Most people would make a Web App a storage-disabled cluster member, as it would perform a bit better. Provided the Web App sits on a server that is very close in network terms to the cluster (i.e. the same switch), I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
    If you are running CQCs over Extend then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches so I would get 3.7.1.8 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data so you will need to cope with that if you are pushing changes based on the CQC.
    JK

  • Explain plans - caching question

    I am trying to get some explain plan information for some queries that we have running poorly. Our application uses some alter session statements to set the optimizer_mode to all_rows. I am trying to determine if we need to include the optimizer_index_cost_adj parameter and possibly the optimizer_index_caching parameter.
    Using SQL Developer I can easily run the different alter session statements needed and then get an explain plan for the query in question. However I am seeing some weird results - for example long query times for queries that have a relatively low cost. On the flip side I am seeing some queries that have a higher cost return quicker.
    I am doing this many times over and am wondering what impact using the same query could have. Is this query being cached in Oracle? To get an accurate explain plan would I need to clear a cache each time?
    Also, in the explain plan is the cost figure the most important? How is it that a query can have a low cost and take longer to complete than the same query with a higher cost and some different parameters?

    There is no fixed road map for what to look for in an explain plan, but some common observations should be made: full table scans caused by poor use of indexes, inappropriate hints forcing index use where a full table scan would be faster, unnecessary sorting, Cartesian joins, outer joins, and anti-joins.
    Finally, I would say that every database sits on different hardware with a different configuration, and these things can affect the Oracle optimizer.
    So, if my statistics are current, what should I be taking out of the explain plan? I always assumed cost was the main indicator of the performance of a query.
    As Justin already said, cost is not what should be considered when tuning a query; the access path is. And beyond the explain plan there are other tools, e.g. TKPROF, which shows the plan as well as other information that should be considered when tuning a query.
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm
    Khurram
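    A hedged sketch of that approach (the table, column, and bind names are placeholders, not from this thread): run the statement once with the GATHER_PLAN_STATISTICS hint, then pull the runtime plan from the cursor cache so the optimizer's estimated rows can be compared with the actual rows. That comparison is usually far more telling than the cost figure, and it sidesteps the caching worry because it shows what actually happened rather than a fresh estimate.
    SELECT /*+ GATHER_PLAN_STATISTICS */ *
    FROM   my_table
    WHERE  my_column = :b1;
    -- then, in the same session:
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));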

  • Simple query -- tuning

    Hi gurus,
    I have a very simple query
    Select * from emp
    where deptno = 10
    If this query is executed against 10,000 records it is a little slow, but when it is executed against 1,000,000 records it takes a very long time.
    The client is complaining about how long it takes. I really wonder how I can tune this query. Please help.
    Regards

    Hi guys,
    I really appreciate, from the bottom of my heart, everyone taking the trouble to answer my question. Actually, the question was asked in an interview; I don't know whether it is a real problem faced by the interviewer or his client.
    He asked me; I gave the query "select * from emp where deptno = 10", and there is already an index on deptno. When it was tested on a very large database of about 1,000,000 rows, the client asked me to tune the query - how can I achieve that?
    Like some of you, I tried giving different answers, but he wasn't satisfied, so I thought of asking here so that I can get some different answers.
    One of the gurus asked whether these are the same EMP and DEPT tables we normally use (the demo tables). Yes, they are the same tables.
    Any suggestions, please?
    Regards
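    A hedged sketch of the usual first checks for a query like this (emp and deptno come from the question; the index name is a placeholder): make sure an index exists on the filter column, that statistics are current, and then see what plan the optimizer actually chooses. Whether the index helps at all depends on how selective deptno = 10 is; if it matches a large share of the 1,000,000 rows, a full table scan is the correct plan and "select *" simply has a lot of data to return.
    CREATE INDEX emp_deptno_idx ON emp (deptno);   -- only if no such index exists yet
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP', cascade => TRUE)
    EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);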

  • Reducing Database Call Techniques...query caching the only way?

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    For instance, I have a module that handles all my gateways, sub-pages, sub-gateways, and so on. This will only change whenever a change is made to the page structure in the admin portion of the application. It's really not necessary to hit the database every time a page loads. Is this a good instance in which to use query caching? What are the pros, cons and alternatives? I thought an alternative might be to store the data in a session, but that doesn't sound ideal.
    Thanks!
    Paul

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    That sounds like a question from the certification exam. The answer is to store the data in session or application scope, depending on the circumstances. If the data depends on the user, then the answer is session. If the data persists from user to user, then it is application.
    admin portion of the application.
    Suggests users must log in. Otherwise you cannot distinguish admin from non-admin.
    This will only change whenever a change is made to the page structure in the admin portion of the application.
    Then I would go for storing the data in application scope, as the admin determines the value for everybody else. However, the session scope also has something to do with it. Since the changes are only going to occur in the admin portion, I would base everything on a variable, session.role.
    You cache the query by storing it directly in application scope within onApplicationStart in Application.cfc, like this (the datasource name here is just a placeholder):
    <cfquery name="application.myQueryName" datasource="yourDatasource">
        <!--- the SQL for the page-structure query goes here --->
    </cfquery>
    The best place for the following code is within onSessionStart in Application.cfc.
    <!--- It is assumed here that login has already occurred. Your code checks whether
    session.role is Admin. If so, make the changes. --->
    <cfif session.role is 'admin'>
    <!--- Make changes to the data in application.myQueryName, otherwise do nothing --->
    </cfif>
    Added edit: On second thought, the best place for setting the application variable is in onApplicationStart.

  • A Simpler, More Direct Question About Merge Joins

    This thread is related to Merge Joins Should Be Faster and Merge Join but asks a simpler, more direct question:
    Why does merge sort join choose to sort data that is already sorted? Here are some Explain query plans to illustrate my point.
    SQL> EXPLAIN PLAN FOR
      2  SELECT * FROM spoTriples ORDER BY s;
    PLAN_TABLE_OUTPUT
    |   0 | SELECT STATEMENT |              |   998K|    35M|  5311   (1)| 00:01:04|
    |   1 |  INDEX FULL SCAN | PKSPOTRIPLES |   998K|    35M|  5311   (1)| 00:01:04|
    ---------------------------------------------------------------------------------
    Notice that the plan does not involve a SORT operation. This is because spoTriples is an Index-Organized Table on the primary key index of (s,p,o), which contains all of the columns in the table. This means the table is already sorted on s, which is the column in the ORDER BY clause. The optimizer is taking advantage of the fact that the table is already sorted, which it should.
    Now look at this plan:
    SQL> EXPLAIN PLAN FOR
      2  SELECT /*+ USE_MERGE(t1 t2) */ t1.s, t2.s
      3  FROM spoTriples t1, spoTriples t2
      4  WHERE t1.s = t2.s;
    Explained.
    PLAN_TABLE_OUTPUT
    |   0 | SELECT STATEMENT       |              |    11M|   297M|       | 13019 (6)| 00:02:37 |
    |   1 |  MERGE JOIN            |              |    11M|   297M|       | 13019 (6)| 00:02:37 |
    |   2 |   SORT JOIN            |              |   998K|    12M|    38M|  6389 (4)| 00:01:17 |
    |   3 |    INDEX FAST FULL SCAN| PKSPOTRIPLES |   998K|    12M|       |  1460 (3)| 00:00:18 |
    |*  4 |   SORT JOIN            |              |   998K|    12M|    38M|  6389 (4)| 00:01:17 |
    |   5 |    INDEX FAST FULL SCAN| PKSPOTRIPLES |   998K|    12M|       |  1460 (3)| 00:00:18 |
    Predicate Information (identified by operation id):
       4 - access("T1"."S"="T2"."S")
           filter("T1"."S"="T2"."S")I'm doing a self join on the column by which the table is sorted. I'm using a hint to force a merge join, but despite the data already being sorted, the optimizer insists on sorting each instance of spoTriples before doing the merge join. The sort should be unnecessary for the same reason that it is unnecessary in the case with the ORDER BY above.
    Is there anyway to make Oracle be aware of and take advantage of the fact that it doesn't have to sort this data before merge joining it?
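    One hedged observation to verify rather than a confirmed fix: an INDEX FAST FULL SCAN reads the index blocks in physical order, not key order, so its output carries no ordering guarantee - which is one reason a SORT JOIN still appears on each input. Forcing ordinary (ordered) index full scans may let the merge join consume one or both inputs without a real sort, but whether the SORT JOIN steps disappear depends on version and statistics, so check the resulting plan:
    EXPLAIN PLAN FOR
    SELECT /*+ USE_MERGE(t1 t2) INDEX(t1 PKSPOTRIPLES) INDEX(t2 PKSPOTRIPLES) */
           t1.s, t2.s
    FROM   spoTriples t1, spoTriples t2
    WHERE  t1.s = t2.s;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);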

    Licensing questions are best addressed by visiting the Oracle store, or contacting a salesrep in your area
    But I doubt you can redistribute the product if you aren't licensed yourself.
    Question 3 and 4 have obvious answers
    3: Even if you could this is illegal
    4: if tnsping is not included in the client, tnsping is not included in the client, and there will be no replacement.
    Tnsping only establishes whether a listener is running and shouldn't be called from an application
    Sybrand Bakker
    Senior Oracle DBA

  • SOS!! Simple yes/no question about JPA...

    Hello,
    I have the following environment and requirements:
    -Tomcat 5.5 (no ejb container)
    -Latest version of Hibernate
    -JSF 1.1
    -A requirement to use JPA
    -I must use the query cache and the second-level cache
    My question is as follows:
    What is the best solution?
    Solution 1.
    ONE EntityManagerFactory stored in the ServletContext for use by all of my web app users generating MULTIPLE INSTANCES of EntityManagers. (would this allow me to use the query cache?)
    Solution 2.
    ONE EntityManagerFactory and ONE EntityManager stored in the ServletContext for use by all of my web app users.
    Thanks in advance,
    Julien.

    Regarding caching, what exactly are you referring to by "query cache"? Are you saying you plan to execute the same query multiple times but you'd like the underlying persistence manager to avoid trips to the actual database? Whether the query is executed by an actual database access or is fulfilled through some JVM-local cache is not controlled by the spec. Most implementations do allow for such caching but the behavior is persistence-provider specific.
    Yes. I am actually using Hibernate behind the scenes as my persistence framework.
    I'd suggest looking at the presentation from last year's JavaOne called "Persistence In the Web Tier":
    http://developers.sun.com/learning/javaoneonline/2006/webtier/TS-1887.pdf
    I am going to have a look at that.
    Thanks again.
    Julien.

  • How to write a simple query.

    I have a table with the data shown below. Now, I want to write a simple query which lists each project and the count of the distinct effective dates for which data exists.
    Sample data:
    Project Task Effective Date (xx_proj_task_data)
    101 T1 01-Jan-2008
    101 T1 01-Feb-2008
    101 T1 01-Mar-2008
    101 T2 01-Jan-2008
    101 T2 01-Apr-2008
    101 T3 01-Apr-2008
    102 T1 01-Jan-2008
    102 T1 01-Feb-2008
    102 T2 01-Apr-2008
    103 T1 01-Jan-2008
    103 T1 01-Feb-2008
    103 T1 01-Mar-2008
    103 T1 01-Apr-2008
    103 T2 01-May-2008
    103 T3 01-Jun-2008
    103 T1 01-Jan-2008
    103 T1 01-Aug-2008
    103 T2 01-Apr-2008
    Output Reqd:
    Project Count(Distinct Effective Dates)
    101 4
    102 3
    103 7
    I can write a query that says:
    select project_id, count(1)
    from (select distinct project_id, effective_date
    from xx_proj_task_data) x
    group by project_id;
    But, is there a way I can achieve the same by avoiding the inner Query (x) and just by a simple query ?
    Thanks!

    Try below query:
    select project_id
    , count(distinct effective_date)
    from xx_proj_task_data
    group by project_id;
    --venkata                                                                                                                                                                                                                                                                                       

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW Query Cache (BW 3.5). Sometimes the
    problem arises that, after the queries have been pre-calculated using web templates, there are no figures available when running a web report, or when doing a drilldown or filter navigation. One way to solve the issue is to delete the cache for that query, but that solution is not reasonable or workable. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of SAP BW that we have to live with, or are there any solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try working without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries.
    You can see that in the RSRV transaction.
    Hope this helps

  • Simple Query in Oracle Linked Table in MS Access causes full table scan.

    I am running a very simple query in MS ACCESS to a linked Oracle table as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > MyDate()
    or
    Select *
    From EXPRESS_SERVICE_EVENTS --(the linked table name refers to EXPRESS.SERVICE_EVENTS)
    Where performed > [Forms]![MyForm]![Date1]
    We have over 50 machines and this query runs fine on over half of these, using an Oracle Index on the "performed" field. Running exactly the same thing on the other machines causes a full table scan, therefore ignoring the Index (all machines access the same Access DB).
    Strangely, if we write the query as follows:
    Select *
    From EXPRESS_SERVICE_EVENTS
    Where performed > #09/04/2009 08:00#
    it works fast everywhere!
    Any help on this 'phenomenon' would be appreciated.
    Things we've done:
    Checked regional settings, ODBC driver settings, and MS Access settings (as in Tools->Options); we have the latest XP and Office service packs, and we re-linked all Access tables on both the slow and fast machines independently.

    Primarily, thanks gdarling for your reply. This solved our problem.
    Just a small note to those who may be using this thread.
    Although this might not be the reason: my PC had Oracle 9iR2 installed with Administrative Tools, whereas user machines had the same thing installed but as a Runtime installation. For some reason, my PC did not have 'bind date' etc. as an option in the workarounds, but user machines did have this workaround option. Strangely, although I did not have the option, my (ODBC) query was running as expected, but user queries were not.
    When we set the workaround checkbox accordingly, the queries then run as expected (fast).
    Once again,
    Thanks
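    A hedged aside on an alternative sometimes used in this situation (not something mentioned in the thread): an Access pass-through query sends its SQL to Oracle verbatim, so the date comparison is evaluated entirely on the server and neither Jet's criteria translation nor the ODBC workaround settings come into play. The sketch assumes the underlying table is EXPRESS.SERVICE_EVENTS, as stated in the post, and the date format mask is only an example:
    SELECT *
    FROM   EXPRESS.SERVICE_EVENTS
    WHERE  performed > TO_DATE('09/04/2009 08:00', 'DD/MM/YYYY HH24:MI');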

  • Error in the simple Query

    Dear Experts,
    I am not able to execute this simple query:
    Select T1.JobID , T1.BudgetValue,T1.ActualValue FROM [dbo].[Enprise_JobCost_ActualBudgetView] T1 WHERE T1.TransType = '[%0]'
    Regards

    Hello,
    View - a view, in simple terms, is a virtual table. It can be used to retrieve data from tables, and to insert, update or delete rows in them. The results of using a view are not permanently stored in the database.
    Stored Procedure - a stored procedure is a group of SQL statements which can be stored in the database and shared over the network with different users.
    http://www.geekinterview.com/question_details/65914
    Better make a UDT for your requirement.
    Thanks
    Manvendra Singh Niranjan

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
    I built a BW query which displays key figures. Each key figure uses the decimal-place formatting from the key figure InfoObject (in Query Designer, the "Decimal places" property for each key figure is set to "[From Key Figure 0.00]").
    I decided to change the key figure InfoObject to have 0 decimal places (on the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the key figure InfoObject change).
    I tried to generate the report using RSRT, and deleted the query cache, but it still shows up with two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

    Hello Brendon,
    You have changed the key figure InfoObject to show 0 decimals (no decimals). That is fine, but in the query the key figure property is set to 2 decimals, so the data is displayed with 2 decimals. The query setting is local and has priority over the key figure InfoObject setting.
    If you look at the key figure properties in the query, there is an option along the lines of "from the key figure", meaning "whatever is defined by the key figure InfoObject". Select that, and from then on you will get only the number of decimals defined on the InfoObject.
    Thanks
    Tripple k
