SQL quick challenge

Hello All,
I have two tables:
Positions_tbl (Start_Position, End_Position)
X_Positions (X_Name, X_Position)
I need to find, for each (Start_Position, End_Position) pair, how many X_Position values lie between them.
Thank you!
Edited by: OMD on Feb 21, 2013 9:24 AM
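A sketch of one way to do it, assuming the table and column names above (the LEFT JOIN keeps ranges that contain no X_Position at all, reporting 0 for them):

```sql
SELECT p.Start_Position,
       p.End_Position,
       COUNT(x.X_Position) AS x_count   -- counts only matched rows; 0 when none
FROM   Positions_tbl p
       LEFT JOIN X_Positions x
         ON x.X_Position BETWEEN p.Start_Position AND p.End_Position
GROUP  BY p.Start_Position, p.End_Position;
```

Use an inner join instead if ranges without any X_Position should be dropped from the result.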

>
If you don't want to answer my question then keep your judgment to yourself. No need to post my info and then, like a smart person, say "Now I understand why". You definitely don't know why. I don't know who you are and you don't know me. So, just be respectful to the others.
>
Your forum history is public information that any registered user can see.
As a forum user you are expected to mark your questions ANSWERED when they have been. When you don't do that it wastes the time of people that start reading the thread to try to help.
When someone sees an unanswered thread and thinks they can help it is very frustrating to read the entire thread only to find out that it appears to have already been answered by one of the previous responders. The problem is that since you didn't mark the thread ANSWERED there is no way to tell if you are just being lazy or it really hasn't been answered and you need more help.
Then when you cop an attitude like you just did in this response it only reinforces that maybe you are just being lazy and selfish; asking for help from others but not willing to acknowledge the help when you get it. That will cause some people to just not want to try to help you at all. So in addition to potentially wasting the time of others you are also really hurting yourself.
So, if you really are a good steward of the forums and want to get help in the future I suggest that you revisit those previous threads and mark them ANSWERED if they have been.

Similar Messages

  • Good book for learning pl/sql quickly

    Please suggest a good book that covers all aspects of PL/SQL in depth.
    Thanks in advance. :-)

    A good book for learning PL/SQL quickly?
    That is not a function of the book. That is a function of your ability to learn and understand new concepts. No book can make you learn "faster".
    Also, even before trying to learn PL/SQL, there are concepts and fundamentals to understand. Specifically:
    - [url http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm]Oracle® Database Concepts
    - [url http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/toc.htm]Oracle® Database Application Developer's Guide - Fundamentals
    - [url http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/toc.htm]Oracle® Database PL/SQL User's Guide and Reference
    And after that, also have a look at Application Developer's Guide - Large Objects, Application Developer's Guide - Object-Relational Features. Not to mention the SQL Reference and the PL/SQL Packages and Types Reference guides.
    All available via http://tahiti.oracle.com

  • SQL Query (challenge)

    Hello,
    I have 2 tables of events E1 and E2
    E1: (Time, Event), E2: (Time, Event)
    Where the columns Time in both tables are ordered.
    Ex.
       E1: ((1, a) (2, b) (4, d) (6, c))
       E2: ((2, x) (3, y) (6, z))
    To find the events of both tables at the same time, the obvious approach is a join between E1 and E2:
    Q1 -> select e1.Time, e1.Event, e2.Event from E1 e1, E2 e2 where e1.Time=e2.Time;
    The result of the query is:
    ((2, b, x) (6, c, z))
    Given that there are no indexes on these tables, an efficient execution plan can be a hash join (under conditions mentioned in Oracle Database Performance Tuning Guide Ch 14).
    Now, the hash join suffers from a locality problem if the hash table is large and does not fit in memory; it may happen that one block of data is read into memory and swapped out frequently.
    Given that the Time columns are sorted in ascending order, I find the following algorithm, a known idea in the literature (essentially a merge join), appropriate to this problem. The algorithm is in pseudocode close to PL/SQL, for simplicity (I hope it is still clear):
    -- start algorithm
    open cursors for e1 and e2
    loop
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         exit when notfound
         fetch next e2 record
          exit when notfound
      else
         if e1.Time < e2.Time then
            fetch next e1 record
            exit when notfound
         else
            fetch next e2 record
            exit when notfound
         end if;
      end if;
    end loop
    -- end algorithm
    As you can see the algorithm does not suffer from locality issue since it iterates sequentially over the arrays.
    Now the problem: the algorithm shown above suggests using a pipelined function to implement it in PL/SQL, but that is slow compared to the hash join in the implicit cursor of the query shown above (Q1).
    Is there a plain SQL query that implements this algorithm? The objective is to beat the hash join of query (Q1), so queries that use sorting are not accepted.
    A difficulty I found is that explicit cursors are much slower than implicit ones (SQL queries).
    Example: for a large table (2.5 million records)
    create table mytable (x number);
    declare
      type t_num_tab is table of number;
      l_data t_num_tab;
      c sys_refcursor;
    begin
      open c for 'select 1 from mytable';
      fetch c bulk collect into l_data;
      close c;
      dbms_output.put_line('count = '||l_data.count);
    end;
    is 5 times slower than
    select count(*) from mytable;
    I do not understand why this should be the case. I read that it may be because PL/SQL is interpreted, but I think that does not explain the whole issue. Maybe it is because the fetch copies data from the SQL engine's workspace into the PL/SQL session's memory, and this takes a long time.

    Hi
    A correction in the algorithm:
    -- start algorithm
    open cursors for e1 and e2
    fetch next e1 record
    fetch next e2 record
    loop
      exit when e1%notfound
      exit when e2%notfound
      if e1.Time = e2.Time then
         pipe row (e1.Time, e1.Event, e2.Event);
         fetch next e1 record
         fetch next e2 record
      else
         if e1.Time < e2.Time then
            fetch next e1 record
         else
            fetch next e2 record
         end if;
      end if;
    end loop
    -- end algorithm
    Best regards
    Taoufik
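    For reference, a sketch of how the corrected pseudocode could be realized as a PL/SQL pipelined table function. The object/collection types (t_ev_row, t_ev_tab), the function name, and the column name time_k (standing in for Time, which clashes with an Oracle keyword) are all assumptions for illustration, and this is not claimed to beat the hash join:

    ```sql
    create type t_ev_row as object (time_k number, ev1 varchar2(30), ev2 varchar2(30));
    /
    create type t_ev_tab as table of t_ev_row;
    /
    create or replace function merge_events return t_ev_tab pipelined is
      -- ORDER BY kept for safety; drop it if the data is guaranteed sorted
      cursor c1 is select time_k, event from e1 order by time_k;
      cursor c2 is select time_k, event from e2 order by time_k;
      r1 c1%rowtype;
      r2 c2%rowtype;
    begin
      open c1;
      open c2;
      fetch c1 into r1;
      fetch c2 into r2;
      while c1%found and c2%found loop
        if r1.time_k = r2.time_k then
          pipe row (t_ev_row(r1.time_k, r1.event, r2.event));
          fetch c1 into r1;
          fetch c2 into r2;
        elsif r1.time_k < r2.time_k then
          fetch c1 into r1;
        else
          fetch c2 into r2;
        end if;
      end loop;
      close c1;
      close c2;
      return;
    end merge_events;
    /
    ```

    It would then be queried as `select * from table(merge_events());`.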

  • SQL *Loader (Challenging)

    Hi all,
    I am using SQL*Loader to load data into my database. For one field I want the data split into two lines, i.e.
    abc cde into
    abc
    cde
    What could I add between 'abc cde' to produce my expected result?
    I tried 'abc chr(10) cde' but it failed.
    Thank you for your kind attention!
    Regards,
    David
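    One approach that may work (a sketch; the table and column names are made up, and a comma-delimited file is assumed): SQL*Loader accepts a SQL expression string after a field definition, so the space can be replaced with a newline as each row is loaded:

    ```sql
    LOAD DATA
    INFILE 'test.dat'
    INTO TABLE mytab
    FIELDS TERMINATED BY ',' TRAILING NULLCOLS
    (
      -- the :col1 bind refers to the field value as read from the file
      col1 CHAR(4000) "REPLACE(:col1, ' ', CHR(10))",
      col2
    )
    ```

    The expression runs per row during the load, so no post-load UPDATE is needed.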

    Hi,
    But what is the row terminator, and what does your flat file (test.unl) look like? The column terminator is OK and in your case it is ','.
    See one of my control files, and I will try to clarify what I mean:
    load data
    infile 'BANKDT.dat' "str '<er>'"
    into table BANKDT
    fields terminated by '<ec>' trailing nullcols
    (BDT ,
    BLF ,
    BW ,
    AMT )
    Please notice that the column terminator is <ec> and the row terminator used is <er>, and my flat file, i.e. BANKDT.DAT, looks like
    ....<ec>....<ec>...<ec>....<er>....<ec>.......<ec>....<ec>...<ec>....<er>
    Where ... represent the field value.
    This way I hope it is clear to you and you are able to get it through.
    Regards,
    Kamesh Rastogi

  • Munky's SQL performance challenge [APEX Chart]

    Hi,
    I have a chart currently in use but it takes around 45 seconds to load (or worse). One of the tables (corcon01_hour_chg_totals) has almost 20 million rows. Currently I'm running them as separate queries, but I assume they could be combined into a single query with multiple values (?). This means that I have 4 queries, all relating to the same x-axis values but different y-axis values (1 uses the left axis, the other 3 use the right). Perhaps there is a way to make use of the 2 previous parts (15 and 16) rather than running a new query to compute the total of the 2? Thanks for any help. Here's my current code:
    Series 1 using left y-axis: http://www.pastebin.ca/1585574
    Series 2 using right axis -- cpu usage of one node: http://www.pastebin.ca/1585575
    Series 3 using right axis -- cpu usage of other node: http://www.pastebin.ca/1585577
    Series 4 using right axis -- total cpu usage of both nodes: http://www.pastebin.ca/1585578
    The following indexes are specified:
    corcon01_hour_chg_totals indexes: http://i36.tinypic.com/296die0.png
    corcon01_db_perform indexes: http://i34.tinypic.com/nff8g2.png
    The function based ones relate to the part of the query used, e.g. IDX_ACTUAL_WEEK's expression is: TO_CHAR(TO_DATE(RPAD("TD_HR",10),'yyyy-mm-dd'),'iw')
    Let's see some magic :)
    Mike

    Mike
    Be a happy boy!
    If you want SQL tuning advice, then throw it on the PL/SQL and SQL forum and I'm sure we can help you - Mr Kulash, Alex, Karthick, BluShadow and especially Boneist shall be your friend. Just be sure that you have read about SQL tuning and can provide the DDL (I mean the creation scripts, not just DESCRIBE output - if you don't have SQL developer or Toad or wotnot then look up DBMS_METADATA.GET_DDL) for your tables and indexes and the execution plan of the queries, a 10046 trace file might be handy too (google it, 10053 could be required later if necessary).
    You've had expert input from 2 Aces, neither of whom I would be comfortable in a debate with, I'm just a SQL monkey (just cause I'm not happy with a 10 sec load does not mean that you should not be). So you've already had good advice from the best - you can't beat that sir!
    Oh, and you have a select max in one of your select clauses that doesn't reference owt in the actual query (the 'actual count' bit). It's always gonna be 1 row with one result given the nature of the sub-query so you might want to WITH blah that, materialize (just to test) and then select from the sub-factored temporary table... Don't forget to always run twice to test so that you can disregard parsing time as the execution plan is sitting pretty in the shared pool by the second time...
    Cheers
    Ben
    Edited by: Munky on Sep 30, 2009 10:26 PM

  • Need help in a SQL quick suggestion will highly appreciated

    Hi
    I need to dynamically generate the following query
    ALTER TABLE AAA.TAB1 ADD SUPPLEMENTAL LOG GROUP t_l_g (COL1,COL2,COL3) ALWAYS;
    I have about 30 tables across 100 clients, and I need to generate this query for all the columns that are in a unique index, within a list of tables and within a list of users. The issue that I am facing is that the columns are coming back as rows. Any help will be highly appreciated.

    You did not post your query so here is a general approach:
    Write the query that returns the columns and pivot it to give you a concatenated list then add in the rest of the statement which is a constant except for the table_name which could come from a query.
    There have been numerous posts in the past on pivoting rows into columns. You should be able to find some via a search of the archives. Version 11g even comes with a new command to pivot data.
    HTH -- Mark D Powell --
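    Following Mark's suggestion, a sketch using LISTAGG (11g+) to pivot the unique-index columns into a comma-separated list; the owner and table filters are placeholders to replace with your own lists:

    ```sql
    SELECT 'ALTER TABLE ' || ic.table_owner || '.' || ic.table_name ||
           ' ADD SUPPLEMENTAL LOG GROUP t_l_g (' ||
           LISTAGG(ic.column_name, ',')
             WITHIN GROUP (ORDER BY ic.column_position) ||
           ') ALWAYS;' AS ddl_stmt
    FROM   all_ind_columns ic
           JOIN all_indexes i
             ON  i.owner      = ic.index_owner
             AND i.index_name = ic.index_name
    WHERE  i.uniqueness = 'UNIQUE'
    AND    ic.table_owner IN ('AAA')            -- your list of users
    AND    ic.table_name  IN ('TAB1', 'TAB2')   -- your list of tables
    GROUP  BY ic.table_owner, ic.table_name, ic.index_name;
    ```

    Note that a table with several unique indexes yields several statements, and the log group name (t_l_g here) must be unique per table, so you may want to derive it from the index name.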

  • ACCESS SQL Statement Challenge!

    Hi All,
    Does anyone have a clue how to write this query in Oracle?
    UPDATE Counties INNER JOIN (Table2 INNER JOIN Table1 ON Table2.HABID = Table1.HABID) ON Counties.CountyName = Table2.County SET Table1.County = counties.countyID;
    It works in Access but I would like to learn how to write it in Oracle.
    Table and query explanation is long, but let me know if you need it. Suffice to say it works in Access and it works in Oracle if I use linked tables in Access.
    Thanks
    John

    or
    UPDATE Table1 t1
    SET t1.County = (
    select counties.countyID
    from counties inner join table2
    on Counties.CountyName = Table2.County
    where Table2.HABID = t1.HABID
    );
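    An equivalent MERGE form of the same update (a sketch over the tables already shown; it assumes the join resolves each HABID to at most one countyID):

    ```sql
    MERGE INTO table1 t1
    USING (SELECT t2.habid, c.countyid
           FROM   counties c
                  JOIN table2 t2
                    ON c.countyname = t2.county) s
    ON (t1.habid = s.habid)
    WHEN MATCHED THEN
      UPDATE SET t1.county = s.countyid;
    ```

    MERGE only touches rows that actually have a match, whereas the correlated-subquery UPDATE above sets County to NULL for unmatched rows unless you add a WHERE EXISTS.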

  • Sql certification

    Hi All,
    I would like to get certified in SQL, so friends, I want you to help me out with it.
    Can anyone help me get free materials for it?
    exam: 1Z0-047 Oracle Database SQL Expert Exam
    books: Oracle Database 10g: SQL Fundamentals I
    AND
    Oracle Database 10g: SQL Fundamentals II
    AND
    Oracle Database 11g: Introduction to SQL
    OR
    Oracle Database 10g: Introduction to SQL
    I checked for these on all kinds of ebook sites, but no use. So please help me out.

    Hello,
    You can't find better than this for free, this is all for 10g and you can find similar books (oracle documentation) for 11g.
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b31695/toc.htm
    http://asktom.oracle.com (search for your specific questions)
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14195/toc.htm (Sql Quick Reference)
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14356/toc.htm
    http://forums.oracle.com/forums (always free)
    You can get hold of this from some public library http://www.amazon.com/Oracle-Database-Expert-Guide-1Z0-047/dp/0071614214/ref=pd_bbs_1?ie=UTF8&s=books&qid=1231991713&sr=8-1
    You might be already doing this and if not, download relevant oracle software version and sql developer and start playing.
    Regards
    Edited by: OrionNet on Jan 14, 2009 10:55 PM

  • How to  write file excel format xlsx/xlsb to pl/sql?

    Dear supporter,
    I built an XML report that outputs an Excel file. However, the report has about 30 columns and around 200,000 rows, so writing and loading the file is slow and the file size is large.
    How can I write data directly in xlsb/xlsx format from PL/SQL, quickly and efficiently?
    Please, help me!
    Tks,
    Mr.T

    Check this thread.
    Re: 5. How do I read or write an Excel file?

  • Pupbld.sql

    Hi,
    when I tried to connect as a user I got the following error. What exactly is happening here? I tried to run pupbld.sql as SYSTEM but to no avail.
    And where does pupbld.sql reside? I found it is not there in $ORACLE_HOME/sqlpus/admin.
    SQL> conn u1/u1
    Error accessing PRODUCT_USER_PROFILE
    Warning: Product user profile information not loaded!
    You may need to run PUPBLD.SQL as SYSTEM
    Connected.
    SQL>
    A quick response will be highly appreciated.
    Edited by: sr_d on May 18, 2009 3:39 AM

    and where pupbld.sql reside. i found it is not there in $ORACLE_HOME/admin/demo
    It's under $OH/sqlplus/admin.
    i tried to run the pupbld.sql as system but no use.
    If you could not find it, what have you run as SYSTEM then?
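    For reference, a minimal sketch of running the script once you locate it (in SQL*Plus the ? shorthand expands to $ORACLE_HOME):

    ```sql
    -- connect as SYSTEM, then:
    @?/sqlplus/admin/pupbld.sql
    ```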

  • 4 signs to know that your lead scoring program is working

    Getting the science of how your buyers engage with you is key for making revenue generation more predictable. We have worked with many sales and marketing organizations who look to putting in a lead scoring system to better qualify which leads should go to sales, validate that online campaigns are influencing MQL production and to make sales people more efficient. Regardless of whether you are testing the scoring methodology within marketing or have a system that is in place, I wanted to share 4 tips to highlight how you can determine if your current system is working before you make any changes.
    Eloqua RPIs are moving in the right direction. Value, Reach, Conversion, Velocity, and Return are Eloqua's revenue performance indicators, and there is a connection between the output of your scoring system and each indicator. Ensuring that you have a process for scoring leads in your marketing database (Reach) and monitoring the speed (Velocity) at which they move through the funnel (Conversion) will give you a sense of how well your system is performing and its impact on revenue generation. Value and Return will become clear once you have seen enough scored contacts flow through buying cycles to become customers.
    Your top ranked leads outperform the other categories. Measuring fit and engagement are among the best practices for building a sound lead scoring system. Fit includes all of the firmographic pieces of information that your buyer gives you or that can be appended via Eloqua's AppCloud. Engagement speaks to Digital Body Language, or their online buying behavior. You should look at a conversion matrix from A1s down to D4s and at the % of those MQLs that become accepted (Lead Acceptance Rate by Lead Score report via CRM). You should also look at what happens when your target buyer scores highly and then becomes part of an opportunity: are there other patterns worth noting, e.g. opportunity close rates for A leads are x times faster than for any other score? Having your own benchmark for what happens when you get an A1 lead will build confidence in your scoring model and drive the right behaviour.
    Lead Acceptance rates rise. A good system will invoke sales' trust in marketing and the leads they generate. McAfee, the 2011 Markie Winner in our Clean House category, sought out Eloqua's AppCloud tools for data.com and Demandbase when they saw a dual decline in lead scores and fewer leads going over to CRM over time. Missing data impacts routing, lead scoring, segmentation, reporting, and trust with your sales team. McAfee addressed the problem and saw an overall WW increase in lead acceptance and a 25% increase in acceptance in one of their global regions. On the behavior side, to ensure that you have the right patterns, you should look at any closed deals that were touched by marketing, look for the campaigns that influenced the buyers (first campaign, last campaign before opp creation), and chat with sales about any offline tactics that can be looped into your scoring model. Either way, if sales has trust in the data, both from a fit and an engagement perspective, they will work the leads and any hesitation to accept should go away.
    Average / Total Opportunity Value rises. Being able to model your buyer's actions into a science may give your sales team an opportunity to cross sell or deepen that initial conversation. 2011 Markie Winner for Integration Innovation, Progress Software experienced a drop in lead creation, but a rise in total opportunity creation and closed won revenue by leveraging Eloqua's AppCloud to pull in buyer attributes to support scoring. If you can instill confidence in your buyer early on, you are more likely to get their trust and be able to create a dialogue that leads to a superior buying experience and more revenue throughout that relationship.
    Let me know your thoughts and your experiences with measuring the impact of your lead scoring system. I hope that you continue to look to Topliners for inspiration and to take your usage of marketing automation to the next level!
    -Chang

    Nice post Adrian.
    I think another great indicator is that your SAL to SQL rate increases as well. Ultimately, sales wants opportunities from Marketing, not just qualified leads. Another great indicator is time spent in the Qualify stage. Looking at a pipeline aging report, if your lead-scored SALs are moving from SAL to SQL quicker than other channels, you know your sales team is having an easy time qualifying these leads, an indicator that your lead scoring program is doing a good job of finding the right person at the right time.

  • Things that are very slow in 11g

    I am finding 11g Forms much slower than I remember v10 Forms being, and of course much, much slower than 6i client/server Forms. So far I have found one thing in particular that takes an excessive amount of time, IMHO:
    find_item
    I have a loop that goes through every item on a block that has a lot of items, and if you do a find_item in there it is very, very slow.
    I'd appreciate it if other people shared things they found that were excessively slow. I know this is not the only one.

    Since most of the tables and indexes are either partitioned or subpartitioned, the above SQL quickly gives us a sense that there are many segments in the system.
    If you are interested, here is the breakdown. Anyway, the problem is that we have a lot of tables which have many partitions/subpartitions, with only 1 index or even no index on them. The truncate table or drop table operation is very, very slow :-(
    select segment_type, count(*) segment_count, sum(extents) extent_count
    from dba_segments
    group by segment_type
    order by 2 desc;
    SEGMENT_TYPE        SEGMENT_COUNT  EXTENT_COUNT
    TABLE SUBPARTITION        1506441      34429790   -- this is a lot, isn't it?
    TABLE PARTITION           1000799      41428518   -- so is this
    INDEX PARTITION            104062       1440000
    INDEX SUBPARTITION          74342       1499875
    TABLE                       11160       4885061
    INDEX                        6249       2023874
    TYPE2 UNDO                   5647        115525
    ...

  • Alphanumeric counter

    Hi,
    I need to write a query / function for an alphanumeric counter that would work as below:
    A000
    A001
    A002
    A003
    A004
    A005
    A006
    A007
    A008
    A009
    A00A
    A00B
    A00C
    A00Y
    A00Z
    A010
    A011
    A019
    A01A
    A01B
    This would mean that the next value after Z999 would be Z99A
    Is it possible to get this in a single query?
    Regards

    David_Aldridge wrote:
    > Too often these forums looks like a SQL obfuscation challenge.
    Well, not sure about Karthick's solution, but I thought mine was fairly straightforward. It could be simplified to...
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select rownum rn from dual connect by rownum <= 2000)
      2      ,x as (select '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' as str from dual)
      3  --
      4  -- end of test data
      5  --
      6  select t.rn
      7        ,substr(str,trunc(mod(rn+466559,1679616)/46656)+1,1)||
      8         substr(str,trunc(mod(rn+466559,46656)/1296)+1,1)||
      9         substr(str,trunc(mod(rn+466559,1296)/36)+1,1)||
    10         substr(str,mod(rn+466559,36)+1,1)
    11* from t, x
    SQL> /
            RN SUBS
             1 A000
             2 A001
             3 A002
             4 A003
             5 A004
             6 A005
             7 A006
             8 A007
             9 A008
            10 A009
            11 A00A
            12 A00B
            13 A00C
            14 A00D
            15 A00E
            16 A00F
    ...but it seemed more readable to me to include all the "36*36..." so that it was clear where the values came from.
    Edit: Or even this?
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select rownum rn from dual connect by rownum <= 2000)
      2      ,x as (select '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' as str from dual)
      3  --
      4  -- end of test data
      5  --
      6  select t.rn
      7        ,substr(str,trunc(mod(rn+(10*power(36,3))-1,power(36,4))/power(36,3))+1,1)||
      8         substr(str,trunc(mod(rn+(10*power(36,3))-1,power(36,3))/power(36,2))+1,1)||
      9         substr(str,trunc(mod(rn+(10*power(36,3))-1,power(36,2))/36)+1,1)||
    10         substr(str,mod(rn+(10*power(36,3))-1,36)+1,1)
    11* from t, x
    SQL> /
            RN SUBS
             1 A000
             2 A001
             3 A002
             4 A003
             5 A004
             6 A005
             7 A006
             8 A007
             9 A008
            10 A009
            11 A00A
            12 A00B
            13 A00C
            14 A00D
            15 A00E
            16 A00F
            17 A00G
            18 A00H
            19 A00I
            20 A00J
            21 A00K
            22 A00L
            23 A00M
            24 A00N

  • Ye olde "delete archivelogs in standby" question

    11.2.0.2.0
    According to note
    Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]
    Oracle should remove archive logs if we have the deletion policy set to "applied on standby", which I have set on primary and standby. Monitoring the alert log we see
    "Deleted Oracle managed file /path/to/an/archive/log"
    so we think cool, it's removing them automatically. The question is, when does it delete them? Right now my standby is in sync, yet my alert log only sporadically shows logs being deleted that were applied hours ago, which I can visibly verify.
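    For reference, the deletion policy the note refers to is configured in RMAN roughly like this (a sketch; typically run on both primary and standby):

    ```sql
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    ```

    With this policy, logs become eligible for deletion once applied on the standby, but actual deletion still depends on space pressure in the FRA, which is what the experiment below explores.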

    Hemant K Chitale wrote:
    > Oracle automatically deletes archivelogs only when it needs to clear space in the FRA (db_recovery_file_dest).
    I had thought this, so I tested.
    SQL> SELECT
      2  substr(name, 1, 30) name,
      3  space_limit/(1073741824) AS Quota_GB,
      4  space_used/(1073741824) AS Used_GB,
      5  space_reclaimable/(1073741824) AS Reclaimable_GB,
      6  number_of_files AS files
      7  FROM
      8  v$recovery_file_dest ;
    NAME                             QUOTA_GB    USED_GB RECLAIMABLE_GB      FILES
    /u00/oracle/flash_recovery_are        310 19.7361012     18.1420259         46
    -- bring db_recovery_file_dest_size down to 20 GB so we know we're over 90%. According to that first note I posted, the FRA deems anything over 85% as space pressure
    SQL>  alter system set db_recovery_file_dest_size=20g scope=both;
    System altered.
    SQL> SELECT
      2   substr(name, 1, 30) name,
      3   space_limit/(1073741824) AS Quota_GB,
      4   space_used/(1073741824) AS Used_GB,
      5   space_reclaimable/(1073741824) AS Reclaimable_GB,
      6   number_of_files AS files
      7   FROM
      8   v$recovery_file_dest ;
    NAME                             QUOTA_GB    USED_GB RECLAIMABLE_GB      FILES
    /u00/oracle/flash_recovery_are         20 19.7437358     18.1420259         47
    SQL>
    So I waited a good ten minutes but still no extra logs were cleared out.
    So I switched the logs a couple of times in production as well to see if that would help, and we did get some deleted then, bringing the space used down to 17.5 GB, which is just below the 85% mark.
    SQL> SELECT
      2   substr(name, 1, 30) name,
      3   space_limit/(1073741824) AS Quota_GB,
      4   space_used/(1073741824) AS Used_GB,
      5   space_reclaimable/(1073741824) AS Reclaimable_GB,
      6   number_of_files AS files
      7   FROM
      8   v$recovery_file_dest ;
    NAME                             QUOTA_GB    USED_GB RECLAIMABLE_GB      FILES
    /u00/oracle/flash_recovery_are         20 17.5075302     15.6462598         47
    SQL>
    Quick experiment: dropped it to 15 GB, which is below the 17 GB currently used. And yes, immediately I can see the files being deleted.
    So what I take from this is: if there is still some free space within the 15% headroom, Oracle waits to be woken up by the receipt of a log from the primary, sees that there is less than 15% free, and deletes logs to get back below that limit. However, if the space used exceeds the limit, Oracle wakes up by itself and deletes the logs immediately. Cool.
    Thanks hemant.
    Edited to fix coding

  • Why does SQL bring back all values in a single condition quicker?

    Hi All,
    I've stumbled across an interesting yet frustrating problem with one of our Discoverer Reports. The problem unfortunately is related to performance, which can be a never-ending struggle, as we all know.
    Scenario:
    We have a report that contains multiple outer joins that retrieves information from other sub ledgers. The type of report is tabular and it has a single page item called period name. A parameter has been setup on the period name column, which allows users to specify which period they want to report on. This is a real time report transaction report.
    Problem:
    The report takes varying times to run depending on the number of periods selected via the parameters. Timings are shown below:
    All Periods                   - 2 min 22 sec  (36 periods selected, 7 have data)
    Apr                           - 0 min 43 sec  (1 period)
    Apr + May                     - 1 min 06 sec  (2 periods)
    Apr + May + Jun               - 1 min 36 sec  (3 periods)
    Apr + May + Jun + Jul         - 2 min 29 sec  (4 periods)
    Apr + May + Jun + Jul + Aug   - 3 min 15 sec  (5 periods)
    Question:
    I'm finding it very hard to understand why the time taken to run the report for 5 periods IS NOT quicker than running it for all periods, given that the condition/where clause has been refined.
    I can confirm that the Period column is indexed. I’m not going to post the execution plans, as each are 117 rows long. I’m rather hoping some of you might have experienced something similar, and therefore able to point me in the right direction.
    Thanks,
    Lance

    Hi Rod,
    Looking at the SQL generated in Discoverer this is what I get for All Periods:
    Where ‘…….’
    AND (o227124.i227127 IN ('MAR-07','FEB-07','JAN-07','DEC-06','NOV-06','OCT-06','SEP-06','AUG-06','JUL-06','JUN-06','MAY-06','APR-06','MAR-06','FEB-06','JAN-06','DEC-05','NOV-05','OCT-05','SEP-05','AUG-05','JUL-05','JUN-05','MAY-05','APR-05','MAR-05','FEB-05','JAN-05','DEC-04','NOV-04','OCT-04','SEP-04','AUG-04','JUL-04','JUN-04','MAY-04','APR-04','MAR-04','FEB-04','JAN-04','DEC-03','NOV-03','OCT-03','SEP-03','AUG-03','JUL-03','JUN-03','MAY-03','APR-03','MAR-03','FEB-03','JAN-03','DEC-02','NOV-02','OCT-02','SEP-02','AUG-02','JUL-02','JUN-02','MAY-02','APR-02','MAR-02'))
    Statistics For All Periods
    351 recursive calls
    0 db block gets
    32301 consistent gets
    61 physical reads
    0 redo size
    500 bytes sent via SQL*Net to client
    13147 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    6 sorts (memory)
    0 sorts (disk)
    0 rows processed
    And this is what I get for 6 Periods
    AND (o227124.i227127 IN ('AUG-06','SEP-06','JUL-06','JUN-06','MAY-06','APR-06'))
    Statistics For 6 Periods
    351 recursive calls
    0 db block gets
    32301 consistent gets
    7 physical reads
    0 redo size
    509 bytes sent via SQL*Net to client
    11736 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    6 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Looking at the statistics, the 'All Periods' query should take longer than the 6-period one; however, this is not the case.
    Thanks,
    Lance
    Message was edited by:
    Lance Botha
