Why does a simple script take time to run?

Hello,
I run this simple query:
select consumer_number from receipts where consumer_number not in (select consumer_number from consumer_master);
Here receipts and consumer_master are tables, and both have a consumer_number column.
A few days ago I deleted matching records from both receipts and consumer_master; since then the query takes a long time.
I checked the indexes on both tables and they look OK.
Please help me.

790948 wrote:
RCI5142 @ msedcl> set autotrace TRACEONLY EXPLAIN
RCI5142 @ msedcl> select consumer_number from receipts where consumer_number not in (select consumer_number from consumer_master);
Execution Plan
0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2903788 Card=181265 Bytes=2175180)
1   0    FILTER
2   1      TABLE ACCESS (FULL) OF 'RECEIPTS' (TABLE) (Cost=1153 Card=181272 Bytes=2175264)
3   1      INDEX (FAST FULL SCAN) OF 'PK_CON_NUMBER' (INDEX (UNIQUE)) (Cost=20 Card=1 Bytes=12)
Now what do I have to do?

You're doing a fast full scan on the index, which is probably better than a full table scan. Card=1 means the optimizer expects the subquery to find only one row - are your statistics current?
Try variations on the query - different ways of coding it to get the same result set. The first thing I think of is a correlated NOT EXISTS subquery (untested)
select consumer_number
  from receipts r
 where not exists (
        select 0
          from consumer_master cm
         where cm.consumer_number = r.consumer_number
       );
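Another equivalent rewrite (also untested) is a set difference. Two caveats: MINUS removes duplicate consumer_numbers from the output, and NOT IN behaves differently if consumer_master.consumer_number can ever be NULL (a NULL in the subquery makes NOT IN return no rows at all), so check that column's constraints first.

```sql
-- Untested sketch: assumes consumer_number is NOT NULL in both tables
-- and that duplicate receipt rows are not needed in the output.
select consumer_number
  from receipts
minus
select consumer_number
  from consumer_master;
```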

Similar Messages

  • Simple query takes time to run

    Hi,
I have a simple query which takes about 20 mins to run. Here is the TKPROF for it:
      SELECT
        SY2.QBAC0,
        sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
      FROM
        JDE.F5542SY2  SY2,
        JDE.F42119  SALES_ORDER,
        JDE.F0116  SHIP_TO,
        JDE.F5542SY1  SY1,
       JDE.F4101  PRODUCT_INFO
    WHERE
        ( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN  )
        AND  ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD  )
        AND  ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM  )
        AND  ( SY2.QBSHAN=SALES_ORDER.SDSHAN  )
        AND  ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ')  )
        AND  ( PRODUCT_INFO.IMSRP1 Not In ('   ','000','689')  )
        AND  ( SALES_ORDER.SDDCTO IN  ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR')  )
    AND  (
    ( SY1.QACTR=SHIP_TO.ALCTR  )
    AND  ( PRODUCT_INFO.IMSRP1=SY1.QASRP1  )
        )
  GROUP BY
  SY2.QBAC0
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       12     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 62 
    Rows     Row Source Operation
        131  SORT GROUP BY
    3535506   HASH JOIN 
    4026100    HASH JOIN 
        922     TABLE ACCESS FULL OBJ#(187309)
    3454198     HASH JOIN 
      80065      INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
    3489670      HASH JOIN 
      65192       INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
    3489936       PARTITION RANGE ALL PARTITION: 1 9
    3489936        TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
      97152    TABLE ACCESS FULL OBJ#(187308)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.07       0.07          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch       10     92.40     929.16     798689     838484          0         131
    total       13     92.48     929.24     798689     838484          0         131
    Misses in library cache during parse: 1
    Kindly suggest how to resolve this.
    OS is Windows and it's a 9i DB.
    Thanks

    > ... you want to get rid of the IN statements.
    They prevent Oracle from using the index.
    SQL> create table mytable (id,num,description)
      2  as
      3   select level
      4        , case level
      5          when 0 then 0
      6          when 1 then 1
      7          else 2
      8          end
      9        , 'description ' || to_char(level)
    10     from dual
    11  connect by level <= 10000
    12  /
    Table created.
    SQL> create index i1 on mytable(num)
      2  /
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'mytable')
    PL/SQL procedure successfully completed.
    SQL> set autotrace on explain
    SQL> select id
      2       , num
      3       , description
      4    from mytable
      5   where num in (0,1)
      6  /
                                        ID                                    NUM DESCRIPTION
                                         1                                      1 description 1
    1 row selected.
    Execution Plan
    Plan hash value: 2172953059
    | Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |         |  5001 |   112K|     2   (0)| 00:00:01 |
    |   1 |  INLIST ITERATOR             |         |       |       |            |          |
    |   2 |   TABLE ACCESS BY INDEX ROWID| MYTABLE |  5001 |   112K|     2   (0)| 00:00:01 |
    |*  3 |    INDEX RANGE SCAN          | I1      |  5001 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - access("NUM"=0 OR "NUM"=1)
    Regards,
    Rob.

  • Why does ext HDD / time machine run slow?

    Apologies if this is answered else where, but I can't find any answers.
    I have an MBP running OSX 10.6.8. Connected to it as a Time Machine is a Freecom 400GB 28147 uk external hard drive connected over USB 2.0. I also have a Mac Mini running 10.6.8.
    When connected to my MBP, I have found that it is extremely slow to read files. I gave up using it for Time Machine a while ago as I thought there was a problem with the Time Machine software causing it to run slow. I have also tried to read the Time Machine backups through Finder and they are still slow to read.
    Today (don't know why I didn't do it before now) I plugged it into my Mac Mini with USB 2.0 and found that I can read all the Time Machine files at lightning speed, far quicker than my MBP, which can take 2-3 mins to open a folder. I have noticed that the hard drive doesn't make its usual whirring noise when I'm trying to open a folder using the MBP, right up until the last minute before the folder opens.
    Does anyone have any idea about what might be causing the problem when the ext HDD is connected to my MBP?
    Any thoughts or suggestions greatly appreciated.

    larryfromjackson wrote:
    I'm backing up to a Drobo with four 2-Gig drives.
    How are you connected to it? Network problems could explain at least some of what you're seeing.
    More often than not I don't know how long the backups take, because when it becomes too annoying I stop the backup.
    That guarantees that Time Machine will have to do a "deep scan" the next time, comparing everything on your system to the backups.   That does take a while, but if there are no other problems shouldn't have a major impact on performance.
    All of that said, I can't say for certain that there isn't something else going on.  Generally the way I notice the slowdown is when my keyboard inputs come out screwy -- ingB instead of Bring, pplAication for Application, for example.  When I know that this is happening I can work "around" it by typing the first letter, pausing for a second or two, then typing the remainder.  Does any of this ring any bells?
    I've not heard of that specifically, but if you're connected to the Drobo wirelessly, and using a Bluetooth keyboard, there are reports of interference between the two. Try a wired keyboard temporarily, and/or different WIFI channels.

  • Why my mac takes time to startup or shutdown

    Since I got my MacBook Pro (nearly 2 years ago) I have had this problem. I thought it was normal (as I had an HP before), but when I compare it with my friend's MacBook Pro, mine is much slower to both start up and shut down. Do you have any advice on what has to be done?
    By the way, the Apple shop in Abu Dhabi does not know what to do.


  • SharePoint 365 simple workflow takes too much time

    I have a workflow that is quite short, but it takes way too much time.
    All it does is set a value to 1 in the current list item and, in another list, do a calculation on 2 specific rows.
    The calculation is row1 = row1 + 1 for the first row and row2 = row2 - 1 for the other.
    Pretty simple, but it takes about 20 seconds on average to run.
    In production it should work like this:
    - people in the list click on the 3 dots ( ... ) column to start the workflow, start the workflow on the workflow page, and when the page flips back it should be refreshed.
    The problem is, if it doesn't refresh quickly enough, people hit the workflow again (thinking it never fired).
    So, I'd like this workflow to run way faster (it's so simple - why 20 seconds, why not 20 ms?),
    or else, after starting it, some sort of wait until the list has really updated (i.e. don't jump back to the list view until the lists are processed).
    Are there ways to solve this performance problem on SharePoint 365 (a wait/refresh of the previous page, or a speedup method)?

    The SharePoint workflows I make are always of the 2013 type, not 2010.
    Since I do some operations over 2 lists, I've changed the workflow now:
    after retrieving variables and performing calculations,
    I use a parallel block to split the updating of each list table. (It's down a little in time, but not much.)
    I'm not sure if that is the general way to improve workflows for speed.
    My reasoning was that if updating one list takes time, the other updates have to wait.
    So... well, it's not yet as fast as I'd like it to be.
    I'm beginning to question the performance Microsoft offers in the cloud.
    (As for logging cases with MS support, my experience with that lately is not that good:
     mostly a redirect to 06-India, people who don't understand the language,
     often unable to understand situations, even if typed out multiple times.
     The support here from TechNet web members is a lot better.)

  • I am trying to install a program on my MacBook Pro Retina from an Apple SuperDrive. Every time I run the installer, it says I do not have enough memory to install. However, I definitely have enough memory (186.74 GB). Any ideas why?

    I am trying to install a program on my MacBook Pro Retina from an Apple SuperDrive. Every time I run the installer, it says I do not have enough memory to install. However, I definitely have enough memory (186.74 GB). The program takes only 550 MB. Any ideas why?

    Thanks for the clarification on the terms. I consider myself proficient at my computer, but not exactly in its inner workings! I have 8 GB of memory installed.
    I am still confused as to why I do not have any space for the program. This is the exact error message I get: "There is not enough free space on Macintosh HD disk. The Print Shop 2 application requires about 550 MB to be installed. Free some disk space and try again."
    Thank you again for your help.

  • Recently loads of .tmp files are created and left on exit in TEMP folder. CC Cleaner cleans them but takes time because so many - why?

    recently loads of .tmp files are created and left on exit in TEMP folder. CC Cleaner cleans them but takes time because so many - why?
    == found when running CC Cleaner and it took a long time


  • Oracle9i reports take longer time while running in web

    Hi,
    I have developed a few reports in Oracle9i and I am trying to run them on the web. Running a report through Reports Builder takes less time than running the same report on the web using web.show_document. This also depends on the file size. If my report file size (.jsp file) is less than 100 KB, it takes 1 minute to show the parameter form and another minute to show the report output. If my file size is around 190 KB, the system takes at least 15 minutes to show the parameter form, and another 10 to 15 minutes to show the report output. I don't understand why the system takes so long to show the parameter form.
    I have a similar problem while opening the file in Reports Builder also. If my file size is more than 150 KB, it takes more than 15 minutes to open the file.
    Could anyone please help me on this.
    Thanks, Radha

    This problem exists only with .jsp reports. I saved the reports in .rdf format and they run faster on the web now. Opening a .jsp report takes a long time (a 600 KB file takes at least 2 hours), but the same report in .rdf format takes a few seconds to open in Reports Builder.

  • Why does my iPhone take time to open apps, contacts, etc.? I closed all apps and the problem is still there.

    I mean that my iPhone takes time to open apps, contacts, etc. Why? I closed all apps and the problem is still there.

    Try a reset by pressing and holding the home and power buttons for 15-20 seconds until the white Apple logo appears.

  • When a process flow activity takes a long time to run.

    Hello,
    I have a process flow activity that sometimes takes so long to run that the process flow never ends. Is it possible to set the activity so that if it has not completed within a certain time, the process flow automatically continues?
    I'm Using Oracle Warehouse Builder 11g on Windows Server 2003 R1.
    Greetings and thanks.

    What I've done in the past is just use a small polling procedure (PL/SQL) and a conditional transition in the process flow; depending on what the proc returns (Continue or Fail), the flow of control in the process is changed.
    I think the key is also not to have too much parallelism going on; that way it's easy to streamline the PF and remove slow-moving processes to a more linear path (stopping bottlenecks on your logical AND).
    Edited by: NSNO on Apr 29, 2010 2:31 PM
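A minimal sketch of such a polling function (untested, with hypothetical names - ACTIVITY_STATUS stands in for whatever status table your activity actually writes to). The process flow calls it and a conditional transition routes on the returned value:

```sql
-- Untested sketch: poll until the activity completes or a timeout expires.
-- ACTIVITY_STATUS is a hypothetical table; DBMS_LOCK.SLEEP needs an
-- explicit grant on DBMS_LOCK.
create or replace function poll_activity (p_activity_id in number,
                                          p_timeout_min in number)
  return varchar2
is
  v_done     number;
  v_deadline date := sysdate + p_timeout_min / 1440;
begin
  loop
    select count(*) into v_done
      from activity_status
     where activity_id = p_activity_id
       and status = 'COMPLETE';
    if v_done > 0 then
      return 'CONTINUE';                 -- finished: take the normal branch
    elsif sysdate > v_deadline then
      return 'FAIL';                     -- timed out: take the other branch
    end if;
    dbms_lock.sleep(60);                 -- wait a minute before polling again
  end loop;
end poll_activity;
/
```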

  • The 0co_om_opa_6 ip in the process chains takes long time to run

    Hi experts,
    The 0co_om_opa_6 ip in the process chains takes long time to run around 5 hours in production
    I have checked the note 382329,
    -> where the indexes 1 and 4 are active
    -> index 4 was showing "Index does not exist in database system ORACLE" - I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev; it took 2-1/2 hrs to run, as it was taking earlier, so I didn't find much difference in performance.
    As per the note Note 549552 - CO line item extractors: performance, i have checked in the table BWOM_SETTINGS these are the settings that are there in the ECC system.
    -> OLTPSOURCE -  is blank
       PARAM_NAME - OBJSELSIZE
       PARAM_VALUE- is blank
    -> OLTPSOURCE - is blank
       PARAM_NAME - NOTSSELECT
       PARAM_VALUE- is blank
    -> OLTPSOURCE- 0CO_OM_OPA_6
       PARAM_NAME - NOBLOCKING
       PARAM_VALUE- is blank.
    Could you please check if any other settings needs to be done .
    Also, for the IP there is selection criteria for FISCALYEAR/PERIOD from 2004-2099, and an init is done for the same period; as a result it is becoming difficult for me to load for a single year.
    Please suggest.

    The problem was that index 4 was not active at the database level. It was recommended by the SAP team to activate it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully... the index should be activated, not created.
    The OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve the quality, and index 4 should be activated at the ABAP level, i.e. in the table COEP -> INDEXES -> INDEX 4 -> select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP level, you can activate the same index at the database level.
    Be very careful while you execute it in SE14; best is to use DB02 to do the same, as basis tends to make fewer mistakes there.
    Thanks. Hope this helps.

  • [SOLVED] systemd-tmpfiles-clean takes a very long time to run

    I've been having an issue for a while with systemd-tmpfiles-clean.service taking a very long time to run. I've tried to just ignore it, but it's really bothering me now.
    Measuring by running:
    # time systemd-tmpfiles --clean
    systemd-tmpfiles --clean 11.63s user 110.37s system 10% cpu 19:00.67 total
    I don't seem to have anything funky in any tmpfiles.d:
    # ls /usr/lib/tmpfiles.d/* /run/tmpfiles.d/* /etc/tmpfiles.d/* | pacman -Qo -
    ls: cannot access /etc/tmpfiles.d/*: No such file or directory
    error: No package owns /run/tmpfiles.d/kmod.conf
    /usr/lib/tmpfiles.d/gvfsd-fuse-tmpfiles.conf is owned by gvfs 1.20.1-2
    /usr/lib/tmpfiles.d/lastlog.conf is owned by shadow 4.1.5.1-9
    /usr/lib/tmpfiles.d/legacy.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/libvirt.conf is owned by libvirt 1.2.4-1
    /usr/lib/tmpfiles.d/lighttpd.conf is owned by lighttpd 1.4.35-1
    /usr/lib/tmpfiles.d/lirc.conf is owned by lirc-utils 1:0.9.0-71
    /usr/lib/tmpfiles.d/mkinitcpio.conf is owned by mkinitcpio 17-1
    /usr/lib/tmpfiles.d/nscd.conf is owned by glibc 2.19-4
    /usr/lib/tmpfiles.d/postgresql.conf is owned by postgresql 9.3.4-1
    /usr/lib/tmpfiles.d/samba.conf is owned by samba 4.1.7-1
    /usr/lib/tmpfiles.d/slapd.conf is owned by openldap 2.4.39-1
    /usr/lib/tmpfiles.d/sudo.conf is owned by sudo 1.8.10.p2-1
    /usr/lib/tmpfiles.d/svnserve.conf is owned by subversion 1.8.8-1
    /usr/lib/tmpfiles.d/systemd.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/systemd-nologin.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/tmp.conf is owned by systemd 212-3
    /usr/lib/tmpfiles.d/uuidd.conf is owned by util-linux 2.24.1-6
    /usr/lib/tmpfiles.d/x11.conf is owned by systemd 212-3
    How do I debug why it is taking so long? I've looked in man 8 systemd-tmpfiles and on Google, hoping to find some sort of --debug option, but there seems to be none.
    Is it somehow possible to get a list of the directories that it looks at when it runs?
    Anyone have any suggestions on how else to fix this?
    Anyone else have this issue?
    Thanks,
    Gary
    Last edited by garyvdm (2014-05-08 18:57:43)

    Thank you very much falconindy. SYSTEMD_LOG_LEVEL=debug helped me find my issue.
    The cause of the problem was thousands of directories in /var/tmp/ created by a test suite with a broken clean up method. systemd-tmpfiles-clean was recursing through these, but not deleting them.
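For anyone else hitting this: the debugging step above is just running the cleaner by hand with debug logging, which prints every path it recurses through. Below that command is a small sandboxed sketch (hypothetical 30-day cutoff, GNU touch/find assumed) of the age-based selection tmpfiles-clean performs for a config line like `d /var/tmp 1777 root root 30d`:

```shell
# From the fix above: run the cleaner by hand with debug logging to see
# every directory it recurses through (requires root on a systemd system):
#   SYSTEMD_LOG_LEVEL=debug systemd-tmpfiles --clean
#
# Sandboxed illustration of the age-based selection (hypothetical cutoff):
sandbox=$(mktemp -d)
mkdir "$sandbox/fresh" "$sandbox/stale"
touch -d "40 days ago" "$sandbox/stale"     # age one entry past the cutoff
stale=$(find "$sandbox" -mindepth 1 -maxdepth 1 -mtime +30)
echo "$stale"                               # only the aged entry is selected
rm -rf "$sandbox"
```

If thousands of directories survive under /var/tmp (as in the broken test suite above), that find-style walk is exactly what makes the clean run slow.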

  • TS3276 why does it take a lot of time to fetch my mails from yahoo server whenever i open the mail icon?

    Why does it take a lot of time to fetch my mail from the Yahoo server whenever I open the Mail icon?

    Go to Mail Preferences > General Tab > and the second line should say how often it will check for new messages

  • Simple APD is taking too much time in running

    Hi All,
    We have one APD created on our development system which is taking too much time to run.
    This APD fetches data from a Query having only 1200 records and puts it directly into a master attribute.
    The Query runs fine in the RSRT transaction and gives output within 5 seconds, but in APD, if I display data over the Query, it takes too much time.
    The APD takes around 1.20 hours to run.
    Thanks in advance!!

    Hi,
    When a query runs in APD it normally takes much, much longer than it takes in RSRT. Run times such as you are seeing (5 secs in RSRT and >1.5 hrs in APD) are quite normal; I've seen some of my queries run for several hours in APD as well.
    You just have to wait for it to complete.
    Regards,
    Suhas

  • Since I ran iCloud on my iPhone the camera doesn't work any more, even though I deleted my iCloud account. When I tap the camera to take a photo, after a second it goes to the home page. Can you help me please?

    Since I ran iCloud on my phone, the camera doesn't work any more. When I tap the camera to take a photo, after a second it goes to the home page. I turned off Photo Stream, reset my phone, and even restored it again, but the camera still doesn't work. What shall I do?

    Dude or hot-spur,
    Not fishy at all!!!! You really think I would have taken any more of my time to post a bogus rant? Well, no, it is unfortunately all true. I want to hear from anyone who is experiencing such problems, not anyone else that wants to put their two cents in. I really don't need any more hate or negativity. Just people's experiences, because I feel like I am the only one having so many problems. Thank you
    Yes, I have had some unfortunate bad luck to have so many problems at the same time, but some of them are just things you cannot do on the new OS.
    Thank you again

Maybe you are looking for

  • I am trying to work on Live Cycle Mosaic 9.5 and I have downloaded FlashBuilder 4 and trying to con

    I am trying to work on Live Cycle Mosaic 9.5 and I have downloaded FlashBuilder 4 and trying to configure the mosaic plugin "LiveCycle Mosaic ES2 Plugin for Flash Builder 4" in Flash Builder as per following directions given on adobe site:      "Add

  • Error in transaction code F-04, table T043G

    Hi experts, In executing transaction code F-04, I receive the following error:  "Entry for Company XXXX not defined in table T043G" I have checked table T043G and the company is not listed.  What config steps are necessary to correct this? Thanks for

  • Why can't firewire carry audio DIRECTLY between 2 macs?

    Hi, My situation.....(I'll simplify) Imagine having 2 macs: 1 G5 (runs Logic Audio) 1 Mac Mini (used as slave to gain access to more gigs of RAM for orchestra) Now....in order to pipe 8 channels of digital audio from MAc_Mini to the G5, it is neccess

  • 1.4.2 and 1.5 serialization portability

    Hello, I came across the problem of reading class previously serialized w/ jdk 1.4.2_02 using jdk 1.5. I have embedded object database and everything works fine with the jdk1.4. The problem is happening when I am trying to run the program with jre 1.

  • Windows XP on boot camp. FAT32 or NTFS?

    I am going to install Windows XP with boot camp. (Until Win7 support comes out). What should I use? FAT32 or NTFS? I tried both but I don't like FAT32 because of limited storage space (32gb). I tried NTFS but It did not read my USB or Disc.