Takes a long time to drop tables with large numbers of partitions

11.2.0.3
This is for a build. We are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. That lets us test the build all the way through. It's our process.
This user has some tables with several thousand partitions. I ran a 10046 trace and Oracle is using PL/SQL loops to do DML against the data dictionary. Any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards.
Anything else I can do? Right now I just issue DROP USER ... CASCADE. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this) and we do fairly frequent builds.
I can't change the build process. My only option is to try to make this run a little faster. I can't do anything about the hardware (lots of VMs crammed onto too few servers).
This is not a production issue. It's more of a hassle.

Support Note 798586.1 shows that DROP USER CASCADE was slower than dropping the individual objects first -- at least in 10.2. Not sure if that is still the case in 11.2.
Hemant K Chitale
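
Building on that note, here is a minimal sketch of what the drop step could try instead of a bare DROP USER CASCADE: drop the partitioned tables individually with PURGE (so nothing lands in the recyclebin) before dropping the user. The schema name BUILD_USER is illustrative, and this assumes the account running it has the DROP ANY TABLE privilege.

    BEGIN
      FOR t IN (SELECT owner, table_name
                FROM   dba_tables
                WHERE  owner = 'BUILD_USER'        -- illustrative schema name
                AND    partitioned = 'YES')
      LOOP
        -- PURGE bypasses the recyclebin, so the partition segments are freed
        -- immediately instead of being kept as BIN$ objects that
        -- DROP USER CASCADE would still have to clean up.
        EXECUTE IMMEDIATE 'DROP TABLE "' || t.owner || '"."'
                          || t.table_name || '" PURGE';
      END LOOP;
    END;
    /
    DROP USER build_user CASCADE;

Whether this actually beats the single CASCADE on 11.2.0.3 is worth timing in your environment; the note above only documents the 10.2 behaviour.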

Similar Messages

• Analyze a query which takes a long time in the Production server with ST03 only

    Hi,
I want to analyze a query which takes a long time in the Production server, using the ST03 t-code only.
Please provide detailed steps on how to perform this with ST03.
ST03 - Expert mode - I need to know the steps after this. I have checked many threads, so please don't send me links.
Please write the steps in detail.
    <REMOVED BY MODERATOR>
    Regards,
    Sameer
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM

    Then please close the thread.
    Greetings,
    Blag.

• How to tune this SQL (takes a long time to come up with results)

    Dear all,
I have some SQL which takes a long time ... can anyone help me tune this? Thank you.
SELECT SUM (n_amount)
FROM (SELECT DECODE (v_payment_type,
                     'D', n_amount,
                     'C', -n_amount) n_amount,
             v_vou_no
      FROM   vouch_det a, temp_global_temp b
      WHERE  a.v_vou_no = TO_CHAR (b.n_column2)
      AND    b.n_column1 = :b5
      AND    b.v_column1 IN (:b4, :b3)
      AND    v_desc IN (SELECT v_trans_source_code
                        FROM   benefit_trans_source
                        WHERE  v_income_tax_app = :b6)
      AND    v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1)
      UNION ALL
      SELECT DECODE (v_payment_type,
                     'D', n_amount,
                     'C', -n_amount) * -1 AS n_amount,
             v_vou_no
      FROM   vouch_details a, temp_global_temp b
      WHERE  a.v_vou_no = TO_CHAR (b.n_column2)
      AND    b.n_column1 = :b5
      AND    b.v_column1 IN (:b12, :b11, :b10, :b9, :b8, :b7)
      AND    v_desc IN (SELECT v_trans_source_code
                        FROM   benefit_trans_source
                        WHERE  income_tax_app = :b6)
      AND    v_lob_code = DECODE (:b1, :b2, v_lob_code, :b1));
    Thank You.....

    Thanks a lot,
    i did change the SQL it works fine but slows down my main query.... actually my main query is calling a function which does the sum......
    here is the query.....?
SELECT a.*
FROM (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
             a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
             a.v_company_branch, a.v_it_no,
             bfn_get_agent_name(a.n_agent_no) agentname,
             PKG_AGE__TAX.GET_TAX_AMT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no) comm,
             c.v_ird_region
      FROM   agent_master a, agent_lob b, agency_region c
      WHERE  a.n_agent_no = b.n_agent_no
      AND    a.v_agency_region = c.v_agency_region
      AND    :p_lob_code = DECODE(:p_lob_code, 'ALL', 'ALL', b.v_line_of_business)
      AND    :p_channel_no = DECODE(:p_channel_no, 1000, 1000, a.n_channel_no)
      AND    :p_agency_group = DECODE(:p_agency_group, 'ALL', 'ALL', c.v_ird_region)
      GROUP BY a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
               a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
               a.v_company_branch, a.v_it_no,
               bfn_get_agent_name(a.n_agent_no),
               BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no),
               c.v_ird_region
      ORDER BY c.v_ird_region, a.v_agent_code DESC) a
WHERE (comm < :P_VAL_IND OR comm >= :P_VAL_IND1);
Any idea how to make this faster?
Thank you.
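
One well-known technique that may help here (a sketch only; slow_fn and scott.dept are stand-ins, not names from the thread): wrapping a PL/SQL function call in a scalar subquery lets Oracle cache the result per distinct input, so the function fires far fewer times than once per row.

    -- Hypothetical function and table, purely for illustration.
    SELECT d.deptno,
           (SELECT slow_fn(d.deptno) FROM dual) AS fn_val   -- cached per distinct deptno
    FROM   scott.dept d;

Applied to the query above, PKG_AGE__TAX.GET_TAX_AMT(...) would become (SELECT PKG_AGE__TAX.GET_TAX_AMT(...) FROM dual). How much it helps depends on how many distinct agent numbers there are.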

  • Handling tables with large numbers of fields

    Hi
    What is the best practice to deal with tables having large numbers of fields? Ideally, I would like to create folders under a Presentation Table and group fields into folders (and leave fields that may be needed rarely in a folder named 'Other Information').
    Is there a way to do this in Oracle BI? Any alternatives?
    Thanks

    Answering my own question:
    http://oraclebizint.wordpress.com/2008/01/31/oracle-bi-ee-10133-nesting-folders-in-presentation-layer-and-answers/
This is definitely a working solution (creating multiple tables and entering '->' in their description in order for them to act as subfolders). Definitely not intuitive and extremely ugly, especially since reordering tables and columns isn't possible (or is it, in some other non-obvious way?).
    Anyway it seems we have to live with this.

• DrawImage takes a long time for images created with Photoshop

    Hello,
I created a simple program to resize images using the drawImage method, and it works very well for all images except those that have been created or modified with Photoshop 8.
    The main block of my code is
public static BufferedImage scale(BufferedImage image,
                                  int targetWidth, int targetHeight) {
    // Preserve transparency: opaque sources get RGB, everything else ARGB.
    int type = (image.getTransparency() == Transparency.OPAQUE)
                    ? BufferedImage.TYPE_INT_RGB
                    : BufferedImage.TYPE_INT_ARGB;
    BufferedImage ret = image;
    BufferedImage temp = new BufferedImage(targetWidth, targetHeight, type);
    Graphics2D g2 = temp.createGraphics();
    g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                        RenderingHints.VALUE_INTERPOLATION_BICUBIC);
    g2.drawImage(ret, 0, 0, targetWidth, targetHeight, null);
    g2.dispose();
    ret = temp;
    return ret;
}
The program is a little longer, but this is the gist of it.
When I run a jpg through this program (without Photoshop modifications), I get the following trace results (timing each line of the code) telling me how long each step took in milliseconds:
    Temp BufferedImage: 16
    createGraphics: 78
    drawimage: 31
    dispose: 0
However, the same image saved in Photoshop (no modifications except saving in Photoshop) gave me the following results:
    Temp BufferedImage: 16
    createGraphics: 78
    drawimage: 27250
    dispose: 0
    The difference is shocking. It took the drawImage process 27 seconds to resize the file in comparison to 0.78 seconds!
    My questions:
    1. Why does it take so much longer for the drawImage to process the file when the file is saved in Photoshop?
    2. Are there any code improvements which will speed up the image drawing?
    Thanks for your help,
    -Rogier

You saved the file in PNG format. The default PNGImageReader in core Java has a habit of occasionally returning TYPE_CUSTOM buffered images. Photoshop 8 probably saves the PNG file in such a way that TYPE_CUSTOM pops up more.
    And when you draw a TYPE_CUSTOM buffered image onto a graphics context it almost always takes an unbearably long time.
    So a quick fix would be to load the file with the Toolkit instead, and then scale that image.
Image img = Toolkit.getDefaultToolkit().createImage(/*the file*/);
new ImageIcon(img);   // blocks until the image is fully loaded
//send off image to be scaled
A more elaborate fix involves specifying the type of BufferedImage you want the PNGImageReader to use:
ImageInputStream in = ImageIO.createImageInputStream(/*file*/);
ImageReader reader = ImageIO.getImageReaders(in).next();
reader.setInput(in, true, true);
ImageTypeSpecifier sourceImageType = reader.getImageTypes(0).next();
ImageReadParam readParam = reader.getDefaultReadParam();
//to implement
configureReadParam(sourceImageType, readParam);
BufferedImage img = reader.read(0, readParam);
//clean up
reader.dispose();
in.close();
The thing that needs to be implemented is the method I called configureReadParam. In this method you would check the color space, color model, and BufferedImage type of the supplied ImageTypeSpecifier, and set a new ImageTypeSpecifier on the readParam if need be. The method essentially boils down to a series of if statements:
1) If the image type specifier already uses a non-custom BufferedImage, then all is well and we don't need to do anything to the readParam.
2) If the ColorSpace is gray, then we create a new ImageTypeSpecifier based on a TYPE_BYTE_GRAY BufferedImage.
3) If the ColorSpace is gray but the color model includes alpha, then we do the above and also call setSourceBands on the readParam to discard the alpha channel.
4) If the ColorSpace is RGB and the color model includes alpha, then we create a new ImageTypeSpecifier based on an ARGB BufferedImage.
5) If the ColorSpace is RGB and the color model doesn't include alpha, then we create a new ImageTypeSpecifier based on TYPE_3BYTE_BGR.
6) If the ColorSpace is not gray or RGB, then we do nothing to the readParam and ColorConvertOp the resulting image to an RGB image.
    If this looks absolutely daunting to you, then go with the Toolkit approach mentioned first.

• How to tune this simple SQL (takes a long time to come up with results)

    the following SQL is very slow as it takes one day to complete...
SELECT a.*
FROM (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
             a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
             a.v_company_branch, a.v_it_no,
             bfn_get_agent_name(a.n_agent_no) agentname,
             PKG_AGE__TAX.GET_TAX_AMT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no) comm,
             c.v_ird_region
      FROM   agent_master a, agent_lob b, agency_region c
      WHERE  a.n_agent_no = b.n_agent_no
      AND    a.v_agency_region = c.v_agency_region
      --AND  :p_lob_code = DECODE(:p_lob_code, 'ALL', 'ALL', b.v_line_of_business)
      --AND  :p_channel_no = DECODE(:p_channel_no, 1000, 1000, a.n_channel_no)
      --AND  :p_agency_group = DECODE(:p_agency_group, 'ALL', 'ALL', c.v_ird_region)
      GROUP BY a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
               a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
               a.v_company_branch, a.v_it_no,
               bfn_get_agent_name(a.n_agent_no),
               BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no),
               c.v_ird_region
      ORDER BY c.v_ird_region, a.v_agent_code DESC) a
WHERE (comm < :P_VAL_IND OR comm >= :P_VAL_IND1);
It should return all the agents with commission based on the date parameters... the data is less than 50K rows in all the tables...
The version is Oracle9i Enterprise Edition Release 9.2.0.5.0.
SQL> explain plan for
SELECT a.*
FROM (SELECT a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
             a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
             a.v_company_branch, a.v_it_no,
             bfn_get_agent_name(a.n_agent_no) agentname,
             BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no) comm,
             c.v_ird_region
      FROM   ammm_agent_master a, ammt_agent_lob b, gnlu_agency_region c
      WHERE  a.n_agent_no = b.n_agent_no
      AND    a.v_agency_region = c.v_agency_region
      --AND  :p_lob_code = DECODE(:p_lob_code, 'ALL', 'ALL', b.v_line_of_business)
      --AND  :p_channel_no = DECODE(:p_channel_no, 1000, 1000, a.n_channel_no)
      --AND  :p_agency_group = DECODE(:p_agency_group, 'ALL', 'ALL', c.v_ird_region)
      GROUP BY a.n_agent_no, a.v_agent_code, a.n_channel_no, v_iden_no,
               a.n_cust_ref_no, a.v_agent_type, a.v_company_code,
               a.v_company_branch, a.v_it_no,
               bfn_get_agent_name(a.n_agent_no),
               BPG_AGENCY_GEN_ACL_TAX.BFN_GET_TAX_AMOUNT(:P_FROM_DATE, :P_TO_DATE, :P_LOB_CODE, a.n_agent_no),
               c.v_ird_region
      ORDER BY c.v_ird_region, a.v_agent_code DESC) a
WHERE (comm < :P_VAL_IND OR comm >= :P_VAL_IND1);

Explained.
    SQL>  select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
| Id  | Operation                       | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)|
|   0 | SELECT STATEMENT                |                | 13315 |    27M|       |   859  (63)|
|   1 |  VIEW                           |                | 13315 |    27M|       |            |
|   2 |   SORT GROUP BY                 |                | 13315 |   936K|  2104K|   859  (63)|
|   3 |    HASH JOIN                    |                | 13315 |   936K|       |   641  (81)|
|   4 |     MERGE JOIN                  |                |  3118 |   204K|       |   512  (86)|
|   5 |      TABLE ACCESS BY INDEX ROWID| AGENCY_REGION  |     8 |   152 |       |     3  (34)|
|   6 |       INDEX FULL SCAN           | SYS_C004994    |     8 |       |       |     2  (50)|
|   7 |      SORT JOIN                  |                |  3142 |   147K|       |   510  (86)|
|   8 |       TABLE ACCESS FULL         | AGENT_MASTER   |  3142 |   147K|       |   506  (86)|
|   9 |     TABLE ACCESS FULL           | AGENT_LOB      |   127K|   623K|       |   102  (50)|
    Note: PLAN_TABLE' is old version
    17 rows selected.
This is the only information I can get, as I cannot access the database server (user security limitation).
Thank you.

Try to remove this:
ORDER BY c.v_ird_region, a.v_agent_code DESC
Or move it to the end of the entire query.
Edited by: Random on Jun 19, 2009 1:01 PM
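
For illustration, a minimal sketch of the shape being suggested (scott.emp is a stand-in table, not one from the thread): sorting inside an inline view is wasted work when the outer query filters afterwards, so the ORDER BY belongs on the outermost query.

    SELECT x.*
    FROM  (SELECT deptno, SUM(sal) AS total_sal
           FROM   scott.emp
           GROUP  BY deptno) x
    WHERE  x.total_sal > 1000
    ORDER  BY x.total_sal DESC;   -- sort once, at the end of the entire query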

• CV04N takes a long time to process a select query on the DRAT table

    Hello Team,
While using CV04N to display DIRs, it takes a long time to process the select query on the DRAT table. This query includes all the key fields. Any idea as to how to analyse this?
    Thanks and best regards,
    Bobby
    Moderator message: please read the sticky threads of this forum, there is a lot of information on what you can do.
    Edited by: Thomas Zloch on Feb 24, 2012

    Be aware that XP takes approx 1gb of your RAM leaving you with 1gb for whatever else is running. MS Outlook is also a memory hog.
    To check Virtual Memory Settings:
    Control Panel -> System
    System Properties -> Advanced Tab -> Performance Settings
Performance Options -> Advanced Tab -> Virtual Memory section
    Virtual Memory -
    what are
    * Initial Size
    * Maximum Size
    In a presentation at one of the Hyperion conferences years ago, Mark Ostroff suggested that the initial be set to the same as Max. (Max is typically 2x physical RAM)
    These changes may provide some improvement.

• INSERT INTO TABLE using SELECT takes a long time

    Hello Friends,
    --- Oracle version 10.2.0.4.0
--- I am trying to insert around 2.5 lakh records into a table using INSERT ... SELECT. The insert takes a long time and seems to be hung.
--- When I try the SELECT on its own, the query fetches the rows in 10 seconds.
--- Any clue why it is taking so much time?

vishalrs wrote:
> Hello Friends,
hello
> --- Oracle version 10.2.0.4.0
alright
> --- I am trying to insert around 2.5 lakh records into a table using INSERT ... SELECT. The insert takes a long time and seems to be hung.
I don't know how much a lakh is, but it sounds like a lot...
> --- When I try the SELECT on its own, the query fetches the rows in 10 seconds.
how did you test this? and did you fetch the last record, or just the first couple of hundred?
> --- Any clue why it is taking so much time?
Without seeing anything, it's impossible to tell the reason.
Search the forum for "When your query takes too long"
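
As a quick experiment while waiting on a proper diagnosis, a sketch of a direct-path insert (the table names here are hypothetical, created only so the example runs):

    -- Hypothetical source/target tables, for illustration only.
    CREATE TABLE src_tab AS
      SELECT level AS id, RPAD('x', 100) AS pad
      FROM   dual CONNECT BY level <= 250000;
    CREATE TABLE tgt_tab AS SELECT * FROM src_tab WHERE 1 = 0;

    -- The APPEND hint requests a direct-path insert, writing blocks
    -- above the high-water mark instead of through the buffer cache.
    INSERT /*+ APPEND */ INTO tgt_tab
    SELECT * FROM src_tab;
    COMMIT;   -- mandatory before the session can query tgt_tab again

If the ordinary insert truly hangs rather than runs slowly, also check V$SESSION for a blocking lock on the target table before reaching for hints.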

• I have the current entry-level Mac Pro with the default specification, and performance feels slow when applying after effects to my videos using Final Cut Pro; rendering a video also takes a long time. What upgrades do you guys suggest?

I have the current entry-level Mac Pro with the default configuration, and I feel a lack of performance when applying after effects to my videos using Final Cut Pro; rendering a video also takes a long time. What upgrades do you guys suggest I could make to my Mac Pro?

The 256GB SSD it shipped with will run low; that is one of the things to watch.
The default memory of 12GB is also something to think about.
    D500 and FCP-X 10.1+
    http://macperformanceguide.com/index_topics.html#MacPro2013Performance
    Five models of 2013 Mac Pro running Resolve, FCPX, After Effects, Photoshop, and Aperture

• Why Outlook 2011 (Mac version) takes a long time to boot with OS X


Okay, so after doing all of the above, the computer still takes between 40 seconds and a minute or so to boot up, and the VersionCue messages still appear. However, I discovered that the "kdcmond cannot retrieve..." messages disappeared after I disabled my ethernet connections. So at least I know that had nothing to do with the extended boot-up time.
I have heard that the more RAM you have, the longer it takes to boot, due to the RAM count. Since I have 10 GB, maybe this is why?
    I've included the most recent Console messages below:
    22/4/08 9:56:16 AM com.apple.launchctl.System[2] launchctl: Please convert the following to launchd: /etc/mach_init.d/dashboardadvisoryd.plist
    22/4/08 9:56:16 AM com.apple.launchd[1] (com.adobe.versioncueCS3) Unknown key: ServiceDescription
    22/4/08 9:56:16 AM com.apple.launchd[1] (org.cups.cups-lpd) Unknown key: SHAuthorizationRight
    22/4/08 9:56:16 AM com.apple.launchd[1] (org.cups.cupsd) Unknown key: SHAuthorizationRight
    22/4/08 9:56:16 AM com.apple.launchd[1] (org.ntp.ntpd) Unknown key: SHAuthorizationRight
    22/4/08 9:56:39 AM com.apple.SystemStarter[28] Starting Aladdin USB daemon
    22/4/08 9:56:39 AM org.ntp.ntpd[25] Error : nodename nor servname provided, or not known
    22/4/08 9:56:39 AM com.apple.launchd[1] (com.apple.UserEventAgent-LoginWindow[74]) Exited: Terminated
    22/4/08 9:56:39 AM com.apple.launchctl.Aqua[90] launchctl: Please convert the following to launchd: /etc/machinit_peruser.d/com.adobe.versioncueCS3.monitor.plist
    22/4/08 9:56:42 AM com.apple.launchd[82] (0x1011e0.VersionCueCS3monitor) Failed to check-in!

• MIRO takes a long time when entering an invoice for a PO with GR-based invoice verification

    Hi,
In my client's system, the system takes a long time to extract PO data while booking an invoice via MIRO for purchase orders that have the GR-based IV flag set. However, the system takes only a few seconds to extract PO data when entering an invoice via MIRO for a PO that does not have GR-based IV. Please note the following points while providing a solution:
- the problem exists only for purchase orders related to one company code; the system works perfectly for the other company codes in the same client. Hence we assume some company-code-level configuration is missing.
- the problem exists for POs with account assignment K.
- we have a one-to-one mapping of purchasing organization to company code to plant.
I would appreciate a quick response. Thanks in advance.
    Regards,
    sp sahu

    Hi,
Please check with your FI colleague the G/L account and cost centers you are using to create the PO with account assignment category K.
If the problem still persists, check with your ABAP person.
    Regards,
    Mohd Ali.

• MVIEW refresh takes a long time

A materialized view takes a long time to refresh, but when I tried a direct SELECT & INSERT into a table it was very fast.
I executed the SQL and it takes just 1 min (total rows: 447),
but when I refresh the MVIEW it takes 1.5 hrs (total rows: 447).
MVIEW configuration:
    CREATE MATERIALIZED VIEW EVAL.EVALSEARCH_PRV_LWC
    TABLESPACE EVAL_T_S_01
    NOCACHE
    NOLOGGING
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    REFRESH FORCE ON DEMAND
    WITH PRIMARY KEY
Not sure why there is so much difference.

infant_raj wrote:
> A materialized view takes a long time to refresh, but when I tried a direct SELECT & INSERT into a table it was very fast. I executed the SQL and it takes just 1 min (total rows: 447), but when I refresh the MVIEW it takes 1.5 hrs (total rows: 447).
A SELECT does a consistent read.
A MV refresh does that and also writes database data.
    These are not the same thing and cannot be directly compared.
    So instead of pointing at the SELECT execution time and asking why the MV refresh is not as fast, look instead WHAT the refresh is doing and HOW it is doing that.
    Is the execution plan sane? What events are the top ones for the MV refresh? What are the wait states that contributes most to the processing time of the refresh?
    You cannot use the SELECT statement's execution time as a direct comparison metric. The work done by the refresh is more than the work done by the SELECT. You need to determine exactly what work is done by the refresh and whether that work is done in a reasonable time, and how other sessions are impacting the refresh (it could very well be blocked by another session).
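
Two concrete starting points, sketched under the assumption that this is a complete refresh (the MV name is the one from the thread; the SID bind is a placeholder):

    -- 1) By default a complete refresh is atomic: it DELETEs and re-inserts,
    --    which is far slower than TRUNCATE plus direct-path insert. If a
    --    briefly empty MV is acceptable, try a non-atomic refresh:
    BEGIN
      DBMS_MVIEW.REFRESH(list           => 'EVAL.EVALSEARCH_PRV_LWC',
                         method         => 'C',
                         atomic_refresh => FALSE);
    END;
    /

    -- 2) While the refresh runs, look at what the session is waiting on:
    SELECT event, state, seconds_in_wait
    FROM   v$session
    WHERE  sid = :refresh_sid;   -- SID of the refreshing session

If the wait events point at another session's locks rather than I/O, the 1.5 hours is queueing, not work.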

• Takes a long time for data loading

    Hi All,
Good morning. I am new to SDN.
I am currently using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. I use around 40 of those fields in my report.
Can I hide the remaining fields at the datasource level itself (t-code RSA6)?
Currently the data load takes a long time to move the data from PSA to ODS (ODS 1).
I am also pulling some data from another ODS (ODS 2) as a lookup, and it takes a long time to update the data in the active data table of the ODS.
Can you please suggest how to improve the data-loading performance in this case?
    Thanks & Regards,
    Siva.

Hi,
Yes, you can hide them: just check the Hide box for those fields. Are you on BI 7.0 or BW 3.x? Either way: is the number of records huge?
If so, you can split the records and execute: use the same InfoPackage, but execute it with different selections.
Check in ST04 whether there are any locks or lock waits. If so, go to SM37 and check whether any long-running job exists and whether it is progressing: double-click the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
Also check the system log in SM21 and short dumps in ST22.
To improve performance, you can try to increase the virtual memory or the number of servers, if possible; that will increase the number of work processes, since when many jobs run at once there may be no free work processes left.
Regards,
Debjani

• SELECT statement takes a long time

    Hi All,
In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
We already have an index on EQUNR in the OBJK table.
Only for blank entries does it take so much time. Can anybody tell me why it behaves this way for blank entries?
    if not T_QMIH[] IS INITIAL.
            SORT T_QMIH BY EQUNR.
            REFRESH T_OBJK.
            SELECT EQUNR OBKNR
              FROM OBJK INTO TABLE T_OBJK
              FOR ALL ENTRIES IN T_QMIH
              WHERE OBJK~TASER = 'SER01' AND
             OBJK~EQUNR = T_QMIH-EQUNR.
    Thanks
    Ajay

Hi
You can use the field QMIH-QMNUM with OBJK-IHNUM.
In the QMIH table, EQUNR is not a primary key, so it will have multiple entries. To improve performance, use a dummy internal table for T_QMIH, sort it on EQUNR, and delete adjacent duplicates from it; use that table in the FOR ALL ENTRIES.
Also list the WHERE fields in the sequence of the index, and include the primary key fields in the SELECT:
if not T_QMIH[] IS INITIAL.
  SORT T_QMIH BY EQUNR.
  REFRESH T_OBJK.
  SELECT EQUNR OBKNR
    FROM OBJK INTO TABLE T_OBJK
    FOR ALL ENTRIES IN T_QMIH
    WHERE IHNUM = T_QMIH-QMNUM
      AND TASER = 'SER01'
      AND EQUNR = T_QMIH-EQUNR.
try this and let me know
regards
Shiva

• Procedure takes a long time to execute...

    Hi all
I wrote the procedure below, but it takes a long time to execute.
The INTERDATA table contains 300 records.
    Here is the procedure:
create or replace procedure inter_filter
is
  /*v_sessionid interdata.sessionid%type;
  v_clientip interdata.clientip%type;
  v_userid interdata.userid%type;
  v_logindate interdata.logindate%type;
  v_createddate interdata.createddate%type;
  v_sourceurl interdata.sourceurl%type;
  v_destinationurl interdata.destinationurl%type;*/
  v_sessionid filter.sessionid%type;
  v_filterid filter.filterid%type;
  cursor c1 is
    select sessionid, clientip, browsertype, userid, logindate, createddate, sourceurl, destinationurl
    from interdata;
  cursor c2 is
    select sessionid, filterid
    from filter;
begin
  open c2;
  loop
    fetch c2 into v_sessionid, v_filterid;
    exit when c2%notfound;   -- without this the loop never terminates
    for i in c1 loop
      if i.sessionid = v_sessionid then
        insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
        values (filterdetail_seq.nextval, v_filterid, i.sourceurl, i.destinationurl, i.createddate);
      else
        insert into filter (filterid, sessionid, clientip, browsertype, userid, logindate, createddate)
        values (filter_seq.nextval, i.sessionid, i.clientip, i.browsertype, i.userid, i.logindate, i.createddate);
        insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
        values (filterdetail_seq.nextval, filter_seq.currval, i.sourceurl, i.destinationurl, i.createddate);
      end if;
    end loop;
  end loop;
  close c2;
  commit;
end;
    Please Help!
    Prathamesh

> I wrote the procedure, but it takes a long time to execute.
Please define "long time". How long does it take? What were you expecting it to take?
> The INTERDATA table contains 300 records.
But how many records are there in the FILTER table? As this is the one you are driving off, it is going to determine the length of time the procedure takes to complete. Also, this solution inserts every row of the INTERDATA table once for each row in the FILTER table - in other words, if the FILTER table has twenty rows to start with, you are going to end up with 6000 rows in FILTERDETAIL. No wonder it takes a long time. Is that what you want?
Also, of course, you are using PL/SQL cursors when you ought to be using set operations. Did you try the solution I posted in Re: Confusion in this scenario>>>>>>> on this topic?
Cheers, APC
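
For reference, a sketch of the set-based shape APC is pointing at, assuming SESSIONID is what links INTERDATA to FILTER and that INTERDATA has one row per session (table, column, and sequence names are the ones from the posted procedure; de-duplicate first if sessions repeat):

    -- Create FILTER rows for sessions not seen before.
    INSERT INTO filter (filterid, sessionid, clientip, browsertype,
                        userid, logindate, createddate)
    SELECT filter_seq.NEXTVAL, i.sessionid, i.clientip, i.browsertype,
           i.userid, i.logindate, i.createddate
    FROM   interdata i
    WHERE  NOT EXISTS (SELECT 1 FROM filter f
                       WHERE  f.sessionid = i.sessionid);

    -- Then one detail row per INTERDATA row, joined to its FILTER row.
    INSERT INTO filterdetail (filterdetailid, filterid, sourceurl,
                              destinationurl, createddate)
    SELECT filterdetail_seq.NEXTVAL, f.filterid, i.sourceurl,
           i.destinationurl, i.createddate
    FROM   interdata i
           JOIN filter f ON f.sessionid = i.sessionid;

    COMMIT;

Two plain INSERT ... SELECT statements replace the nested cursor loops, and each INTERDATA row is processed exactly once.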
