Create very big table

Hi,
The call_fact table contains about 300 million rows.
The exceptions table contains about 150 million rows.
Both tables have up-to-date statistics.
The machine has 8 CPUs.
The statement has already been running for 48 hours.
Can anyone suggest a faster way to do it?
create table abc parallel
as
select /*+ parallel(t,32) */ *
from STARQ.CALL_FACT t
where rowid NOT IN (select /*+ parallel(ex,32) */ row_id
from starq.exceptions ex );
The plan is:
Plan
CREATE TABLE STATEMENT  ALL_ROWS  Cost: 1,337,556,446,040
15 PX COORDINATOR
14   PX SEND QC (RANDOM)  PARALLEL_TO_SERIAL  SYS.:TQ30001  :Q3001  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832
13     LOAD AS SELECT  PARALLEL_COMBINED_WITH_PARENT  :Q3001
12       BUFFER SORT  PARALLEL_COMBINED_WITH_CHILD  :Q3001
11         PX RECEIVE  PARALLEL_COMBINED_WITH_PARENT  :Q3001  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832
10           PX SEND ROUND-ROBIN  PARALLEL_FROM_SERIAL  SYS.:TQ30000  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832
 9             FILTER
 4               PX COORDINATOR
 3                 PX SEND QC (RANDOM)  PARALLEL_TO_SERIAL  SYS.:TQ20000  :Q2000  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832
 2                   PX BLOCK ITERATOR  PARALLEL_COMBINED_WITH_CHILD  :Q2000  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832  Partition #: 10  Partitions accessed #1 - #46
 1                     TABLE ACCESS FULL TABLE  PARALLEL_COMBINED_WITH_PARENT  STARQ.CALL_FACT  :Q2000  Cost: 26,234  Bytes: 43,994,469,792  Cardinality: 282,015,832  Partition #: 10  Partitions accessed #1 - #46
 8               PX COORDINATOR
 7                 PX SEND QC (RANDOM)  PARALLEL_TO_SERIAL  SYS.:TQ10000  :Q1000  Cost: 4,743  Bytes: 10  Cardinality: 1
 6                   PX BLOCK ITERATOR  PARALLEL_COMBINED_WITH_CHILD  :Q1000  Cost: 4,743  Bytes: 10  Cardinality: 1
 5                     TABLE ACCESS FULL TABLE  PARALLEL_COMBINED_WITH_PARENT  STARQ.EXCEPTIONS  :Q1000  Cost: 4,743  Bytes: 10  Cardinality: 1

> When in doubt, I use exists. Here it is clear to me that exists will be faster
If the row_id column is declared not null, this is not true: exactly the same path is chosen as can be seen below.
select /* with primary key */ *
  from call_fact t
 where rowid not in
       ( select row_id
           from exceptions ex )
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1001      0.46       0.46          0      32467          0       15000
total     1003      0.46       0.47          0      32467          0       15000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 
Rows     Row Source Operation
  15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=600105 us)
  30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120050 us)
  15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=297574 us)(object id 64376)
select /* with primary key */ *
  from call_fact t
 where not exists
       ( select 'same rowid'
           from exceptions ex
          where ex.row_id = t.rowid )
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     1001      0.51       0.46          0      32467          0       15000
total     1003      0.51       0.47          0      32467          0       15000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 
Rows     Row Source Operation
  15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=585099 us)
  30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120048 us)
  15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=298198 us)(object id 64376)
********************************************************************************

Note that the tables, scaled down to 30,000 and 15,000 rows, are created like this:
SQL> create table call_fact (col1, col2)
  2  as
  3   select level
  4        , lpad('*',100,'*')
  5     from dual
  6  connect by level <= 30000
  7  /
Table created.
SQL> create table exceptions (row_id, col)
  2  as
  3  select rowid
  4       , lpad('*',100,'*')
  5    from call_fact
  6   where mod(col1,2) = 0
  7  /
Table created.
SQL> alter table exceptions add constraint ex_pk primary key (row_id)
  2  /
Table altered.
SQL> exec dbms_stats.gather_table_stats(user,'call_fact')
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.gather_table_stats(user,'exceptions',cascade=>true)
PL/SQL procedure successfully completed.

Without declaring row_id not null, I've tested exists to be definitely much faster, as the not in variant cannot do an antijoin anymore.
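Applied to the original statement, the not exists variant would look like this (a sketch only, reusing the hints and object names from the post above):

create table abc parallel
as
select /*+ parallel(t,32) */ *
  from starq.call_fact t
 where not exists
       ( select null
           from starq.exceptions ex
          where ex.row_id = t.rowid );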
Regards,
Rob.

Similar Messages

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at a time while all the other FKs are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (insert, update, delete) as well as query (select) operations, all performed daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all these 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for master table ID value, and TABLE_NAME for master table name) and create only one index on these 2 columns.
    2- partition the table using its YEAR column, keep all FK columns but drop all indexes on these columns.
    3- partition the table using its YEAR column, and drop all FK columns, create (ID,TABLE_NAME) columns, and create index on (TABLE_NAME,YEAR) columns.
    Which approach is more efficient?
    Do I have to take "master-detail" relations in mind when building Forms on this table?
    Are there any other suggestions?
    I am using Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation and I will try to answer your questions, but please note that I am a developer first and I am new to Oracle database administration, so please forgive me if I make any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No I did not. And if I must do it, must I do it for all database tables or only for this big table?
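    For the big table alone, that would be something like this (a sketch; BIG_TABLE stands in for your table name, and cascade => true also gathers statistics on its indexes):
    exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade => true)
    A schema-wide run would be exec dbms_stats.gather_schema_stats(user), but starting with the one big table is the cheaper first step.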
    Q:Actually tracing the session with 10046 level 8 will give some clear idea on where your query is waiting.
    A: Actually I do not know what you mean by "10046 level 8".
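    For reference, a 10046 trace at level 8 is an extended SQL trace that also records wait events. It is typically switched on for your own session like this, before running the slow query:
    alter session set events '10046 trace name context forever, level 8';
    -- run the slow statement, then:
    alter session set events '10046 trace name context off';
    The trace file is written to the directory pointed to by user_dump_dest and can be formatted with tkprof.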
    Q: what OS and what kind of server (hardware) are you using
    A: I am using Windows2000 Server operating system, my server has 2 Intel XEON 500MHz + 2.5GB RAM + 4 * 36GB Hard Disks(on RAID 5 controller).
    Q: how many concurrent user do you have an how many transactions per hour
    A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1,000 transactions per hour.
    Q: How fast should your queries be executed
    A: I want the queries to be executed in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is highly used, there is a very good chance for 2 or more transactions to exist at the same time, one performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
    Q:please show use the explain plan of these queries
    A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big question because I cannot collect all the kinds of queries that have been written against this table (some of them exist in server packages, and others are performed by Forms or Reports).
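    For the record, an explain plan can be produced on 8.1.7 like this (a sketch; big_table and the query are placeholders, and plan_table must first have been created with ?/rdbms/admin/utlxplan.sql):
    explain plan for
    select count(*) from big_table;  -- substitute the slow query here
    @?/rdbms/admin/utlxpls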

  • Improve the performance in stored procedure using sql server 2008 - esp where clause in very big table - Urgent

    Hi,
    I am looking for input on tuning a stored procedure in SQL Server 2008. I am new to performance tuning in SQL, PL/SQL, and Oracle. I am currently facing an issue in a stored procedure: I need to increase performance through code optimization and filtering the records with a where clause on a larger table. The stored procedure generates an audit report which is accessed by approx. 10 admin users, typically 2-3 times a day each.
    It has a CTE (common table expression) which is referenced 2 times within the SP. This CTE is very big and fetches records from several tables without a where clause, so many records are fetched from the DB and then processed. The stored procedure runs on a pre-prod server (a virtual server with 6 GB of memory), while the same proc ran well (40 sec) on the prod server (a physical server with 64 GB of RAM). The execution time in pre-prod is 1 min 9 sec, which needs to be reduced to about 10 sec; it also varies from run to run, sometimes 50 sec and sometimes 1 min 9 sec.
    Please advise on the best option/practice for filtering the records with a where clause, and which tool to use to tune the procedure (execution plan, SQL Profiler?). I am using Toad for SQL Server 5.7. I can see an execution plan tab while running the SP, but when I run it, it throws an error. Please help and provide inputs.
    Thanks,
    Viji

    You've asked a SQL Server question in an Oracle forum.  I'm expecting that this will get locked momentarily when a moderator drops by.
    Microsoft has its own forums for SQL Server; you'll have more luck over there.  When you do go there, however, you'll almost certainly get more help if you can pare down the problem (or at least better explain what your code is doing).  Very few people want to read hundreds of lines of code, guess what it's supposed to do, guess what is slow, and then guess at how to improve things.  Posting query plans, the results of profiling, cutting out any code that is unnecessary to the performance problem, etc. will get you much better answers.
    Justin

  • Creating Very Big images from Flex

    Hi,
    I am trying to create a small image manipulator, in which I have some image in the background and then add other images, text, etc. to it, capture the bitmapdata of the entire canvas, and save the image on the server side. Now my issue is that I want to convert those images (created from Flex bitmapdata via byteArray) to very large images, i.e. with dimensions measured in feet. Can anybody suggest how to achieve this? When I scale the bitmapdata, the quality degrades rapidly, and there is also a limitation on the size of bitmapdata, so I am far from my actual requirement.
    Thnx in Advance,
    Shardul Singh

    This code works:
    import java.awt.Graphics;
    import java.awt.Image;
    import javax.swing.JPanel;

    public class ImagePanel extends JPanel {
        Image bufferedImage;
        Image grass; // grass image is loaded up here
        int increm = 32; // tile size; the original post does not show its value

        public void paint(Graphics g) {
            super.paintComponent(g);
            bufferedImage = createImage(15 * increm, 14 * increm);
            Graphics bufferedGraphics = bufferedImage.getGraphics();
            bufferedGraphics.drawImage(grass, 0, 0, this);
            bufferedGraphics.drawImage(grass, 32, 32, this);
            bufferedGraphics.drawImage(grass, 64, 64, this);
            g.drawImage(bufferedImage, 0, 0, this);
        }
    }
    However this code does not:
    import java.awt.Graphics;
    import java.awt.Image;
    import javax.swing.JPanel;

    public class ImagePanel extends JPanel {
        Image bufferedImage;
        Image grass; // grass image is loaded up here
        int increm = 32; // tile size; the original post does not show its value

        public void setUpImage() {
            bufferedImage = createImage(15 * increm, 14 * increm);
            Graphics bufferedGraphics = bufferedImage.getGraphics();
            bufferedGraphics.drawImage(grass, 0, 0, this);
            bufferedGraphics.drawImage(grass, 32, 32, this);
            bufferedGraphics.drawImage(grass, 64, 64, this);
        }

        public void paint(Graphics g) {
            setUpImage();
            super.paintComponent(g);
            g.drawImage(bufferedImage, 0, 0, this);
        }
    }
    I want to have a method that sets the image up so I can move it around, add to it, take away from it, etc. that is what I want setUpImage() to do. I can't figure out why it works if I do it the first way and not the second way.

  • Very big table to delete :)

    Hi all!
    I have a tablespace with 6 datafiles of 4 GB each. Lately that tablespace has been growing too fast, so we decided to delete some data from its largest table.
    That table has around 10 million records, and I run a query to delete by date:
    delete from TABLE_NAME where dt_start < to_date('09/07/16', 'YY/MM/DD');
    i.e. all records older than 3 months.
    After executing that query I see it in "Sessions" for about 2-3 hours and then it disappears, but the query still has executing status.
    What happened to this query? Why did it disappear?

    Is there any chance you could partition the table by date so that you could simply drop the older partitions?
    What fraction of the data are you trying to delete? If you are deleting a substantial fraction of the data in the table, it is likely more efficient to write the data you want to keep to a different table, and then either truncate the existing table and move the saved data back or drop the existing table and rename the table you saved the data into.
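    For the second option, a minimal sketch of the keep-and-swap approach (names follow the delete statement above; table_name_keep is a hypothetical scratch table):
    create table table_name_keep as
      select * from table_name
       where dt_start >= to_date('09/07/16', 'YY/MM/DD');
    truncate table table_name;
    insert /*+ append */ into table_name
      select * from table_name_keep;
    commit;
    drop table table_name_keep;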
    Justin

  • Print very big JTable

    Hi all,
    I have to print a very big table with 100,000 rows and 6 columns. I have put System.gc() at the end of the print method, but when I print the table the print job becomes too big (more or less 700 kB per page, and there are 1,048 pages).
    Is it possible to make a PDF of my table, and is that solution better than the first?
    When I make the preview, it takes a lot of time because of the size of the table: first I have to create the table and then preview it.
    Is there a way to reduce the time lost to table generation?
    N.B.: the data in the table is always the same.
    Thanks a lot!!!

    > Is there a way to reduce the time lost to table generation?
    Write a table model extending AbstractTableModel. The model is queried for each cell, and usually all the columns of one row are retrieved before getting the next row. You may cache one row in the model: not the whole table!

  • How do I open a VERY big file?

    I hope someone can help.
    I did some testing using a LeCroy LT342 in segment mode. Using the LabVIEW driver I downloaded the data over GPIB and saved it to a spreadsheet file. Unfortunately this created very big files (ranging from 200 MB to 600 MB). I now need to process them, but LabVIEW doesn't like them. I would be very happy to split the files into an individual file for each row (I can do this quite easily), but LabVIEW just sits there when I try to open the file.
    I don't know enough about computers and memory (my spec is a 1.8 GHz Pentium 4, 384 MB RAM) to figure out whether it will do the job if I just leave it for long enough.
    Has anyone any experience or help they could offer?
    Thanks,
    Phil

    When you open (and read) a file, you usually move it from your hard disk (permanent storage) to RAM. This lets you manipulate it at high speed using fast RAM; if you don't have enough RAM to read the whole file, you will be forced into virtual memory (swap space on the HD used as "virtual" RAM), which is very slow. Since you only have 384 MB of RAM and want to process huge files (200 MB-600 MB), you could easily and inexpensively upgrade to 1 GB of RAM and see large speed increases. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete. This is more programming but will let you use much less RAM at any instant.
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA

  • Excution of a PL/SQL procedure with CURSOR for big tables

    I have prepared a procedure that uses a CURSOR to run a complex query against tables with a big number of records, something like 900,000. The execution failed with ORA-01652: unable to extend temp segment by 64 in tablespace TEMP.
    Any suggestions?

    This brings us to the following question: how could I calculate the bytes required by a cursor? It is a selection of certain fields of very big tables. Let's say the fields are NUMBER(4), NUMBER(8) and CHAR(2). The fields are in 2 relational tables of 900,000 rows each. What size is required for a procedure like this?
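    As a rough back-of-the-envelope estimate (a sketch under assumptions, not an exact formula): Oracle stores a NUMBER(4) in about 3 bytes, a NUMBER(8) in about 5 bytes, and a CHAR(2) in 2 bytes, so one such row is on the order of 10-15 bytes plus per-row overhead, and 900,000 rows per table amounts to only a few tens of megabytes of raw data. The temp space a query actually needs is driven by the sort and hash work areas of its execution plan, which can be far larger than the data itself.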
    Your help is really appreciated.

  • What is the easiest way to create and manage very big forms?

    I need to create a form that will contain a few hundred questions. Could you please give me some advice on the easiest way to do that? For example, is it easier to create everything in Word (since it is easier to manage) and then create a form based on that?
    My concern is that with a very big form containing different kinds of questions and many scripts, managing it during work will be slow and difficult; for example, adding a question in the middle of the form would require moving half of the questions down, which could smash the layout, etc.
    What is the best practice for that?
    Thanks in advance

    Try using tables and rows for this kind of form. These forms will have the same look throughout, with a question and an answer section.
    In the future, if you want to add a new section, you can simply add rows in between.
    Thanks
    Srini

  • Optimize delete in a very big database table

    Hi,
    To delete entries from a database table I use the statement:
    Delete from <table> where <zone> = 'X'.
    The delete takes seven hours (the table is very big and <zone> is not indexed).
    How can I optimize it to reduce the delete time?
    Thanks in advance for your response.
    Regards.

    What is the size of the table, and how many rows are you going to delete?
    I would recommend deleting only up to 5,000 or 10,000 records in one step, repeating until nothing is left, for example:
    do 100 times.
      select * from <table>
             into table itab
             up to 10000 rows
             where <zone> = 'X'.
      if itab is initial.
        exit.
      endif.
      delete <table> from table itab.
      commit work.
    enddo.
    If this is still too slow, then you should create a secondary index on <zone>.
    You can drop the index after the deletion is finished.
    Siegfried

  • No checkboxes and very big comboboxen when creating own L&F

    Hello,
    I wanted to create my own look and feel by extending BasicLookAndFeel. My class:
    import javax.swing.plaf.basic.*;
    import javax.swing.*;

    public class PlanonLookAndFeel extends BasicLookAndFeel {

        //~ Constructors ................................................

        /** @return The name for this look-and-feel. */
        public String getName() {
            return "Planon";
        }

        /**
         * We are not a simple extension of an existing
         * look-and-feel, so provide our own ID.
         * <p>
         * @return The ID for this look-and-feel.
         */
        public String getID() {
            return "Planon";
        }

        /** @return A short description of this look-and-feel. */
        public String getDescription() {
            return "The Planon Look and Feel";
        }

        /**
         * This is not a native look and feel on any platform.
         * <p>
         * @return false, this isn't native on any platform.
         */
        public boolean isNativeLookAndFeel() {
            return false;
        }

        /**
         * This look and feel is supported on all platforms.
         * <p>
         * @return true, this L&F is supported on all platforms.
         */
        public boolean isSupportedLookAndFeel() {
            return true;
        }

        protected void initSystemColorDefaults(UIDefaults table) {
            String[] defaultSystemColors = {
                /* Color of the desktop background */
                "desktop", "#005C5C",
                /* Color for captions (title bars) when they are active. */
                "activeCaption", "#000080",
                /* Text color for text in captions (title bars). */
                "activeCaptionText", "#FFFFFF",
                /* Border color for caption (title bar) window borders. */
                "activeCaptionBorder", "#D4D0C8",
                /* Color for captions (title bars) when not active. */
                "inactiveCaption", "#808080",
                /* Text color for text in inactive captions (title bars). */
                "inactiveCaptionText", "#D4D0C8",
                /* Border color for inactive caption (title bar) window borders. */
                "inactiveCaptionBorder", "#D4D0C8",
                /* Default color for the interior of windows */
                "window", "#FFFFFF",
                "windowBorder", "#000000",
                "windowText", "#000000",
                /* Background color for menus */
                "menu", "#D4D0C8",
                /* Text color for menus */
                "menuText", "#000000",
                /* Text background color */
                "text", "#D4D0C8",
                /* Text foreground color */
                "textText", "#000000",
                /* Text background color when selected */
                "textHighlight", "#0A246A",
                /* Text color when selected */
                "textHighlightText", "#FFFFFF",
                /* Text color when disabled */
                "textInactiveText", "#808080",
                /* Default color for controls (buttons, sliders, etc) */
                "control", "#D4D0C8",
                /* Default color for text in controls */
                "controlText", "#000000",
                "controlHighlight", "#D4D0C8",
                /* Highlight color for controls */
                "controlLtHighlight", "#FFFFFF",
                /* Shadow color for controls */
                "controlShadow", "#808080",
                /* Dark shadow color for controls */
                "controlDkShadow", "#000000",
                /* Scrollbar background (usually the "track") */
                "scrollbar", "#E0E0E0",
                "info", "#FFFFE1",
                "infoText", "#000000",
                /* color for planon application */
                "planonMainColor", "#6787BA"
            };
            loadSystemColors(table, defaultSystemColors, isNativeLookAndFeel());
        }
    }
    This works OK, but when I try to display a combobox, it looks really ugly (a very big arrow and no borders). The checkbox isn't even displayed. Does anyone have a clue about this?
    Thanks in advance
    Hugo Hendriks

    Are there docs/tutorials which describe how to create your own look and feel?

  • How does table SMW3_BDOC become very big?

    Hi,
    The table SMW3_BDOC, which stores BDocs in my system, has become very big, with several million records. Some BDocs in this table were sent several months ago; I find it very strange that they were never processed.
    If I want to clean this table, will any inconsistency occur in the system? And how can I clean this table of those very old BDocs?
    Thanks a lot for your help!

    Hi Long,
    I faced the same issue recently on our production system; it created a huge performance problem and completely blocked the system with timeout errors.
    I was able to clean it up by running the report SMO8_FLOW_REORG in SE38.
    If you are very sure about cleaning up, first delete all the unnecessary BDocs and then run this report.
    At the same time, check whether any CSA* queue is stuck in the CRM inbound queue SMQ2. If yes, select it, manually unlock it, activate it, and then refresh. Also check for any other unnecessary queues stuck there.
    Hope this could help you.
    regards,
    kalyan

  • VERY big files (!!) created by QuarkXPress 7

    Hi there!
    I have a "problem" with QuarkXPress 7.3 and I don't know if this is the right forum to ask...
    Anyway, I have created a document, about 750 pages, with 1,000 pictures placed in it. I have divided it into 3 layouts.
    When I save the file, the file created is 1.20 GB!!!
    Isn't that a very big file for QuarkXPress?
    I tried to make a copy of that file and delete 2 of the 3 layouts, and the project's file size is still the same!!
    (Last year I created (almost) the same document, and as I check that document now, its size is about 280 MB!!)
    The problem is that I have "autosave" on (every 5 or 10 minutes) and it takes some time to save!
    Can anyone help me with that??
    Why does Quark make SO big a file???
    Thank you all for your time!

    This is really a Quark issue and better asked in their forum areas. However, have you tried to do a Save As and see how big the resultant document is?

  • Can't open an IDML file: ID doc created in CC (10), one big table, shuts down CS6 on open

    Hello! I can't open an IDML file. The ID file was created in CC (10). It is a 100-page (50 spreads) doc that is one big table, saved as an IDML file. I have CS6, and when I try to open it, it shuts down ID almost instantly. The file was created on a Mac and I am trying to open it on a Mac. Any/all advice is greatly appreciated as I am up against a deadline with this unopened file! Many thanks in advance, Diane

    There's a good chance the file is corrupt. Ask whoever sent it to you to verify it opens on their end.

  • SAPSR3DB   XMII_TRANSACTION table LOG column is very big

    Hi,
    We have a problem about MII server.
    The LOG column of the SAPSR3DB XMII_TRANSACTION table contains very big data.
    How can we decrease the size of the data in this column?
    Regards.

    In 12.1 it's XMII Administration Menu (Menu.jsp) --> System Management --> DefaultTransactionPersistance.
    In production I recommend setting this to 'ONERROR'.
    There is also the TransactionPersistenceLifetime which determines how long entries will stay in the log table.
    We set this to 8 hours.
