Very Big Table (36 Indexes, 1000000 Records)

Hi
I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these 36 FK columns has a value in any given row while all the others are NULL. All these FK columns are of type NUMBER(20,0).
I am facing a performance problem which I want to resolve, taking into consideration that this table is used for DML (Insert, Update, Delete) as well as Query (Select) operations, and all these operations and queries run daily. I want to improve this table's performance, and I am considering these scenarios:
1- Replace all 36 FK columns with 2 columns (ID, TABLE_NAME), where ID holds the master table's ID value and TABLE_NAME holds the master table's name, and create only one index on these 2 columns.
2- Partition the table using its YEAR column, keep all FK columns, but drop all indexes on these columns.
3- Partition the table using its YEAR column, drop all FK columns, create (ID, TABLE_NAME) columns, and create an index on the (TABLE_NAME, YEAR) columns.
Which approach is more efficient?
Do I have to keep "master-detail" relations in mind when building Forms on this table?
Are there any other suggestions?
I am using Oracle 8.1.7 database.
Please Help.
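For reference, here is a minimal sketch of the range-partitioning idea in scenarios 2 and 3, using syntax available in Oracle 8.1.7; the table, column and partition names are placeholders rather than the real schema.
CREATE TABLE big_table_part (
  id          NUMBER(20,0),
  table_name  VARCHAR2(30),
  year        NUMBER(4)
  -- ... the remaining columns ...
)
PARTITION BY RANGE (year) (
  PARTITION p_2001 VALUES LESS THAN (2002),
  PARTITION p_2002 VALUES LESS THAN (2003),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);
-- Scenario 3: one local index instead of 36 separate FK indexes
CREATE INDEX big_table_part_ix ON big_table_part (table_name, year) LOCAL;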

Hi everybody
I would like to thank you for your cooperation, and I will try to answer your questions. Please note that I am primarily a developer and new to Oracle database administration, so please forgive me if I make any mistakes.
Q: Have you gathered statistics on the tables in your database?
A: No, I did not. If I must do it, should I do it for all database tables or only for this big table?
Q: Actually, tracing the session with event 10046 at level 8 will give a clearer idea of where your query is waiting.
A: Actually I do not know what you mean by "10046 level 8".
Q: What OS and what kind of server (hardware) are you using?
A: I am using the Windows 2000 Server operating system; my server has 2 Intel XEON 500MHz CPUs, 2.5GB RAM and 4 x 36GB hard disks (on a RAID 5 controller).
Q: How many concurrent users do you have, and how many transactions per hour?
A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1000 transactions per hour.
Q: How fast should your queries be executed?
A: I want the queries to execute in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is heavily used, there is a very good chance that 2 or more transactions exist at the same time, one performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
Q: Please show us the explain plan of these queries.
A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big request because I cannot collect every kind of query that has been written against this table (some of them exist in server packages, and others are issued by Forms or Reports).
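Since the thread mentions gathering statistics, 10046 tracing and explain plans without showing how, here is a minimal sketch of those three diagnostics as they would look on Oracle 8.1.7; the schema and table names (MYSCHEMA, BIG_TABLE) are placeholders, not from the original post.
-- 1. Gather optimizer statistics for the big table (or use GATHER_SCHEMA_STATS for the whole schema)
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'BIG_TABLE', cascade => TRUE);
END;
/
-- 2. Trace your own session with event 10046 at level 8 (SQL plus wait events),
--    then format the trace file from user_dump_dest with tkprof
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the slow statements here ...
ALTER SESSION SET EVENTS '10046 trace name context off';
-- 3. Explain plan for a single query (create PLAN_TABLE with @?/rdbms/admin/utlxplan.sql first if needed;
--    on 8i you read PLAN_TABLE directly or run @?/rdbms/admin/utlxpls.sql)
EXPLAIN PLAN FOR
  SELECT ... ;  -- the query to analyse
SELECT LPAD(' ', 2 * level) || operation || ' ' || options || ' ' || object_name AS plan
  FROM plan_table
 START WITH id = 0
CONNECT BY PRIOR id = parent_id;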

Similar Messages

  • Improve the performance in stored procedure using sql server 2008 - esp where clause in very big table - Urgent

    Hi,
    I am looking for input on tuning a stored procedure using SQL Server 2008. I am new to performance tuning in SQL, PL/SQL and Oracle. I am currently facing an issue in a stored procedure: I need to improve its performance by code optimization / filtering the records with a WHERE clause on the larger table. The requirement is that the stored procedure generates an audit report which is accessed by approximately 10 admin users, typically 2-3 times a day by each admin user.
    It has a CTE (common table expression) which is referenced twice within the SP. This CTE is very big and fetches records from several tables without a WHERE clause, which causes a large number of records to be fetched from the DB and then processed. This stored procedure runs on a pre-production server which has 6GB of memory on a virtual server; the same procedure ran fine on the production server, a physical server with 64GB of RAM (40 seconds). The execution time in pre-production is 1 minute 9 seconds, which needs to be reduced to about 10 seconds or so. The execution time also varies: sometimes it is 50 seconds and sometimes 1 minute 9 seconds.
    Please advise on the best option/practice for using a WHERE clause to filter the records, and which tool to use to tune the procedure (execution plan, SQL Profiler?). I am using Toad for SQL Server 5.7. I see an execution plan tab available while running the SP, but when I run it, it throws an error. Please help and provide input.
    Thanks,
    Viji

    You've asked a SQL Server question in an Oracle forum.  I'm expecting that this will get locked momentarily when a moderator drops by.
    Microsoft has its own forums for SQL Server; you'll have more luck over there.  When you do go there, however, you'll almost certainly get more help if you can pare down the problem (or at least better explain what your code is doing).  Very few people want to read hundreds of lines of code, guess what it's supposed to do, guess what is slow, and then guess at how to improve things.  Posting query plans, the results of profiling, cutting out any code that is unnecessary to the performance problem, etc. will get you much better answers.
    Justin

  • Load big table (almost 1 billion records)

    Hello everybody,
    I have a little problem: I'm working in a PRE-PRODUCTION environment (banking sector), and I have created a table with the daily situation of all the accounts in the bank. I had to construct this table for the first 8 months of 2010 (the table has existed since 09.2010).
    In our DBs there is a protocol (which I believe is not unique to our case) that re-deploys and overwrites the PRE-PRODUCTION environment (including the DBs) with the PRODUCTION environment. Weekly! That means that everything I have done in this PRE-PRODUCTION DB (packages, tables, etc.) will be overwritten over the weekend.
    The package that loads this table is done, but its execution time is huge (almost 24 hours).
    This table is used by other reporting applications (still to be deployed), which will take more than 2-3 weeks to develop. That means that (after I have created and loaded this table) I have to export it and import it at the beginning of every week in order to use it in the other applications. The problem is that, due to the HUGE number of records in this table, the import time is almost as long as the execution time of the loading procedure. So at the moment I am in this situation: at the beginning of every week (until the other application is developed, tested and approved) I have to:
    1. load the table by executing the package
    or
    2. import the table
    Both variants take the same time: almost 24 hours.
    Are there other possibilities that could help me import or load this table faster? Something like pipelined functions? (A hedged loading sketch follows the sample table and data below.)

    Create table Daily_Movement -- (contains many records per bank/account/day)
    (data_contabile date,
    bank_ID varchar2(5),
    account_ID number,
    acc_DO_balance number(20,3),
    acc_EU_balance number(20,3),
    currency_code varchar2(3));
    -- account 11
    -- 4 january: total daily movement = 103
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 11, 50, 50, 'EUR');
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 11, 25, 25, 'EUR');
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 11, 28, 28, 'EUR');
    -- 5 january: total daily movement = 33
    insert into Daily_Movement values (to_date('20100105','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100105','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    -- 6 january: total daily movement = 44
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 11, -44, -44, 'EUR');
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    -- 7 january: total daily movement = 231
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    -- 8 january: total daily movement = 10
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 11, -100, -100, 'EUR');
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 11, -11, -11, 'EUR');
    -- 11 january: total daily movement = 33
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 11, -33, -33, 'EUR');
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    -- 12 january: total daily movement = 88
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 11, -55, -55, 'EUR');
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    -- 13 january: total daily movement = 89
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 11, -99, -99, 'EUR');
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    -- 14 january: total daily movement = 22
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 11, -22, -22, 'EUR');
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    -- 15 january: total daily movement = -33
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 11, -44, -44, 'EUR');
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 11, -55, -55, 'EUR');
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    -- 18 january: total daily movement = -110
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 11, -88, -88, 'EUR');
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 11, -99, -99, 'EUR');
    -- 19 january: total daily movement = 111
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 11, -11, -11, 'EUR');
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    -- 20 january: total daily movement = 132
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    -- 21 january: total daily movement = 77
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 11, -77, -77, 'EUR');
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    -- 22 january: total daily movement = 210
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    -- 25 january: total daily movement = 55
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 11, -22, -22, 'EUR');
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    -- 26 january: total daily movement = 66
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 11, -66, -66, 'EUR');
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    -- 27 january: total daily movement = 87
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 11, -100, -100, 'EUR');
    -- 28 january: total daily movement = 44
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 11, -11, -11, 'EUR');
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    -- 29 january: total daily movement = 55
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 11, -55, -55, 'EUR');
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    -- total january 1347
    -- 01 february: total daily movement = 264
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    -- 02 february: total daily movement = 111
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 11, -11, -11, 'EUR');
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    -- 03 february: total daily movement = -66
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 11, -44, -44, 'EUR');
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 11, -55, -55, 'EUR');
    -- 04 february: total daily movement = 99
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 11, -66, -66, 'EUR');
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    -- 05 february: total daily movement = 10
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 11, -100, -100, 'EUR');
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    -- 08 february: total daily movement = 99
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    -- 09 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 11, -66, -66, 'EUR');
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    -- 10 february: total daily movement = 287
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    -- 11 february: total daily movement = 22
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 11, -22, -22, 'EUR');
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    -- 12 february: total daily movement = 77
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 11, -44, -44, 'EUR');
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    -- 15 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 11, -99, -99, 'EUR');
    -- 16 february: total daily movement = 133
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    -- 17 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 11, -33, -33, 'EUR');
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 11, 55, 55, 'EUR');
    -- 18 february: total daily movement = 77
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 11, -77, -77, 'EUR');
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    -- 19 february: total daily movement = 210
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 11, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 11, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    -- 22 february: total daily movement = 99
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 11, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    -- 23 february: total daily movement = -44
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 11, -55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 11, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 11, 77, 77, 'EUR');
    -- 24 february: total daily movement = -111
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 11, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 11, -99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 11, -100, 100, 'EUR');
    -- 25 february: total daily movement = 22
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 11, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 11, -22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 11, 33, 33, 'EUR');
    -- 26 february: total daily movement = 55
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 11, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 11, -55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 11, 66, 66, 'EUR');
    -- total february 1542
    -- account 12
    -- 4 january: total daily movement = 88
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 12, -88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100104','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    -- 5 january: total daily movement = 89
    insert into Daily_Movement values (to_date('20100105','yyyymmdd'), 'Bank1', 12, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100105','yyyymmdd'), 'Bank1', 12, -11, 11, 'EUR');
    -- 6 january: total daily movement = 99
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100106','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    -- 7 january: total daily movement = -88
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100107','yyyymmdd'), 'Bank1', 12, -77, 77, 'EUR');
    -- 8 january: total daily movement = 87
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100108','yyyymmdd'), 'Bank1', 12, -100, 100, 'EUR');
    -- 11 january: total daily movement = 66
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 12, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100111','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    -- 12 january: total daily movement = 55
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 12, -55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100112','yyyymmdd'), 'Bank1', 12, 66, 222, 'EUR');
    -- 13 january: total daily movement = 88
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 12, 77, 222, 'EUR');
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 12, -88, 222, 'EUR');
    insert into Daily_Movement values (to_date('20100113','yyyymmdd'), 'Bank1', 12, 99, 222, 'EUR');
    -- 14 january: total daily movement = 67
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 12, 100, 222, 'EUR');
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 12, -11, 222, 'EUR');
    insert into Daily_Movement values (to_date('20100114','yyyymmdd'), 'Bank1', 12, -22, 22, 'EUR');
    -- 15 january: total daily movement = 132
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100115','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    -- 18 january: total daily movement = 99
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100118','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    -- 19 january: total daily movement = -188
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 12, -99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 12, -100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100119','yyyymmdd'), 'Bank1', 12, 11, 11, 'EUR');
    -- 20 january: total daily movement = 11
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100120','yyyymmdd'), 'Bank1', 12, -44, 44, 'EUR');
    -- 21 january: total daily movement = 198
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 12, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100121','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    -- 22 january: total daily movement = 111
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 12, -88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100122','yyyymmdd'), 'Bank1', 12, 100, 100, 'EUR');
    -- 25 january: total daily movement = -22
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 12, -11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100125','yyyymmdd'), 'Bank1', 12, -33, 33, 'EUR');
    -- 26 january: total daily movement = 33
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100126','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    -- 27 january: total daily movement = 264
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100127','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    -- 28 january: total daily movement = -111
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 12, -100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 12, -11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100128','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    -- 29 january: total daily movement = 132
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100129','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    -- total january: 1210
    -- 01 february: total daily movement = 77
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 12, 66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 12, -77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100201','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    -- 02 february: total daily movement = -12
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 12, -100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100202','yyyymmdd'), 'Bank1', 12, -11, 11, 'EUR');
    -- 03 february: total daily movement = 33
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 12, -33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100203','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    -- 04 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100204','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    -- 05 february: total daily movement = 111
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 12, -88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100205','yyyymmdd'), 'Bank1', 12, 100, 100, 'EUR');
    -- 08 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 12, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100208','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    -- 09 february: total daily movement = -33
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 12, -44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 12, -55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100209','yyyymmdd'), 'Bank1', 12, 66, 66, 'EUR');
    -- 10 february: total daily movement = 264
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100210','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    -- 11 february: total daily movement = -67
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 12, -100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 12, 11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100211','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    -- 12 february: total daily movement = -66
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 12, -44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100212','yyyymmdd'), 'Bank1', 12, -55, 55, 'EUR');
    -- 15 february: total daily movement = -77
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100215','yyyymmdd'), 'Bank1', 12, -88, 88, 'EUR');
    -- 16 february: total daily movement = 210
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 12, 100, 100, 'EUR');
    insert into Daily_Movement values (to_date('20100216','yyyymmdd'), 'Bank1', 12, 11, 11, 'EUR');
    -- 17 february: total daily movement = 99
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 12, 22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100217','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    -- 18 february: total daily movement = 66
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 12, -66, 66, 'EUR');
    insert into Daily_Movement values (to_date('20100218','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    -- 19 february: total daily movement = 287
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 12, 88, 88, 'EUR');
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 12, 99, 99, 'EUR');
    insert into Daily_Movement values (to_date('20100219','yyyymmdd'), 'Bank1', 12, 100, 100, 'EUR');
    -- 22 february: total daily movement = -66
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 12, -11, 11, 'EUR');
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 12, -22, 22, 'EUR');
    insert into Daily_Movement values (to_date('20100222','yyyymmdd'), 'Bank1', 12, -33, 33, 'EUR');
    -- 23 february: total daily movement = 165
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 12, 44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    insert into Daily_Movement values (to_date('20100223','yyyymmdd'), 'Bank1', 12, 66, 66, 'EUR');
    -- 24 february: total daily movement = -110
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 12, 77, 77, 'EUR');
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 12, -88, -88, 'EUR');
    insert into Daily_Movement values (to_date('20100224','yyyymmdd'), 'Bank1', 12, -99, -99, 'EUR');
    -- 25 february: total daily movement = -133
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 12, -100, -100, 'EUR');
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 12, -11, -11, 'EUR');
    insert into Daily_Movement values (to_date('20100225','yyyymmdd'), 'Bank1', 12, -22, -22, 'EUR');
    -- 26 february: total daily movement = 44
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 12, 33, 33, 'EUR');
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 12, -44, 44, 'EUR');
    insert into Daily_Movement values (to_date('20100226','yyyymmdd'), 'Bank1', 12, 55, 55, 'EUR');
    -- total february: 924
    commit;
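    No answer appears in this thread, but a common alternative to a row-by-row load or a plain import is a direct-path, parallel, NOLOGGING insert. A minimal sketch, using the sample Daily_Movement table as a stand-in and assuming the package's logic can be expressed as one query (represented by the hypothetical view daily_situation_src):
    -- Reload the saved/derived rows in one direct-path pass
    ALTER TABLE daily_movement NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO daily_movement t
    SELECT /*+ PARALLEL(s, 4) */ * FROM daily_situation_src s;
    COMMIT;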

  • Very big table to delete :)

    Hi all!
    I have a tablespace with 6 datafiles, each 4GB. Lately that tablespace has been growing too fast, so we decided to delete some data from its largest table.
    That table has around 10 million records, and I run a query to delete by date:
    delete from TABLE_NAME where dt_start < to_date('09/07/16', 'YY/MM/DD');
    that is, all records which are older than 3 months.
    After executing that query I see it in "Sessions" for about 2-3 hours and then it disappears, but the query still shows an executing status.
    What happened to this query? Why did it disappear?

    Is there any chance you could partition the table by date so that you could simply drop the older partitions?
    What fraction of the data are you trying to delete? If you are deleting a substantial fraction of the data in the table, it is likely more efficient to write the data you want to keep to a different table, and then either truncate the existing table and move the saved data back or drop the existing table and rename the table you saved the data into.
    Justin
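    A minimal sketch of the keep-and-swap approach described above, reusing the placeholder names from the question (the NOLOGGING option and the need to recreate indexes, constraints, grants and triggers afterwards are assumptions to verify):
    -- 1. Save only the rows you want to keep
    CREATE TABLE table_name_keep NOLOGGING AS
    SELECT * FROM table_name
     WHERE dt_start >= TO_DATE('09/07/16', 'YY/MM/DD');
    -- 2. Swap it in, then rebuild indexes, constraints, grants and triggers on the new table
    DROP TABLE table_name;
    RENAME table_name_keep TO table_name;
    -- Alternative: TRUNCATE TABLE table_name and re-insert from table_name_keep with /*+ APPEND */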

  • Create very big table

    Hi ,
    The call_fact table contains about 300 million rows.
    The exceptions table contains about 150 million rows.
    Both tables have up-to-date statistics.
    The machine has 8 CPUs.
    The statement has already been running for 48 hours.
    Can anyone suggest a faster way to do it?
    create table abc parallel
    as
    select /*+ parallel(t,32) */ *
    from STARQ.CALL_FACT t
    where rowid NOT IN (select /*+ parallel(ex,32) */ row_id
    from starq.exceptions ex );
    The plan is:
    Plan
    CREATE TABLE STATEMENT ALL_ROWS Cost: 1,337,556,446,040
    15 PX COORDINATOR
    14 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ30001 :Q3001 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    13 LOAD AS SELECT PARALLEL_COMBINED_WITH_PARENT :Q3001
    12 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q3001
    11 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q3001 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    10 PX SEND ROUND-ROBIN PARALLEL_FROM_SERIAL SYS.:TQ30000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    9 FILTER
    4 PX COORDINATOR
    3 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ20000 :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832
    2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832 Partition #: 10 Partitions accessed #1 - #46
    1 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STARQ.CALL_FACT :Q2000 Cost: 26,234 Bytes: 43,994,469,792 Cardinality: 282,015,832 Partition #: 10 Partitions accessed #1 - #46
    8 PX COORDINATOR
    7 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10000 :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1
    6 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1
    5 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT STARQ.EXCEPTIONS :Q1000 Cost: 4,743 Bytes: 10 Cardinality: 1

    > When in doubt, I use exists. Here it is clear to me that exists will be faster
    If the row_id column is declared not null, this is not true: exactly the same path is chosen as can be seen below.
    select /* with primary key */ *
      from call_fact t
    where rowid not in
           ( select row_id
               from exceptions ex )
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     1001      0.46       0.46          0      32467          0       15000
    total     1003      0.46       0.47          0      32467          0       15000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 61 
    Rows     Row Source Operation
      15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=600105 us)
      30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120050 us)
      15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=297574 us)(object id 64376)
    select /* with primary key */ *
      from call_fact t
    where not exists
           ( select 'same rowid'
               from exceptions ex
              where ex.row_id = t.rowid )
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch     1001      0.51       0.46          0      32467          0       15000
    total     1003      0.51       0.47          0      32467          0       15000
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 61 
    Rows     Row Source Operation
      15000  NESTED LOOPS ANTI (cr=32467 pr=0 pw=0 time=585099 us)
      30000   TABLE ACCESS FULL CALL_FACT (cr=1466 pr=0 pw=0 time=120048 us)
      15000   INDEX UNIQUE SCAN EX_PK (cr=31001 pr=0 pw=0 time=298198 us)(object id 64376)
    ********************************************************************************
    Note that the tables, scaled down to 30,000 and 15,000 rows, are created like this:
    SQL> create table call_fact (col1, col2)
      2  as
      3   select level
      4        , lpad('*',100,'*')
      5     from dual
      6  connect by level <= 30000
      7  /
    Table created.
    SQL> create table exceptions (row_id, col)
      2  as
      3  select rowid
      4       , lpad('*',100,'*')
      5    from call_fact
      6   where mod(col1,2) = 0
      7  /
    Table created.
    SQL> alter table exceptions add constraint ex_pk primary key (row_id)
      2  /
    Table altered.
    SQL> exec dbms_stats.gather_table_stats(user,'call_fact')
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user,'exceptions',cascade=>true)
    PL/SQL procedure successfully completed.
    Without declaring row_id not null, I've tested exists to be definitely much faster, as the not in variant cannot do an anti-join anymore.
    Regards,
    Rob.

  • Execution of a PL/SQL procedure with CURSOR for big tables

    I have prepared a procedure that uses a CURSOR to run a complex query against tables with a big number of records, something like 900,000. The execution failed with ORA-01652: unable to extend temp segment by 64 in tablespace TEMP.
    Any suggestions?

    This brings us to the following question: how could I calculate the bytes required by a cursor? It is a selection of certain fields of very big tables. Let's say that the fields are NUMBER(4), NUMBER(8) and CHAR(2). The fields are in 2 relational tables of 900,000 rows each. What size is required for a procedure like this?
    Your help is really appreciated.
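    ORA-01652 means the TEMP tablespace filled up (typically because of large sorts or hash joins in the query) rather than the cursor itself needing that many bytes. One way to see which session and what kind of work is consuming TEMP while the procedure runs is a query along these lines (a sketch against the standard V$SORT_USAGE view; adjust for your version and privileges):
    SELECT s.sid, s.serial#, s.username,
           u.tablespace, u.segtype,
           u.blocks * t.block_size / 1024 / 1024 AS temp_mb
      FROM v$sort_usage u, v$session s, dba_tablespaces t
     WHERE s.saddr = u.session_addr
       AND t.tablespace_name = u.tablespace
     ORDER BY u.blocks DESC;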

  • How to UPDATE a big table in Oracle via Bulk Load

    Hi all,
    In a datastore target on Oracle 11g, I have a big table with 300 million records; the structure is one integer key + 10 attribute columns.
    In the IQ source I have the same table with the same size; the structure is one integer key + 1 attribute column.
    What I need to do is UPDATE that single field in Oracle from the values stored in IQ.
    Any idea on how to efficiently organize the dataflow and the target writing mode? Bulk load? API?
    thank you
    Maurizio

    Hi,
    You cannot do a bulk load when you need to UPDATE a field, because all a bulk load does is add records to your table.
    Since you have to UPDATE a field, I would suggest going for SCD with
    source > TC > MO > KG > target
    Arun
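    Alternatively, if the IQ key/value pairs can first be bulk-loaded into a plain staging table on the Oracle side (bulk load is fine there, since it only inserts), the update of the big table can then be done in one set-based statement. A minimal sketch with hypothetical names (big_table, stg_values, pk_id, attr1):
    MERGE INTO big_table t
    USING stg_values s
       ON (t.pk_id = s.pk_id)
     WHEN MATCHED THEN
       UPDATE SET t.attr1 = s.attr1;
    COMMIT;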

  • Print very big JTable

    Hi all,
    I have to print a very big table with 100,000 rows and 6 columns. I have put System.gc() at the end of the print method, but when I print the table the print job becomes too big (more or less 700 kB per page, and there are 1048 pages).
    Is it possible to make a PDF of my table, and is that solution better than the first?
    When I make the preview it takes a lot of time because of the size of the table, since first I have to create the table and then preview it.
    Is there a way to reduce the time lost in table generation?
    N.B.: the data in the table is always the same.
    Thanks a lot!!!

    > Is there a way to reduce the time lost in table generation?
    Write a table model extending AbstractTableModel.
    The model is queried for each cell. Usually all the columns
    of one row are retrieved before getting the next row. You may cache
    one row in the model: not the whole table!

  • Optimize delete in a very big database table

    Hi,
    To delete entries from the database table I use the statement:
    Delete from <table> where <zone> = 'X'.
    The delete takes seven hours (the table is very big and <zone> is not indexed).
    How can I optimize this to reduce the delete time?
    Thanks in advance for your response.
    Regards.

    What is the size of the table, and how many lines are you going to delete?
    I would recommend deleting only up to 5,000 or 10,000 records in one step, for example:
    do 100 times.
      select * from <table>
               into table itab
               up to 10000 rows
               where <zone> = 'X'.
      if itab is initial.
        exit.
      endif.
      delete <table> from table itab.
      commit work.
    enddo.
    If this is still too slow, then you should create a secondary index on <zone>.
    You can drop the index after the deletion is finished.
    Siegfried

  • Cannot INSERT records into Partitioned Spatial Table and Index

    I am trying to tune our spatial storage by partitioning our spatial_entity table and index. I used the World Geographic Reference System (GEOREF), creating a partition for each 15 x 15 degree grid square and assigning a partition key of (decimal_longitude, decimal_latitude). The build went OK; however, when trying to insert a data record I receive ORA-14400: inserted partition key does not map to any partition.
    I validated the CREATE statements and all appears correct, but obviously something is not, which is why I am asking for expert help in this forum.
    I would be very grateful for your help.
    Below are the code snippets for the table and index, and an insert statement.
    CREATE TABLE spatial_entity (
         geoloc_type VARCHAR2 (60 BYTE) NOT NULL
    ,entity_id NUMBER NOT NULL
    ,metadata_xml_uuid VARCHAR2 (40 BYTE) NOT NULL
    ,geoloc MDSYS.sdo_geometry NOT NULL
    ,nee_method CHAR (1 BYTE) NOT NULL
    ,nee_status CHAR (1 BYTE) NOT NULL
    ,decimal_latitude NUMBER (15, 6) NOT NULL
    ,decimal_longitude NUMBER (15, 6) NOT NULL
    )
    PARTITION BY RANGE (decimal_longitude, decimal_latitude)
    (
         PARTITION p_lt_0_90s
              VALUES LESS THAN (1, -90)
         ,PARTITION p_lt_0_75s
              VALUES LESS THAN (1, -75)
         ,PARTITION p_lt_0_60s
              VALUES LESS THAN (1, -60)
         ,PARTITION p_lt_0_45s
              VALUES LESS THAN (1, -45)
         ,PARTITION p_lt_0_30s
              VALUES LESS THAN (1, -30)
         ,PARTITION p_lt_0_15s
              VALUES LESS THAN (1, -15)
         ,PARTITION p_lt_0_0
              VALUES LESS THAN (1, 0)
         ,PARTITION p_lt_0_15n
              VALUES LESS THAN (1, 15)
         ,PARTITION p_lt_0_30n
              VALUES LESS THAN (1, 30)
         ,PARTITION p_lt_0_45n
              VALUES LESS THAN (1, 45)
         ,PARTITION p_lt_0_60n
              VALUES LESS THAN (1, 60)
         ,PARTITION p_lt_0_75n
              VALUES LESS THAN (1, 75)
         ,PARTITION p_lt_0_90n
              VALUES LESS THAN (1, maxvalue)
    );
    CREATE INDEX geo_spatial_ind ON spatial_entity (geoloc)
    INDEXTYPE IS mdsys.spatial_index
    PARAMETERS ('layer_gtype=MULTIPOINT TABLESPACE=GEO_SPATIAL_IND') LOCAL
    (PARTITION p_lt_0_90s,
    PARTITION p_lt_0_75s,
    PARTITION p_lt_0_60s,
    PARTITION p_lt_0_45s,
    PARTITION p_lt_0_30s,
    PARTITION p_lt_0_15s,
    PARTITION p_lt_0_0,
    PARTITION p_lt_0_15n,
    PARTITION p_lt_0_30n,
    PARTITION p_lt_0_45n,
    PARTITION p_lt_0_60n,
    PARTITION p_lt_0_75n,
    PARTITION p_lt_0_90n);
    INSERT INTO spatial_entity (
         geoloc_type
         ,entity_id
         ,metadata_xml_uuid
         ,geoloc
         ,nee_method
         ,nee_status
         ,decimal_latitude
         ,decimal_longitude
    )
    VALUES
    (
                   'BATCH'
                   ,0
                   ,'6EC25B76-8482-4F95-E0440003BAD57EDF'
                   ,"MDSYS"."SDO_GEOMETRY"
                        2001
                        ,8307
                        ,"MDSYS"."SDO_POINT_TYPE" (32.915286, 44.337902, NULL)
                        ,NULL
                        ,NULL)
                   ,'M'
                   ,'U'
                   ,32.915286
                   ,44.337902);
    Thank you for you help.
    Dave

    Thank you for your quick reply. I did not post the entire CREATE script as it is quite long. The portion of the script that is applicable to the INSERT is:
    ,PARTITION p_lt_45e_90s
              VALUES LESS THAN (23, -90)
         ,PARTITION p_lt_45e_75s
              VALUES LESS THAN (23, -75)
         ,PARTITION p_lt_45e_60s
              VALUES LESS THAN (23, -60)
         ,PARTITION p_lt_45e_45s
              VALUES LESS THAN (23, -45)
         ,PARTITION p_lt_45e_30s
              VALUES LESS THAN (23, -30)
         ,PARTITION p_lt_45e_15s
              VALUES LESS THAN (23, -15)
         ,PARTITION p_lt_45e_0
              VALUES LESS THAN (23, 0)
         ,PARTITION p_lt_45e_15n
              VALUES LESS THAN (23, 15)
         ,PARTITION p_lt_45e_30n
              VALUES LESS THAN (23, 30)
         ,PARTITION p_lt_45e_45n
              VALUES LESS THAN (23, 45)
         ,PARTITION p_lt_45e_60n
              VALUES LESS THAN (23, 60)
         ,PARTITION p_lt_45e_75n
              VALUES LESS THAN (23, 75)
         ,PARTITION p_lt_45e_90n
              VALUES LESS THAN (23, maxvalue)
    Or perhaps I do not fully understand. Are you indicating that I must explicitly state the longitude in each clause,
    e.g ,PARTITION p_lt_45e_45n
              VALUES LESS THAN (45, 45)
    ,PARTITION p_lt_45w_45n
              VALUES LESS THAN (-45, 45)
    If so, that answers the question of why it cannot find a partition. However, an Oracle white paper, "Oracle Spatial Partitioning Best Practices" (Sept 2004), discusses multi-column partitioning such as this problem represents, and gives this example:
    CREATE TABLE multi_partn_table (in_date DATE,
    geom SDO_GEOMETRY, x_value NUMBER, y_value NUMBER)
    PARTITION BY RANGE (X_VALUE,Y_VALUE) (
    PARTITION P_LT_90W_45S VALUES LESS THAN (1,-45),
    PARTITION P_LT_90W_0 VALUES LESS THAN (1,0),
    PARTITION P_LT_90W_45N VALUES LESS THAN (1,45),
    PARTITION P_LT_90W_90N VALUES LESS THAN (1,MAXVALUE));
    and as I am writing this I see that I failed to include the longitude and latitude in the SDO_GEOMETRY clause, so it does appear that I need to explicitly state the longitude values.
    What is your judgement sir?
    Dave
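    For what it's worth, a minimal sketch (hypothetical table demo_part, not from the thread) of how Oracle compares multi-column range partition keys: the second column is only examined when the first column equals the bound value, which is why the bounds need real longitude limits such as 45 rather than grid-square ordinals such as 23.
    CREATE TABLE demo_part (lon NUMBER, lat NUMBER)
    PARTITION BY RANGE (lon, lat) (
      PARTITION p_lt_45e_45n VALUES LESS THAN (45, 45),
      PARTITION p_lt_45e_90n VALUES LESS THAN (45, MAXVALUE),
      PARTITION p_rest       VALUES LESS THAN (MAXVALUE, MAXVALUE)
    );
    -- lon = 44.337902 is strictly less than 45, so this row maps to p_lt_45e_45n;
    -- lat would only be compared if lon were exactly 45.
    INSERT INTO demo_part VALUES (44.337902, 32.915286);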

  • How does table SMW3_BDOC become very big?

    Hi,
    The table SMW3_BDOC, which stores BDocs in my system, has become very big, with several million records. Some BDocs in this table were sent several months ago. I find it very strange; why were those BDocs not processed?
    If I want to clean this table, will inconsistencies occur in the system? And how can I clean this table of those very old BDocs?
    Thanks a lot for your help!

    Hi Long,
    I have faced the same issue recently on our Production system and this created a huge performance issue and completely blocked the system with TimeOut errors.
    I was able to clean up the same by running the report SMO8_FLOW_REORG in SE38.
    If you are very sure about cleaning up, first delete all the unnecessary Bdocs and then run this report.
    At the same time, check whether any CSA* queue is stuck in the CRM inbound queue monitor SMQ2. If so, select it, manually unlock it, activate it and then refresh. Also check for any other unnecessary queues stuck there.
    Hope this helps.
    regards,
    kalyan

  • SAPSR3DB   XMII_TRANSACTION table LOG column is very big

    Hi,
    We have a problem about MII server.
    The SAPSR3DB XMII_TRANSACTION table's LOG column contains very big data.
    How can the size of the data in this column be decreased?
    Regards.

    In 12.1 it's XMII Administration Menu (Menu.jsp) --> System Management --> DefaultTransactionPersistance.
    In production I recommend setting this to 'ONERROR'
    There is also the TransactionPersistenceLifetime which determines how long entries will stay in the log table.
    We set this to 8 hours.

  • Delete 50 Million records from a table with 60 Million records

    Hi,
    I'm using Oracle 9.2.0.7 on Win2k3 32-bit.
    I need to delete 50M rows from a table that contains 60M records. This DB was just passed on to me. I tried to use a DELETE statement, but it takes too long. From the articles and forums I have read, the best way to delete that many records is to create a temp table, transfer the data you need into the temp table, drop the big table, then rename the temp table to the big table's name. The key here is creating an exact replica of the big table. I have the create table, indexes and constraints scripts in the export file from my production DB, but I noticed that I do not have the create grants script. Is there a view I could use to get this? Can dbms_metadata get this?
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms_metadata get this?
    Yes, dbms_metadata can get the grants.
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc...
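    A minimal sketch of pulling that extra DDL with DBMS_METADATA (BIG_TABLE and YAS are placeholder object and owner names; note that GET_DEPENDENT_DDL raises an error when no dependent objects of the requested type exist):
    SET LONG 1000000
    SET PAGESIZE 0
    SELECT dbms_metadata.get_ddl('TABLE', 'BIG_TABLE', 'YAS') FROM dual;
    SELECT dbms_metadata.get_dependent_ddl('INDEX', 'BIG_TABLE', 'YAS') FROM dual;
    SELECT dbms_metadata.get_dependent_ddl('OBJECT_GRANT', 'BIG_TABLE', 'YAS') FROM dual;
    SELECT dbms_metadata.get_dependent_ddl('TRIGGER', 'BIG_TABLE', 'YAS') FROM dual;
    SELECT dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', 'BIG_TABLE', 'YAS') FROM dual;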

  • Managing a big table

    Hi All,
    I have a big table in my database. When I say big, I mean both the data stored in it (around 70 million records) and the number of columns (425).
    I do not have any problems with it now, but going forward I assume it will become a bottleneck or very difficult to manage.
    I have a star schema for the application of which this is a master table.
    Apart from partitioning the table, is there any other way of better handling such a table?
    Regards

    Hi,
    Usually fact tables tend to have fewer columns and more records, while dimension tables are the opposite: a larger number of columns, which is where the power of the dimension lies, and relatively few records (in exceptional cases even millions). So the high number of columns makes me think that the fact table may be, and only may be, since I don't have enough information, improperly designed. If that is the case then you may want to revisit that design, and most likely you will find some 'facts' in your fact table that can become attributes of one of the dimension tables it is linked to.
    Can you say why you are adding new columns to the fact table? A fact table is created for a specific business process, and if done properly there shouldn't be a requirement to keep adding new columns. A fact is usually limited in the number of metrics you can take from it; in fact, the opposite is more common, a factless fact table.
    In any case, from the point of view of handling this large table with so many columns, I would say that you have to focus on stopping the growth in the number of columns. There is nothing in the database itself, such as partitioning, that can do this for you. One option is to figure out which columns you want to 'vertically partition' and split the table into at least two new tables. The set of columns to keep together will be those that are most frequently used or most critical to you. Then you will have to link these two tables together and to the rest of the dimensions. But again, if you keep adding new columns it is just a matter of time before you run into the same situation in the future.
    I am sorry, but I cannot offer better advice than to revisit the design of your fact table. For that you may want to have a look at http://www.kimballgroup.com/html/designtips.html
    LW

  • Performance question - Caching data of a big table

    Hi All,
    I have a general question about caching; I am using an Oracle 11g R2 database.
    I have a big table, about 50 million rows, that is accessed very often by my application. Some queries run slow and some are OK, but (obviously) when the data of this table is already in the cache (so basically when a user requests the same thing twice or many times) they run very quickly.
    Does anybody have any recommendations about caching the data / a table of this size?
    Many thanks.

    Chiwatel wrote:
    With better formatting (I hope), sorry I am not used to the new forum !
    Plan hash value: 2501344126
    | Id  | Operation                            | Name          | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows |  A-Time  | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    |  0 | SELECT STATEMENT        |                    |      1 |        |      |  7232 (100)|      |      |  68539 |00:14:20.06 |    212K|  87545 |      |      |          |
    |  1 |  SORT ORDER BY                      |                |      1 |  7107 |  624K|  7232  (1)|      |      |  68539 |00:14:20.06 |    212K|  87545 |  3242K|  792K| 2881K (0)|
    |  2 |  NESTED LOOPS                      |                |      1 |        |      |            |      |      |  68539 |00:14:19.26 |    212K|  87545 |      |      |          |
    |  3 |    NESTED LOOPS                      |                |      1 |  7107 |  624K|  7230  (1)|      |      |  70492 |00:07:09.08 |    141K|  43779 |      |      |          |
    |*  4 |    INDEX RANGE SCAN                | CM_MAINT_PK_ID |      1 |  7107 |  284K|    59  (0)|      |      |  70492 |00:00:04.90 |    496 |    453 |      |      |          |
    |  5 |    PARTITION RANGE ITERATOR        |                |  70492 |      1 |      |    1  (0)|  KEY |  KEY |  70492 |00:07:03.32 |    141K|  43326 |      |      |          |
    |*  6 |      INDEX UNIQUE SCAN              | D1T400P0      |  70492 |      1 |      |    1  (0)|  KEY |  KEY |  70492 |00:07:01.71 |    141K|  43326 |      |      |          |
    |*  7 |    TABLE ACCESS BY GLOBAL INDEX ROWID| D1_DVC_EVT    |  70492 |      1 |    49 |    2  (0)| ROWID | ROWID |  68539 |00:07:09.17 |  70656 |  43766 |      |      |          |
    Predicate Information (identified by operation id):
      4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')
      6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
      7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
    Your user has executed a query to return 68,000 rows - what type of user is it, a human being cannot possibly cope with that much data and it's not entirely surprising that it might take quite some time to return it.
    One thing I'd check is whether you're always getting the same execution plan - Oracle's estimates here are out by a factor of about 95 (7,100 rows predicted vs. 68,500 returned), so perhaps some of your variation in timing relates to plan changes.
    If you check the figures you'll see about half your time came from probing the unique index, and half came from visiting the table. In general it's hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it's possible that your best strategy is to protect this index at the cost of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the target is to fix things so that table blocks you won't revisit don't push index blocks you will revisit from memory.
    Another detail to consider is that if you are visiting the index and table completely randomly (for 68,500 locations) it's possible that you end up re-reading blocks several times in the course of the visit. If you order the intermediate result set from the driving table first, you may find that you're walking the index and table in order and don't have to re-read any blocks. This is something only you can know, though. The code would have to change to include an inline view with a no_merge and no_eliminate_oby hint.
    Regards
    Jonathan Lewis
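    A minimal sketch of the RECYCLE-cache idea described above (the cache size is a made-up figure to test on your own system; the table and index names are taken from the plan in the post):
    ALTER SYSTEM SET db_recycle_cache_size = 256M;        -- carve out a small recycle cache
    ALTER TABLE d1_dvc_evt STORAGE (BUFFER_POOL RECYCLE); -- let table blocks age out quickly
    ALTER INDEX d1t400p0 STORAGE (BUFFER_POOL DEFAULT);   -- keep the index in the main cache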
