Analyze table after inserting a large number of records?

For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
For example:
insert into foo ......                  -- insert one million records into table foo
analyze table foo COMPUTE STATISTICS;   -- analyze table foo
select * from foo, bar, car ......      -- execute a complex query without hints,
                                        -- after 1 million records inserted into foo
Does this strategy help to improve the overall performance?
Thanks.

Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all when 'fact' tables grow while 'lookup' tables stay constant.
This is why you shouldn't test an application with only a small number of 'fact' records.
This can happen both with analyze table and dbms_stats.
The advantage of dbms_stats is that it can export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_table_stats.
You can even override individual table and column statistics with artificial values.
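For example, a minimal sketch using dbms_stats (the stats table name STATS_BACKUP is illustrative):
BEGIN
  -- one-time setup: a table to hold exported statistics
  DBMS_STATS.CREATE_STAT_TABLE(ownname => USER, stattab => 'STATS_BACKUP');
  -- keep the current statistics so the old plans can be restored
  DBMS_STATS.EXPORT_TABLE_STATS(ownname => USER, tabname => 'FOO',
                                stattab => 'STATS_BACKUP');
  -- gather fresh statistics after the bulk insert (cascade covers the indexes)
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'FOO',
                                cascade => TRUE);
END;
/
-- if the new plans misbehave, revert:
-- EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => USER, tabname => 'FOO', stattab => 'STATS_BACKUP')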
Hth
Sybrand Bakker
Senior Oracle DBA

Similar Messages

  • Table Properties: Tweaks for a large number of records

    Hi experts,
    I have a small table, about 5 columns, which will store records numbering in the millions (at least 7 million). Is there a specific list of properties or a checklist I need to look into to ensure that the db health is good?
    Regards,
    Anand.

    Make sure you have appropriate indexing for your query needs. Make sure you have sufficient and complete declarative integrity constraints in place - yes, foreign keys and check constraints matter and affect query plans. Seven million rows is not necessarily a lot; however, you might consider how the data is accessed, used, and purged, and see if partitioning is appropriate (if you are licensed for the option). Also, make sure the SQL is as efficient as possible - for instance, if you are updating this table using a cursor loop, all it will do is get massively slower over time.
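    As a hedged illustration of the constraints-plus-partitioning advice (all table and column names here are hypothetical, and it assumes a sensors lookup table already exists):
    CREATE TABLE readings (
      reading_id  NUMBER  CONSTRAINT readings_pk PRIMARY KEY,
      sensor_id   NUMBER  NOT NULL
                  CONSTRAINT readings_sensor_fk REFERENCES sensors (sensor_id),
      reading_ts  DATE    NOT NULL,
      val         NUMBER  CONSTRAINT readings_val_ck CHECK (val >= 0)
    )
    PARTITION BY RANGE (reading_ts) (
      PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
      PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01')
    );
    -- purging a year then becomes a cheap metadata operation:
    -- ALTER TABLE readings DROP PARTITION p2023;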

  • Why should we create an index on the table after inserting data?

    Please tell me the reason why we should create an index on a table after inserting the data,
    when we could also create the index on the table before inserting the data.

    The choice depends on a number of factors, the main one being how many rows are going to be inserted into the table as a percentage of the existing rows, or the percentage growth.
    Creating the index after the table has been populated works better when the tables are large or the inserts are large, for the following reasons (a minimal sketch follows the list):
    1. The sort and creation of the index is more efficient when done in batch and written in bulk, so it works faster.
    2. As the index is written, blocks get acquired as more data gets written. So, when a large number of rows get inserted into a table that already has an index, the index blocks start splitting/chaining. This increases the depth of the B-tree and makes the index less efficient on I/O. Creating the index after the data has been inserted allows Oracle to create an optimal block distribution and reduce splitting/chaining.
    3. If an index exists then it too is routed through the undo/redo processes. That's an overhead which is avoided when you create the index after populating the table.
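    A minimal sketch of the load-then-index pattern (table and column names are hypothetical; NOLOGGING and PARALLEL are optional and assume your environment permits them):
    -- direct-path bulk load with no index maintenance overhead
    INSERT /*+ APPEND */ INTO big_table
    SELECT * FROM staging_table;
    COMMIT;
    -- one efficient bulk sort and write instead of row-by-row block splits
    CREATE INDEX big_table_cust_ix ON big_table (customer_id)
      NOLOGGING PARALLEL 4;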
    Regards

  • How can I print reports to different printers using a trigger on a table after insert

    Hello,
    Please can anyone tell me how I can print (any message) to a different printer (network & local) using a trigger on a table after insert.
    regards,
    Linda.

    What you want to do cannot be done with PL/SQL, which does not have any print utilities. However, you could write something using Java Stored Procedures.
    Of course the "different printer" bit will have to be data driven, as triggers are not interactive.
    rgds, APC

  • Best way to delete a large number of records without interfering with tlog backups on a schedule

    I've inherited a system with multiple databases, and there are db and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach for deleting the old records?
    I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that does everything: delete all the old records, in batches of say 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop; only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, I have changed the type of this question to a discussion, so that more experts will focus on this issue and assist you. When deleting a large number of records from tables, you can use batched deletions so that the transaction log does not keep growing and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is always best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log, see:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
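    A hedged T-SQL sketch of the batched approach (table, column, and cutoff are hypothetical; syntax assumes SQL Server 2008 or later). Each small transaction lets the scheduled tlog backups truncate the log between batches:
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        -- delete old rows in chunks of 50,000 so no single transaction is huge
        DELETE TOP (50000) FROM dbo.audit_log
        WHERE created_at < DATEADD(MONTH, -12, GETDATE());
        SET @rows = @@ROWCOUNT;
    END;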
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support

  • Lookups with a large number of records do not return the page

    Hi,
    I am developing an application using Oracle JHeadstart 10.1.3 Preview Version 10.1.3.0.78
    In my application I created a lookup under domains and used that lookup for an attribute (Display Type for this attribute is dropDownList) in a group, to get the translation for this attribute. The group has around 14,800 records and the lookup has around 7,400 records.
    When I try to open this group (tab), the progress indicator shows activity, but the page does not open even after a long time.
    If I change the Display Type for the attribute from dropDownList to textInput, then it works fine.
    I have other lookups with a lower number of records. Those lookups work fine with the dropDownList Display Type.
    I only have this kind of problem when a lookup has a large number of records.
    Is there any limitation on the number of records for lookups under Domains?
    How can I solve this?
    I need to translate the attribute (get the description from another table using the code).
    Your help would be appreciated.
    Thanks
    Syed

    We have also faced a similar issue, but for us it was happening when we were using the dropDownList in a table, while the same dropDownList worked outside the table. In our case the JVM just used to crash, and after googling it here in the forums, we found that it might be related to a JVM issue on Windows XP machines without Service Pack 2.
    Anyway... the workaround we took to get around the issue is to use an LOV instead of a dropDownList in your JHeadstart.
    Hope this helps...
    - rutwik

  • Problem fetching a large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 records from it. I use the secondary database as an index and iterate over it until I have fetched all of the information that matches my condition, but the performance of this loop is terrible.
    I know that with DB_MULTIPLE I could fetch all of the information at once and performance would improve, but I read that I cannot use this flag when I use a secondary database as an index.
    Please help me and tell me the flag or implementation that fetches all of the information together, so that I can manage this data in my language.
    Thanks a lot
    Regards,
    Saeed

    Hi Saeed,
    Could you post your source code here, compiled and ready to be executed, so we can take a look at the loop section?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code: that is, having a database that acts as a secondary, in which you'll update the records manually with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want) as data. Note that for every modification you perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
    I have another question: is there any way to fetch information by record number? For example, to fetch the record located at the third position of my database.
    I guess you're referring to logical record numbers, like the relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified) this is not possible directly. You could achieve it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured with logical record numbers (BTree with DB_RECNUM, Queue or Recno) this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Internal Error 500 started appearing even after setting a large number for postParametersLimit

    Hello,
    I adopted a CF 9 web application and we're receiving the Internal 500 Error on a submit from a form that has line items for an RMA.
    The server originally only had Cumulative Hot Fix 1 on it, and I thought that if I installed Cumulative Hot Fix 4, I would be able to adjust the postParametersLimit variable in neo-runtime.xml. So I tried doing this, and I've tried setting the number to an extremely large value (the last try was 40,000), but I'm still getting this error. I've tried putting a <cfabort> on the first line of the cfm file that is being called, but I'm still getting the 500 error.
    As I mentioned, it's an RMA form, and if the RMA has only a few lines, say up to 20 or 25, it will work.
    I've tried increasing the following all at the same time:
    postParameterSize to 1000 MB
    Max size of post data 1000MB
    Request throttle Memory 768MB
    Maximum JVM Heap Size - 1024 MB
    Enable HTTP Status Codes - unchecked
    Here's some extra background on this situation. This is all that happened before I got the server:
    The CF Server is installed as a virtual machine and was originally part of a domain that was exposed to the internet and the internal network. The CF Admin was exposed to the internet.
    AT THIS TIME THE RMA FORM WORKED PROPERLY, EVEN WITH A LARGE NUMBER OF LINE ITEMS.
    The CF Server was hacked, so they did the following:
    They took a snapshot of the CF Server
    Unjoined it from the domain and put it in the DMZ.
    The server can no longer connect to the internet outbound, inbound connections are allowed through SSL
    Installed cumulative hot fix 1 and hot fix APSB13-13
    Changed the Default port for SQL on the SQL Server.
    This is when the RMA form stopped working and I inherited the server. Yeah!
    Any ideas on what I can try next, or why this would have suddenly stopped working after the above changes were made on the server?
    Thank you

    Start from the beginning. Return to the default values, and see what happens. To do so, proceed as follows.
    Temporarily shut ColdFusion down. Create a back-up of the file neo-runtime.xml, just in case.
    Now, open the file in a text editor and revert postParametersLimit and postSizeLimit to their respective default values, namely,
    <var name='postParametersLimit'><number>100.0</number></var>
    <var name='postSizeLimit'><number>100.0</number></var>
    That is, 100 parameters and 100 MB, respectively. (Note that there is no postParameterSize! If you had included that element in the XML, remove it.)
    Restart ColdFusion. Test and tell.

  • Which is the best way for posting a large number of records?

    I have around 12,000 registers to commit to the database.
    Which is the best way of doing it?
    What does it depend on?
    Nowadays I can't commit such a large number of registers... The database seems hung!!!
    Thanks in advance

    Xavi wrote:
    Nowadays I can't commit such a large number of registers
    It should be possible to insert tens of thousands of rows in a few seconds using an insert statement, even with a complex query such as the all_objects view, and commit at the end.
    SQL> create table t as select * from all_objects where 0 = 1;
    Table created.
    Elapsed: 00:00:00.03
    SQL> insert into t select * from all_objects;
    32151 rows created.
    Elapsed: 00:00:09.01
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.00
    I meant RECORDS instead of REGISTERS.
    Maybe that is where you are going wrong; records are for putting on turntables.

  • Table size is huge even though the number of records is relatively low

    Dear team,
    When I check one table, it has the below number of records.
    select count(*) from table1
    4980092
    but the space allocated for this table is:
    select sum(bytes) from user_segments where segment_name = 'table1';
    SUM(BYTES)
    2361712640
    I'm surprised by this size.
    While looking for the cause, I found that if we delete records the space won't get freed. So how can I free up the space for this table?
    Deletes happen on this table frequently, on a daily basis.

    user11081688 wrote:
    Dear team,
    When I check one table, it has the below number of records.
    select count(*) from table1
    4980092
    but the space allocated for this table is:
    select sum(bytes) from user_segments where segment_name = 'table1';
    SUM(BYTES)
    2361712640
    I'm surprised by this size.
    Why?
    While looking for the cause, I found that if we delete records the space won't get freed.
    Correct.
    Then how can I free up the space for this table?
    There is no need to do so, since the space will be reused by new rows.
    Deletes happen on this table frequently, on a daily basis.
    If DELETE occurs daily, why is the number of rows not close to zero?
    How many rows get INSERTed daily?
    What is the average ROW LENGTH?
    SQL> select 2361712640/4980092 from dual;
    2361712640/4980092
            474.230725
    SQL>
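    If the space really must be released (for example, the table will never grow back to its old size), a hedged sketch for Oracle 10g and later with an ASSM tablespace:
    ALTER TABLE table1 ENABLE ROW MOVEMENT;
    ALTER TABLE table1 SHRINK SPACE CASCADE;  -- compacts the rows and lowers the high-water mark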

  • CLIENT_TEXT_IO - Hanging on "PUT" for a large number of records

    I have successfully used CLIENT_TEXT_IO, but my users have run into an error where the Form hangs and spits out details such as:
    "oracle.forms.net.HTTPNStream.doFlush"
    etc....
    This happens when the number of records in the data block is high (e.g. 70,000 records). So my question is: is there a limit on how many lines you can write to a file?
    I'm just creating a CSV file on the client's machine using CLIENT_TEXT_IO.PUT_LINE. It works fine on, say, a few thousand records, but after that it hangs.
    I'm on Oracle Application Server 10g, Release 9.0.4 on Windows Server 2003, and forms compiled using Oracle Developer Suite 9.0.4.
    Thanks,
    Gio

    Hello,
    When playing with huge data, it is better to generate the file on the application server, then get it back to the client.
    Read this article
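    A hedged sketch of that approach (it assumes WebUtil is installed and configured, and the file paths are illustrative): generate the file on the application server with UTL_FILE, then pull it down in a single transfer instead of 70,000 PUT_LINE round trips.
    DECLARE
      ok BOOLEAN;
    BEGIN
      -- server-side generation is fast; only the finished file crosses the wire
      ok := WEBUTIL_FILE_TRANSFER.AS_TO_CLIENT('C:\temp\export.csv',  -- client-side target
                                               '/tmp/export.csv');    -- server-side source
      IF NOT ok THEN
        MESSAGE('File transfer failed');
      END IF;
    END;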
    Francois

  • Capability of inserting a specific number of records ...

    Hi,
    Is there any way to let the end user enter only a specific number of records in a multi-record block, according to the number of fetched records in another block?
    I assume that the When-Create-Record trigger can do that. Are there any other solutions?
    Thanks ,
    Simon

    ..Or,
    this is for a single block, but I believe it also works for a multi-record block:
    a parameter for defining the limit on the number of records the user can query.
    1. Define a parameter, :max_record, which is the limit on the number of records the user can enter. Make sure to define this parameter as numeric and provide a default value.
    2. For a form with a single block, create the following triggers at block level:
    a. Attach the following PL/SQL block to a KEY-CREREC trigger to create a record only when :system.cursor_record is less than :max_record.
    DECLARE
      a NUMBER;  -- cursor position on entry
      b NUMBER;  -- position of the last record in the block
    BEGIN
      a := :system.cursor_record;
      LAST_RECORD;
      b := :system.cursor_record;
      IF b >= :parameter.max_record THEN
        GO_RECORD(a);
        MESSAGE('max record exceeded - create rec III');
        RAISE FORM_TRIGGER_FAILURE;
      END IF;
      GO_RECORD(a);
      IF :system.cursor_record < :parameter.max_record THEN
        CREATE_RECORD;
      ELSE
        MESSAGE('max record exceeded - create rec');
        RAISE FORM_TRIGGER_FAILURE;
      END IF;
    END;
    b. To navigate to the next record only when :system.cursor_record is less than :max_record, create a KEY-DOWN trigger:
    IF :system.cursor_record < :parameter.max_record THEN
      DOWN;
    ELSE
      MESSAGE('max records key-down');
    END IF;

  • How do I use UTL_FILE to insert a large number of fields into a file?

    Hi
    I am trying to use UTL_FILE for the first time in a stored procedure. I need to run a complex query to select 50 fields from various tables, and I need these to be written as one line in the output file for every row. Is this possible? My procedure so far is like the following:
    CREATE OR REPLACE PROCEDURE proc_test IS
      output_file UTL_FILE.FILE_TYPE;
    BEGIN
      -- the file must be opened before writing (this call was missing);
      -- 'MY_DIR' must be a directory object (see the note below)
      output_file := UTL_FILE.FOPEN('MY_DIR', 'output.csv', 'W');
      FOR query IN (SELECT field1, field2, .......... field50
                    FROM table_a a, table_b b
                    WHERE a.id = b.id
                    -- etc.
                   )
      LOOP
        -- concatenate the 50 fields into one delimited line per row
        UTL_FILE.PUT_LINE(output_file,
          query.field1 || ',' || query.field2 || .......... || ',' || query.field50);
      END LOOP;
      UTL_FILE.FCLOSE(output_file);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL;
      WHEN OTHERS THEN
        UTL_FILE.FCLOSE_ALL;
        RAISE;
    END proc_test;
    Do I need to declare 'query' (after the FOR) anywhere else? Also, please advise on how I put all of the fields into the file.
    Thanks
    GB
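    For reference, the MY_DIR name in the procedure above is a hypothetical directory object; UTL_FILE.FOPEN needs one that the procedure's owner can write to, for example:
    CREATE OR REPLACE DIRECTORY my_dir AS '/u01/app/output';
    GRANT READ, WRITE ON DIRECTORY my_dir TO gb_user;  -- gb_user is illustrative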

    Thanks Steve,
    I have the UTL_FILE working fine now.
    I have other queries to run and conditions to apply in the same procedure, and I need to schedule it via Enterprise Manager, therefore using UTL_FILE in a procedure seemed the best option. I looked up Data Pump, but this seems to be an 11g feature, and we are still on 10g, therefore I will not be able to use it.
    Thanks for your help.
    GB

  • Saving of data in a table with a large number of records

    Hi,
    I'm working in Forms 6i with database 10g.
    I have two tables, stock_head and stock_detail.
    The stock_detail table has millions of records.
    stock_detail has 3 database triggers.
    Saving data into these tables is very slow, even after disabling the triggers.
    Can anyone please help me with this matter?
    How can I improve the performance?
    Please help me...

    As always, the same things apply to this type of question:
    - No exact version numbers are provided
    - The problem description is way too vague to resolve the issue
    - The requestor doesn't read the documentation
    - The requestor didn't use online resources, and didn't search this forum
    The central question always is:
    What is it waiting for?
    So you need to run ADDM and/or AWR reports, provided you are properly licensed, or statspack when you don't have a license for AWR/ADDM.
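    For example (AWR assumes a Diagnostics Pack license; statspack needs none):
    SQL> @?/rdbms/admin/awrrpt.sql    -- AWR report between two snapshots
    SQL> @?/rdbms/admin/spreport.sql  -- statspack equivalent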
    Apart from that no help is possible, as the post doesn't contain a problem description other than 'It doesn't work, help'.
    Sybrand Bakker
    Senior Oracle DBA

  • Slow record selection in tableView component with a large number of records

    Hi experts,
    we have a Business Server Page (flow logic) with several htmlb:inputFields. As known from the SAP standard, we would like to offer a value helper (F4) to the users for ease of record selection.
    We use the onValueHelp() method of the inputField to open an extra browser window through JavaScript. In the popup another HTML page is called, containing a tableView component with all available records. We use SINGLESELECT mode for the table view.
    Everything works perfectly and efficiently unless the tableView contains too many entries. If the number of possible entries is large, the whole component performs very, very slowly. For example, selecting a record can take more than one minute. Navigation between pages through the buttons at the bottom of the component also takes a lot of time. It seems that the tableView component cannot handle so many entries.
    We tried switching between stateful and stateless mode, without success. Is there a way to perform the tableView selection without doing a server round trip? Any ideas and comments will be appreciated.
    Best regards,
    Sebastian

    Hi Raja,
    thank you for your hint. I took a look at sbspext_table/TableViewClient.bsp but did not really understand how the JavaScript coding works. Where is the JavaScript code in that example? Which file contains it?
    Meanwhile I implemented another way to avoid the server round trip:
    - Switch the page mode of the popup window to "Stateful"
    - Use the OnInitialization method like OnCreate (as shown in [using OnInitialization like OnCreate])
    - Limit the results of the SELECT statement with UP TO 1000 ROWS
    Best regards,
    Sebastian
