Index creation online - performance impact on database

Hi,
I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
I have a huge table 'TBL' with more than 255 columns, about 400 GB in size, which is also highly fragmented because of constant DML activity.
Questions:
1. For now I am trying to create an index online while the business applications are running.
Will there be any performance impact on the database from creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: will index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in impact between creating the index online and not online?
2. I tried to build an index on a column that contains NULLs on this same table 'TBL' (more than 255 columns, about 400 GB, highly fragmented, about 140 million rows).
I requested that the applications be shut down, but the index creation with parallel degree 4 still took more than 6 hours to complete.
We have a pre-prod database that holds an exported and imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
When I created the same index on the same column there, it took only 15 minutes to complete.
I am not sure why, on the highly fragmented copy in prod, it took more than 6 hours, compared to only 15 minutes on the highly defragmented pre-prod copy.
Any thoughts would be helpful.
Thanks.
Phil.

How are you measuring the "fragmentation" of the table?
Is the pre-prod database running single instance or RAC?
Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index?
Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
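(If you still have the systems available, here is a sketch of one way to check that, using the standard counters in v$sysstat; snapshot the values just before and just after the build and compare the deltas. A rise in the onepass or multipass counters during the build means the sort spilled to temp:
-- snapshot before and after the CREATE INDEX and diff the values
select name, value
from v$sysstat
where name in ('workarea executions - optimal',
               'workarea executions - onepass',
               'workarea executions - multipass');
)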
The commonest explanation for this type of difference is two-fold:
a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
  --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and on a busy system you can wait a long time for those locks to be acquired. If the system has been busy while the build has been going on, it can also take quite a long time to apply the journal table to finish the index build.
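For reference, a minimal sketch of the online build itself (the index and column names are made up; ddl_lock_timeout is an 11g parameter that makes the DDL wait for the brief table locks instead of failing immediately with ORA-00054):
-- wait up to 60 seconds for the start/end-of-build table locks
alter session set ddl_lock_timeout = 60;
-- DML against TBL can continue while the index is built
create index tbl_col1_idx on tbl (col1) online parallel 4;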
Regards
Jonathan Lewis

Similar Messages

  • Help needed in index creation and its impact on insertion of records

    Hi All,
    I have a situation like this, and the process involves 4 tables.
    Among the 4 tables, 2 tables have around 30 columns each, and the other 2 have 15 columns each.
    This process contains a validation and an insert procedure.
    I have already created some 8 indexes for one table, and an average of around 3 indexes on the other tables.
    Now the situation is like this: I have a select statement in the validation procedure which involves all 4 tables.
    When I try to run that select statement, it takes around 30 secs, and when I check the plan, it shows around 21K of memory.
    Now, I am in a situation where I need to create new indexes on all the tables for the purpose of this select statement.
    As for the number of times this select statement executes: for around 1000 inserts into the table, this select statement gets executed 200 times, and the record count of these tables would be around one crore (10 million) rows.
    Will the number of indexes created on a table impact insert statement performance, or can we create as many indexes as we want on a table? Which is the best practice?
    Please guide me in this!!!
    Regards,
    Shivakumar A

    Hi,
    index creation will most definitely impact your DML performance, because when inserting into the table you'll have to update the index entries as well. Typically it's a small overhead that is acceptable in most situations, but the only way to say definitively whether or not it is acceptable to you is by testing. Set up some tests, measure the performance of some typical selects, updates and inserts with and without an index, and you will have some data to base your decision on; for example, something like the sketch below.
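    A minimal sketch of such a test (the table, column and index names are invented for illustration):
    -- scratch table for the experiment
    create table t (n number, pad varchar2(100));
    set timing on
    -- baseline: batch insert with no secondary index
    insert into t select level, rpad('x', 100) from dual connect by level <= 100000;
    rollback;
    -- same batch again after adding an index; compare the elapsed times
    create index t_ix1 on t (n);
    insert into t select level, rpad('x', 100) from dual connect by level <= 100000;
    rollback;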
    Best regards,
    Nikolay

  • Question: Will online backup impact database performance for DB6 V9.1

    Dear All,
    I would like to know whether an online backup will impact database performance, e.g. will access be slower while an online backup is running? I would appreciate it if someone could shed some light on this as I'm new to DB6.
    I know that for Oracle it impacts performance, because tablespaces are locked in backup mode and this increases the I/O load, since every block is written to the redo log during the backup instead of just the changes.
    Hope to hear from you soon.
    Cheers,
    Nicholas Chang.

    Hello Nicholas,
    Here is some additional information on throttling utilities such as online backups:
    SET UTIL_IMPACT_PRIORITY command
    Changes the impact setting for a running utility. Using this command, you can:
    throttle a utility that was invoked in unthrottled mode
    unthrottle a throttled utility (disable throttling)
    reprioritize a throttled utility (useful if running multiple simultaneous throttled utilities)
    Scope
    Authorization
    One of the following:
    sysadm
    sysctrl
    sysmaint
    Required connection
    Instance. If there is more than one partition on the local machine, the attachment should be made to the correct partition. For example, suppose there are two partitions and a LIST UTILITIES command resulted in the following output:
    ID = 2
    Type = BACKUP
    Database Name = IWZ
    Partition Number = 1
    Description = online db
    Start Time = 07/19/2007 17:32:09.622395
    State = Executing
    Invocation Type = User
    Throttling:
    Priority = Unthrottled
    Progress Monitoring:
    Estimated Percentage Complete = 10
    Total Work = 97867649689 bytes
    Completed Work = 10124388481 bytes
    The instance attachment must be made to partition 1 in order to issue a SET UTIL_IMPACT_PRIORITY command against the utility with ID 2. To do this, set DB2NODE=1 in the environment and then issue the instance attachment command.
    Command syntax
    >>-SET UTIL_IMPACT_PRIORITY FOR utility-id TO priority----><
    Command parameters
    utility-id
    ID of the utility whose impact setting will be updated. IDs of running utilities can be obtained with the LIST UTILITIES command.
    TO priority
    Specifies an instance-level limit on the impact associated with running a utility. A value of 100 represents the highest priority and 1 represents the lowest priority. Setting priority to 0 will force a throttled utility to continue unthrottled. Setting priority to a non-zero value will force an unthrottled utility to continue in throttled mode.
    Examples
    The following example unthrottles the utility with ID 2.
       SET UTIL_IMPACT_PRIORITY FOR 2 TO 0
    The following example throttles the utility with ID 3 to priority 10. If the priority was 0 before the change then a previously unthrottled utility is now throttled. If the utility was previously throttled (priority had been set to a value greater than zero), then the utility has been reprioritized.
       SET UTIL_IMPACT_PRIORITY FOR 3 TO 10
    Relationship between UTIL_IMPACT_LIM and UTIL_IMPACT_PRIORITY settings
    The database manager configuration parameter util_impact_lim sets the limit on the impact throttled utilities can have on the overall workload of the machine. 0-99 is a throttled percentage, 100 is no throttling.
    The SET UTIL_IMPACT_PRIORITY command sets the priority that a particular utility has over the resources available to throttled utilities as defined by the util_impact_lim configuration parameter. (0 = unthrottled)
    Using the backup utility as an example, if the util_impact_lim=10, all utilities can have no more than a 10% average impact upon the total workload as judged by the throttling algorithm. Using two throttled utilities as an example:
    Backup with util_impact_priority 70
    Runstats with util_impact_priority 50
    Both utilities combined should have no more than a 10% average impact on the total workload, and the utility with the higher priority will get more of the available workload resources. For both the backup and runstats operations, it is also possible to declare the impact priority within the command line of that utility. If you do not issue the SET UTIL_IMPACT_PRIORITY command, the utility will run unthrottled (irrespective of the setting of util_impact_lim).
    To view the current priority setting for the utilities that are running, you can use the LIST UTILITIES command.
    Usage notes
    Throttling requires that an impact policy be defined by setting the util_impact_lim configuration parameter.
    Regards,
    Adam Wilson
    SAP Development Support

  • Performance impact on oracle 11g database by audit enable

    Hi All,
    Shall we enable audit on some Siebel DB tables like S_PARTY, S_CONTACTS, S_ORDER, S_QUOTE, S_ORG_EXT?
    We need to see who deleted account records from the Oracle tables manually,
    since auditing is not enabled.
    We have granted the DELETE privilege to all users, as required by the Siebel application.
    So is it a good idea to get auditing enabled on these selected tables, or is there a performance impact on the database?
    Is it a good idea to enable audit for these tables, especially in Siebel?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for HPUX: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Hello,
    OK, do it and generate AWR reports to see how performance is affected. Remember that auditing just some tables is not a big deal; auditing everything is the problem, which is why fine-grained auditing exists. Please also remember to clean out the audit records regularly, because otherwise auditing becomes a space problem if you have many deletes, which should not happen in your case.
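    As a sketch (this assumes standard auditing is already enabled with audit_trail=db, and that the tables live in a SIEBEL schema; adjust to your setup):
    -- audit only deletes, and only on the tables you care about
    audit delete on siebel.s_party by access;
    audit delete on siebel.s_org_ext by access;
    -- later, see who deleted what and when
    select username, obj_name, timestamp
    from dba_audit_trail
    where obj_name in ('S_PARTY', 'S_ORG_EXT')
    and action_name = 'DELETE';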
    Kind regards
    Mohamed

  • CONTEXT index creation - performance!

    Hi,
    I have a table with about 5 million rows. The content that needs to be indexed is of RAW datatype. The average size (length) of this field is about 50 characters (it could be more).
    I am trying to index this column to perform a keyword search. Details are furnished below.
    table:
    SQL> desc kwtai
    Name                   Null?    Type
    TSD_HH24                        DATE
    COUNTRY_CODE_ALPHA_2            VARCHAR2(2)
    ONETWORK                        NUMBER(6)
    OADDRESS                        VARCHAR2(25)
    DNETWORK                        NUMBER(6)
    DADDRESS                        VARCHAR2(25)
    MESSAGE_LENGTH                  NUMBER
    MESSAGE_CONTENT                 RAW(2000)
    Preferences:-
    begin
    Ctx_Ddl.Create_Preference('mc_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('mc_storage','I_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'K_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'R_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('mc_storage', 'N_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M)');
    ctx_ddl.set_attribute('mc_storage', 'I_INDEX_CLAUSE',
    'tablespace large_index storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mc_lex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mc_lex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/.,');
    ctx_ddl.set_attribute('mc_lex', 'INDEX_STEMS','NONE');
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters (' lexer mc_lex storage mc_storage memory 500M ')
    parallel 16;
    This CREATE INDEX takes about 4 hours to complete on an 8-CPU dual-core machine.
    This is on Oracle 10g (10.2.0.4).
    The reason I am creating the index, as opposed to syncing it, is that the data gets loaded into this table only once a day, and it gets cleared once my keyword analysis is done.
    Any pointers to speed up the index creation will be really appreciated! Thanks in advance!

    My base table has the text that needs to be indexed stored in the "MESSAGE_CONTENT" column, which for now is of RAW datatype. The data stored in this table is in hex representation.
    Some examples -
    MESSAGE_CONTENT
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69F16F
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69C3B16F
    6865792E2C2C77686174277320676F696E67206F6E2E2E2E7066206368616E67277320697320736F6D652072657374617572616E742E2074686579206172652070736564756F2D636F6F6C
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E20706F73742D64617461
    Now with your suggestion I tried to bypass this. So what I did was add a format column to my base table and update it to "TEXT". My database is in UTF8.
    Now when I create the index with the following preferences, it takes less than a minute.
    begin
    Ctx_Ddl.Create_Preference('kwta_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('kwta_storage','I_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'K_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'R_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('kwta_storage', 'N_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M)');
    ctx_ddl.set_attribute('kwta_storage', 'I_INDEX_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mylex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mylex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/,');
    ctx_ddl.set_attribute('mylex','punctuations','.?!');
    ctx_ddl.set_attribute('mylex', 'INDEX_STEMS','NONE');
    ctx_ddl.set_attribute('mylex', 'continuation','\-');
    Ctx_Ddl.Create_Stoplist ( 'mystop' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'is' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'has' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'the' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'that' );
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters ('filter ctxsys.auto_filter format column fmt stoplist mystop lexer mylex storage kwta_storage memory 500M')
    parallel 16;
    When I select distinct tokens from the $I table, I get the following:
    TOKEN_TEXT
    RESTAURANT
    GREEK
    WHATS
    ARE
    DELTA
    ZETA
    ALPHA
    ALPHABET
    EPSILON
    PF
    PSEDUOCOOL
    SOME
    CHANGS
    NIÃO
    ON
    POSTDATA
    SCRIPT
    BEEN
    GAMMA
    GOING
    HEY
    NI
    O
    THEY
    BETA
    Now what I am also wondering is whether the text (MESSAGE_CONTENT column) is being converted to UTF8 (the database character set) by AUTO_FILTER. Is my assumption correct? I am not sure how to validate this; one idea is the spot-check sketched below.
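    (A possible spot-check, as a sketch: UTL_RAW.CAST_TO_VARCHAR2 simply reinterprets the raw bytes in the database character set, so eyeballing a few rows shows whether the stored hex really decodes as the text you expect:
    select utl_raw.cast_to_varchar2(message_content) as decoded_text
    from kwtai
    where rownum <= 5;
    )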
    And would you kindly share why RAW must not be used in this case?
    Thanks for all your pointers!

  • Oracle Text Indexing performance in Unicode database

    Forum folks,
    I'm looking for overall performance thoughts in Text Indexing within a Unicode database. Part of our internal testing suites includes searching on values using contains filters over indexed binary and text documents. We've architected these tests such that they could be run in a suite or on their own, thus, the data is loaded at the beginning of each test and then the text indexes are created and populated prior to running any of the actual testing.
    We have the same tests running on non-Unicode instances of Oracle 11gR2 just fine, but when we run them against a Unicode instance, we almost always see timing issues where the indexes haven't finished populating; thus our tests report that we've only found n hits when we are expecting n + 50 or in some cases n + 150 records to be returned.
    We are just looking for some general information regarding text indexing performance in a Unicode database. Will we need to add sleep time to the tests to allow the indexes to populate? How much time? We would rather not have to create different tests for Unicode vs non-Unicode, but perhaps that is necessary.
    Any insight you could provide would be most appreciated.
    Thanks in advance,
    Dan

    Roger,
    Thanks much for your quick reply...
    When you talk about Unicode, do you mean AL32UTF8?
    --> Yes, this is the Unicode charset we are using.
    Is the data the same in both cases, or are you indexing simple 7-bit ascii data in the one database, and foreign text (maybe Chinese?) in the UTF8 database?
    With the same data, there should be virtually no difference in performance due to the AL32UTF8 database character set.
    --> We have a data generation tool we utilize. For non-Unicode data, we generate using all 256 characters of the ISO-8859-1 set. For our Unicode CLOB data, we generate using only the first 1,000 characters of UTF8 by setting up an array of code points, 0 - 1000. For BLOBs, we have sets of sample Word documents and PDFs that are inserted, then indexed.
    I'm not sure I understand your testing methodology. Do you run ( load-data, index-data, run-queries ) sequentially?
    --> That is correct. We utilize the ctx_ddl package to populate the pending table and then to sync the index....The following is an example of the ddl we generate to create and populate the index:
    create index "DBMEARSPARK_ORA80"."RESRESUMEDOC" on "DBMEARSPARK_ORA80"."RESUME" ("RESUMEDOC") indextype is CTXSYS.CONTEXT parameters(' nopopulate sync (every "SYSTIMESTAMP + INTERVAL ''30'' MINUTE" PARALLEL 2) filter ctxsys.auto_filter ') PARALLEL 2;
    execute ctx_ddl.populate_pending('"DBMEARSPARK_ORA80"."RESRESUMEDOC"',null);
    execute ctx_ddl.sync_index('"DBMEARSPARK_ORA80"."RESRESUMEDOC"',null,null,2);
    If so, there should be no way that the indexes can be half-created. If not, don't you have some check to see if the index creation has finished before running the query test?
    --> Excellent question....is there such a check? I have not found a way to do that yet...
    Were you just lucky with the "non-unicode" tests that the indexing just happened to have always finished by the time you ran the queries?
    --> This is quite possible as well. If there is a check to see if the index is ready, then we could add that into our infrastructure.
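    For reference, a sketch of one such check, using the documented CTX_USER_PENDING view (the index name is taken from the DDL above); the sync has caught up when no rowids remain queued:
    -- rows still waiting to be indexed; zero means the pending queue is empty
    select count(*) as pending_rows
    from ctx_user_pending
    where pnd_index_name = 'RESRESUMEDOC';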
    --> Thanks, again, for responding so quickly.

  • Portal Context Index Creation Performance issue

    Recreating the Portal context indexes takes around 36 hours at our site (after a portal upgrade from 3.0.9.8.2 to 3.0.9.8.5 as per the release notes). I was following Note 158368.1 to rebuild the indexes. Is there anything I can do to tune this?
    thanks
    subu

    Unfortunately indexing is generally a fairly intensive operation and can be time consuming.
    There are some things that you can do to optimize the performance of your database as a whole which may in turn help the performance of your indexing operation. Look at the Performance Guide and Reference book in the database documentation.
    Much of the time spent indexing is taken up by filtering binary documents and fetching content identified by URL attributes. In the case of the latter, it might be worth checking the ctx_user_index_errors view to ensure that you don't have a lot of URL requests that are timing out. The timeout is set to 30 seconds, and if there are a lot of URLs where the host cannot be resolved or the fetch times out, it might be costing a lot of time during the indexing operation. This is often the case if a proxy is required to reach the URLs but the proxy has not been configured correctly.
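    For example (a sketch against the ctx_user_index_errors view mentioned above):
    -- per-document indexing failures, most recent first; URL timeouts show up here
    select err_timestamp, err_textkey, err_text
    from ctx_user_index_errors
    order by err_timestamp desc;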

  • ORA-1652: Index creation error

    At the time of a table reorganisation, index creation was aborted for a secondary index. We then continued our activity, leaving out the index creation, because we were short of system downtime.
    When we ran the SQL statement again while the SAP system was up, it threw an error:
    ORA-1652: unable to extend temp segment by 128 in tablespace PSAPTEMP.
    In the current system we are not ready to increase the datafile size or resize it, as that would increase the size of the database.
    We have the option to create a new temp tablespace and delete it after the index creation, but while doing this, how do we point to the new tablespace created for this activity?
    SAP 4.6C, Oracle 9, Sun Solaris.
    Kindly advise what can be done. Reply ASAP.

    Martin,
    We have discussed this with the team, like so:
    1. Create a new temporary tablespace with the desired size, which should be a minimum of 25 GB.
                CREATE temporary tablespace PSAPTEMP1......
    2. If the original tablespace is the default temporary tablespace, set the new tablespace as the default temporary tablespace in the database.
                 SQL> alter database default temporary tablespace PSAPTEMP1;
    3. Perform the index creation.
    4. Make the old tablespace PSAPTEMP the default temporary tablespace again.
                 SQL> alter database default temporary tablespace PSAPTEMP;
    5. Drop the new tablespace.
                 SQL> drop tablespace PSAPTEMP1 including contents;
    Here I have a question: while switching the default temporary tablespace from PSAPTEMP to the much bigger new PSAPTEMP1 tablespace, will this affect running transactions?
    Is there any impact from switching the default temporary tablespace online?
    We are performing this activity on the online (running) SAP system.
    Thanks,

  • Mandatory criteria of selection / performance impact

    Dear SAP Gurus,
    I have heard that using mandatory selection criteria can be useful to improve query performance.
    Why? Probably because the query then goes through the index, and therefore database access time is optimized...
    If somebody could tell me more about this topic, it would be great!!!

    hi,
    check SAP course BW360; a correctly defined selection order may improve query performance, and dropping an index may have a negative impact on query performance
    ... from bw360 ...
    Indexes on tables can help to improve the reading performance for all types of reading described above.
    The order of the key fields determines the order of the fields for the primary index. The fields most frequently used for selection should be first in the order.
    Additional indexes can be defined for other reading accesses, for example:
    Calendar month for selective deletion / archiving
    Sales Document number for reporting
    In transaction DB05, you can determine the selectivity (distinct values) of table fields (single fields as well as fields in combination) and the distribution of field values. This information helps to define suitable indexes.
    for query performance, take a look
    oss note
    557870 'FAQ BW Query Performance'
    and 567746 'Composite note BW 3.x performance Query and Web'.
    Prakash weblog for good query design
    /people/prakash.darji/blog/2006/01/27/query-creation-checklist
    /people/prakash.darji/blog/2006/01/26/query-optimization
    BW Performance Tuning Knowledge Center - SAP Developer Network (SDN)
    Business Intelligence Performance Tuning [original link is broken]
    hope this helps.

  • Performance impact in Oracle 8i - BLOB vs BFILE

    Hi Guys,
    We are evaluating interMedia to store multimedia objects.
    Does anyone know if storing and retrieving documents in an Oracle database has an impact on standard data stored in the database?
    Is it worth having a separate database instance for storing tables with interMedia objects?
    Pal

    Part 2:
    Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips comprising a total size of 250MB (average size 512K bytes). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes or 250 k bytes for the index and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
    create table video_items (
      video_id   number,
      video_clip ordsys.ordvideo
    )
    -- storage parameters for table in general
    tablespace video1 storage (initial 1M next 10M)
    -- special storage parameters for the video content
    lob (video_clip.source.localdata) store as
      (tablespace video2 storage (initial 260k next 270M)
       disable storage in row nocache nologging chunk 32768);
    Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56K bytes. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space efficient to choose a smaller chunk size, say 8K, to store the data in the lob. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
    Estimating retrieval costs
    Performance testing has shown that Oracle can achieve equivalent and even higher throughput performance for media content retrieval than a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. For the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQLNet performance. These tests were performed on Windows NT 4 SP5.
    Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million bytes/sec to 7 million bytes/sec as the number of clients ranged from 1 to 5) One reason for the very high relative CPU cost at the higher end of performance is that as the 100 Mbs network approaches saturation, the system used more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
    NOTE WELL: The extra CPU cost factors pertain only to media retrieval aspect of the workload. They do not apply to the entire system workload. See example.
    Example: A file based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.

  • Relation between temp tablespace and index creation

    Hi,
    I have my Oracle database (11gR1) on Windows 2008 Server R1 64-bit.
    This is my development database. I have one table which has more than 2 billion rows. The problem I am facing here is that while creating an index on this table I get a temp segment error, while my temp tablespace size is 32 GB.
    Here are my doubts:
    1. What happens in the temp tablespace when an index is created? What is the relation between temp and index creation?
    2. How do I create an index on a huge table?
    3. What is the meaning of LOGGING and NOLOGGING in index creation?
    4. How can we overcome this kind of problem and manage the temp tablespace?
    Thanks & Regards,
    Vikash Chauradia

    Add another tempfile?
    1. What happens in the temp tablespace when an index is created? What is the relation between temp and index creation?
    Index creation needs sort space; how much depends on the size of the index.
    2. How do I create an index on a huge table?
    Create an interim (temporary? :)) huge temporary tablespace for the very purpose.
    http://docs.oracle.com/cd/B28359_01/server.111/b28310/indexes003.htm#i1006643
    3. What is the meaning of LOGGING and NOLOGGING in index creation?
    NOLOGGING means the creation isn't written to the redo logs, so if you need to recover you can't get it back. When using NOLOGGING in a prod environment you might do it for performance during a period of heavy DML, such as a large index creation, and then take a backup afterwards. Common enough.
    4. How can we overcome this kind of problem and manage the temp tablespace?
    Current temp space size = X.
    Is X big enough? If yes, cup of tea; if no, make X bigger.
    It doesn't matter what X is.
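    A sketch of the interim-tablespace approach from point 2 (the file path, sizes and user name are made up; drop the tablespace again once the build is done):
    -- one-off big temp space just for the index build
    create temporary tablespace temp_big
      tempfile '/u01/oradata/dev/temp_big01.dbf' size 30g
      autoextend on next 1g maxsize 100g;
    alter user app_owner temporary tablespace temp_big;
    -- build the index here, then switch back and clean up
    alter user app_owner temporary tablespace temp;
    drop tablespace temp_big including contents and datafiles;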

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) which has around 80 million records; till now we used to do weekly archiving to maintain its size. But now one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
    I was just wondering whether that is true and, moreover, what kind of effect there would be on querying and insertion if the table size is 80 million rows and increasing every day?
    Any comments welcomed.

    What is true is that an Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes as when the average row size is 5K.
    About the performance impact, it depends on the queries that access this table: the more data the queries need to process and/or return as a result set, the more impact this can have on their performance.
    You don't give enough input for a good answer. Ideally you should give the DDL statements that create this table and its indexes, and the SQL queries that are using these tables.
    In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
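    As an illustration of the partitioning point only (a hypothetical range-partitioned layout; the right partition key depends entirely on your queries):
    -- hypothetical: partition by date so queries and purges touch only recent partitions
    create table big_tab (
      id         number,
      created_dt date,
      payload    varchar2(100)
    )
    partition by range (created_dt) (
      partition p2013q1 values less than (date '2013-04-01'),
      partition p2013q2 values less than (date '2013-07-01'),
      partition pmax    values less than (maxvalue)
    );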

  • Will there be any performance impact

    Hi All,
    Currently I'm having a table EMPLOYEE with 1 million records (EMP_ID is the primary key). In a process, I want to insert a new employee ID, use it in the program, and delete it at the end (simplifying changes in the current program). Every day this will take 100K transactions.
    I'm planning to commit only after the delete (i.e. insert -> make some updates -> delete the same row -> commit).
    Will these emp IDs be added to the index and have a performance impact, even though I'm committing the transaction only after deleting the rows?
    Database: Oracle 10g.
    Thanks!!!

    If I understand you correctly, this sounds like a use case for a global temporary table (with the same structure as your employee table).
    As you insert, update and delete the same row within one single transaction (for the convenience of your code, I assume), those rows will only ever be visible to the session that (temporarily) inserts them into the table.
    The design you are suggesting has (at least) the following performance impact:
    1) it will inhibit concurrency
         - other sessions reading the table while transient rows are inserted and are being updated may have to clone some data buffers and apply UNDO to get read consistent clones of the buffers being modified.
         - you may cause buffer busy wait events as you modify the blocks belonging to your employee table while other sessions want to read the blocks affected by these modifications (the severity of this depends on how your 100K transactions are spread throughout the day and what activity runs on the database in parallel).
         - you will increase activity on the hash chain latches protecting the buffers of your employee table (the same applies to the severity as for the previous point).
    2) You increase the amount of REDO generated by your code. Using a global temporary table your 100K transactions will also generate some REDO, but significantly less.
    3) Using the global temporary table approach you don't need to delete the rows once you are done with your processing - you simply define your global temporary table as "ON COMMIT DELETE ROWS".
    4) You'll have to do all the work associated with the index maintenance to insert and delete the corresponding index entry (see my post from  Jun 24, 2013 8:16 PM)
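    A minimal sketch of that alternative (the column names are invented; the point is ON COMMIT DELETE ROWS):
    -- transient rows live only inside the transaction and never touch EMPLOYEE
    create global temporary table emp_scratch (
      emp_id   number primary key,
      emp_name varchar2(100)
    ) on commit delete rows;
    insert into emp_scratch values (999999, 'temp row');
    -- ... program logic works against emp_scratch ...
    commit; -- rows vanish automatically, no delete needed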

  • Index-building strategy for multi-terabyte database

    Running 11g.
    We have about 17 million XML files to load into a brand new database. They are to be indexed with a context index. After the 17 million records are imported, there will be future imports of approximately 30,000 records every two weeks, loaded in batch.
    1. What is the best way to load the 17 million? Should they be loaded in chunks, say 1 M at a time, and then indexed? Or load them all, then index them all at once? (Based on preliminary tests the initial load will take 9 days and the indexing will take a little under 7 days.)
    I vote for doing it in chunks, since the developers will want access to the system periodically during the data load and I want to be able to start and stop the process without causing a catastrophe.
    But I have read that this can introduce fragmentation into the index. Is this really something to worry about, given that my chunks are so large? Would there be any real benefit from doing the entire index operation in one go?
    2. After each of the bi-weekly 30,000-record imports, we will run CTX_DDL.SYNC_INDEX which I estimate will take about 20 minutes to run. Will this cause fragmentation over time? I have read that it is advisable to perform a full index rebuild occasionally but obviously we won't want to do that; that would take days and days. I guess the alternative is to run OPTIMIZE_INDEX
    http://download.oracle.com/docs/cd/B28359_01/text.111/b28304/cddlpkg.htm#i998200
    ...any advice on how often to run this, considering the hugeness of my dataset? Fortunately the data will be read-only (apart from the bi-weekly load), so I'm thinking that there won't be very much fragmentation occurring at all. Am I correct?

    There are two types of fragmentation, one during index creation and one during additional loads. The first can be minimised with some tweaking; the second should not be a big issue because there are only 26 additional loads in one year.
    1)
    You will not have any issues loading the XML and indexing it with Oracle Text as long as you use sensible partitioning. Index the whole dataset in one go; the parallel clause works very well during indexing. Have a look at the initial memory allocation for every thread that is created: I found the default values in 10g and 9i far too small. When you use 100 partitions it will create 100 Oracle Text indexes, which is nothing to be scared of. The more partitions you use, the less index memory is required for each thread, but it adds to fragmentation.
    You can reduce the initial indexing time and fragmentation in various ways:
    Use parallel indexing and partitioning. I used 6 threads over 25 partitions, and it reduced the time to index 8 million XML documents from 2 days to less than 12 hours (XML documents as CLOBs using a user_datastore).
    Tweak the indexing memory to your requirements; somehow indexing is more memory-bound than CPU-bound. The more memory you use, the less fragmentation occurs, and it will finish earlier.
    Use a representative list of stop words. For this I usually query the DR$...$I tables directly (query is from memory):
    SELECT token_text, SUM(token_count) total
    FROM dr$...$i
    WHERE UPPER(token_text) = token_text -- remove XML tags
    GROUP BY token_text
    ORDER BY total DESC;
    Have a look at the top entries to see if you can include them in the stop word list; they will cause trouble later on when querying the index.
    2) We found an index rebuild (drop/create) useful in the following scenarios, but this is more a feeling than solid science:
    1 million XML records, daily loads of around 1 thousand records with many updates => quarterly rebuilds;
    8 million XML records, bi-weekly loads of around 20 thousand records with very few updates => once a year or so.
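    On the OPTIMIZE_INDEX side, a sketch of a periodic call (the index name is hypothetical; optlevel and maxtime are documented CTX_DDL parameters, with maxtime in minutes):
    begin
      ctx_ddl.optimize_index(
        idx_name => 'MY_CTX_IDX',
        optlevel => ctx_ddl.optlevel_full,
        maxtime  => 120);  -- cap each run at 2 hours
    end;
    /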

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table needs several indexes which I must now create, as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
    2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
    3) How will the compressed index effect query performance (vs not compressed) down the line?
    4) In the future I will be doing lots and lots of updates on the indexed columns of the records as well as lots of record deletes and inserts into/out-of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs if they were not?
    5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in Advance

    There are people who would argue that excellent and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20 - 30 times faster index creation.
    2) It is the number of columns to compress by; they may not necessarily have duplicates. For a unique index the default is the number of columns - 1; for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
    5) If you are on 10G, then I would look at temporary tablespace groups, which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid "unable to extend" errors. Adding additional temporary tablespaces will not help, because a user can only use one temporary tablespace at a time, and a parallel index creation is still only one user.
    You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
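    For point 5, a sketch of a temporary tablespace group on 10g (the names and file paths are made up):
    -- two temp tablespaces in one group; parallel slaves can spread across both
    create temporary tablespace temp1
      tempfile '/u01/oradata/db/temp1_01.dbf' size 10g
      tablespace group temp_grp;
    create temporary tablespace temp2
      tempfile '/u02/oradata/db/temp2_01.dbf' size 10g
      tablespace group temp_grp;
    -- the group name can be used wherever a temp tablespace is expected
    alter user loader temporary tablespace temp_grp;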
    HTH
    John
