Index creation during prime time

In a production environment, where a table is accessed by online users, can an index be created on the table at the same time? I mean, will this affect performance, and is it only performance that is affected? Insert, delete, update and select operations will be performed by users at that time. Which of these operations will be affected, and are there any other potential issues? (The table is a partitioned table with 30 partitions.)
Thanks

404045, it would have been nice if you had mentioned the version of Oracle you are working with and what kind of table the index is being built on: heap, IOT, partitioned, non-partitioned.
As Vivek mentioned, you can add the index online if you are on version 9 or later; otherwise, as Syed said, the create index operation will require an exclusive lock on the table and hold up all DML. More than likely, if the table is busy you will not even be able to get the lock to start the create index.
From a couple of tests, online index create and rebuild operations definitely affect the performance of DML on the base table, and that DML in turn affects the time it takes for the index to build.
HTH -- Mark D Powell --
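
To make that suggestion concrete, here is a minimal sketch of an online build on a partitioned table; the object and column names are made up for illustration:
     create index tbl_new_ix on tbl (some_col)   -- hypothetical names
     local                                       -- one index partition per table partition
     online;                                     -- DML against tbl is allowed while the build runs (9i and later)
DML still pays some overhead while the build maintains its journal, and the statement needs a short lock on the table at the start and end of the build.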

Similar Messages

  • Parallel Index creation takes more time...!!

    OS - Windows 2008 Server R2
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    -> In my first attempt, after three hours I got an error:
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    So I increased the size of the temp tablespace to 380 GB, because I expected the first_col index to be about that size.
    -> In my second attempt the index creation is still going even after 17 hours...!!
    Now the usage of temp space is 162 GB ... still it is growing..
    -> I checked EM Advisor Central ADDM :
    it says - The PGA was inadequately sized, causing additional I/O to temporary tablespaces to consume significant database time.
    1. Why does this take so much temp space?
    2. Does CREATE INDEX with parallel processing really need this much time (more than 17 hrs)?
    3. How do I calculate and set the size of the PGA?

    OraFighter wrote:
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    Now the usage of temp space is 162 GB ... still it is growing..
    The entire data set has to be sorted - and the space needed doesn't really vary with degree of parallelism.
    6,574,595,123 index entries with a key size of 10 bytes each (assuming that in your choice of character set one character = one byte) requires per row approximately
    4 bytes row overhead, 10 bytes data, 2 bytes column overhead for data, 6 bytes rowid, 2 bytes column overhead for rowid = 24 bytes.
    For the sorting overheads, using the version 2 sort, you need approximately 1 pointer per row, which is 8 bytes (I assumed you're on 64 bit Oracle on this platform) - giving a total of 32 bytes per row.
    32 * 6,574,595,123 / 1073741824 = 196 GB
    You haven't said how many partitions you have, but you might want to consider creating the index unusable, then issuing a rebuild command on each partition in turn. From "Practical Oracle 8i":
    In the absence of partitioned tables, what would you do if you needed to create a new index on a massive data set to address a new user requirement? Can you imagine the time it would take to create an index on a 450M row table, not to mention the amount of space needed in the temporary segment. It's the sort of job that you schedule for Christmas or Easter and buy a couple of extra discs to add to the temporary tablespace.
    With suitably partitioned tables, and perhaps a suitably friendly application, the scale of the problems isn't really that great, because you can build the index on each partition in turn. This trick depends on a little SQL feature that appears to be legal even though I haven't managed to find it in the SQL reference manual:
         create index big_new_index on partitioned_table (colX)
         local
         UNUSABLE
         tablespace scratchpad
    The key word is UNUSABLE. Although the manual states that you can 'alter' an index to be unusable, it does not suggest that you can create it as initially unusable, nevertheless this statement works. The effect is to put the definition of the index into the data dictionary, and allocate all the necessary segments and partitions for the index - but it does not do any of the real work that would normally be involved in building an index on 450M rows.
    (The trick was eventually documented a couple of years after I wrote the book.)
    Regards
    Jonathan Lewis
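    Sketching the rest of that approach for the table in this thread (the partition names p01, p02, ... are invented, since the original post doesn't give them):
         create index call_group1_ano on call_group1 (a_no)
         local
         UNUSABLE;
         alter index call_group1_ano rebuild partition p01 parallel 8 nologging;
         alter index call_group1_ano rebuild partition p02 parallel 8 nologging;
         -- ... one rebuild per partition, run serially or a few at a time,
         --     depending on how much temp space and CPU you can spare
    Each rebuild only has to sort one partition's worth of rows, so the temp space needed at any one time is a fraction of the 196 GB estimated above.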

  • Index creation takes a long time. Please help to tune the creation time.

    Hi all,
    I am creating an index after using impdp to load the data into the table.
    Below is my index creation command. The index creation takes ~30 minutes.
    Can the forum members suggest how to run this index creation with a parallel clause, or otherwise reduce the time it takes to create the index?
    +++++++++++++++++++++++++++++++++++++++++++++++
    spool incre_HUNTER_PK_1.log
    set lines 200 pages 0 echo on feedback on timing on time on
    alter session enable parallel dml;
    alter session enable parallel ddl;
    CREATE UNIQUE INDEX "HUNTER_PK" ON "HUNTER" ("HUNTER_NUM", "BILL_SEQ", "BILL_VERSION")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS
    STORAGE(INITIAL 4294967296 NEXT 16777216 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "HUNTER_LARGE_02";
    ALTER TABLE HUNTER ADD PRIMARY KEY ("HUNTER_NUM", "BILL_SEQ", "BILL_VERSION") USING INDEX HUNTER_PK;
    ALTER INDEX "HUNTER_PK" NOLOGGING NOPARALLEL;
    spool off
    +++++++++++++++++++++++++++++++++++++++++++++++
    Some other details:
    1. My impdp command imported roughly the following:
    . . imported "HUSTY"."HUNTER" 42.48 GB 218185783 rows
    2. It is a non-partitioned table.
    3. I can't drop the table at the target.
    Regds,
    Kunwar

    Kunwar wrote:
    Can the forum members suggest how to run this index creation with a parallel clause, or otherwise reduce the time it takes to create the index?
    What version of the database?
    Creating indexes in parallel is described in the documentation. Search the on-line documentation for the syntax for create index; if there aren't any specific examples of creating indexes in parallel, do a Google search for "create index parallel".
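    For reference, a hedged sketch of what the parallel version of the CREATE might look like (the degree of 8 here is an assumption, not a recommendation); the script's existing ALTER INDEX ... NOPARALLEL at the end already resets the degree afterwards:
    CREATE UNIQUE INDEX "HUNTER_PK" ON "HUNTER" ("HUNTER_NUM", "BILL_SEQ", "BILL_VERSION")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING COMPUTE STATISTICS
    PARALLEL 8
    TABLESPACE "HUNTER_LARGE_02";
    Because the build is NOLOGGING it generates minimal redo, so take a backup once the import and index builds are finished.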

  • Speed Issues during Prime Time

    Over the last two months my speed during primetime hours, 4 pm to around midnight, has degraded drastically. During these hours I typically get 1 Mbps; during the rest of the hours, from midnight to around 4 pm again, I get the full 7 Mbps that I pay for. I thought DSL wasn't supposed to have these problems and that they were typical for cable companies. I would really like this to be resolved, as it wasn't always like this and now it's to the point where it's unacceptable. I've tried removing my router and connecting both my PC and my Mac directly to the DSL during these times in case it was some sort of virus issue, and that didn't solve my problem. Not really sure what to do.
    So to summarize:
    - Slow internet speeds in primetime hours, 4 pm to 12 am PST.
    - The problem has only been happening in the past three months but has been getting noticeably worse recently.
    - At 4 pm internet speed drops to 1 Mbps; at midnight it goes back to the 7 Mbps I pay for and stays that way until around the same time the next day.
    - Tried hooking the modem up to both a PC and a Mac and removing the router during these slow periods, as well as restarting the modem, with no positive result.
    The modem I have is a Westell G90610015-20 Rev E.
    The router is a Netgear, but again this happens with or without the router.
    Speedtest numbers during these times agree with it, and when I check the modem connection settings it still says I'm at 7000+ down, so I'm not exactly sure what's wrong.

    Please post your Modem Transceiver Statistics. You can obtain them by visiting http://192.168.1.1/ , clicking System Monitoring, Advanced Monitors and then Transceiver Statistics. If you are presented with a Username and Password prompt, try one of the following combinations.
    admin/password
    admin/password1
    admin/admin
    admin/admin1
    Your Verizon username and password
    Additionally, how is your voice service, if you have a phone with Verizon? Do you have static, buzzing, or any other noise on the line that should not be there?
    If you are the original poster (OP) and your issue is solved, please remember to click the "Solution?" button so that others can more easily find it. If anyone has been helpful to you, please show your appreciation by clicking the "Kudos" button.

  • Variable Creation During Run Time

    Is there any way to create new variables at run time based on user input from a command-line prompt? Thanks!

    Depends what you mean .. if you mean the user can type "a = 8", and it creates a variable "a" with the value 8, then no.
    But you can implement something very similar using a HashMap. Use put("a", new Integer(8)), or whatever, and then get it out later with get("a").

  • Re: Slow Speed During Prime Time

    I moved to a property connected to the Leith Exchange from the Newington one. Since then I have noticed a considerable deterioration in my download speed. Netflix will often be very pixelated and other video streaming will take a while to buffer.
    This is inconsistent but does seem to struggle more at peak times.

    Which part of the log is it you are after? It goes back for as far as I can see but the last few lines are:
    20:47:10, 06 Jun. (603780.980000) Wire Lan Port 2 up
    20:47:08, 06 Jun. (603779.000000) Lease for IP 192.168.1.82 renewed by host Apple-​TV (MAC 18:ee:69:0b:16:a9). Lease duration: 1440 min
    20:47:08, 06 Jun. (603779.000000) Device connected: Hostname: Apple-​TV IP: 192.168.1.82 MAC: 18:ee:69:0b:16:a9 Lease time: 1440 min. Link rate: 100.0 Mbps
    20:47:08, 06 Jun. (603778.930000) Lease requested
    20:47:08, 06 Jun. (603778.560000) Device disconnected: Hostname: Apple-​TV IP: 192.168.1.82 MAC: 18:ee:69:0b:16:a9
    20:47:07, 06 Jun. (603777.970000) Wire Lan Port 2 down
    20:45:58, 06 Jun. (603708.570000) Wire Lan Port 2 up
    Results from the speedtest (the results image did not load):
    1. Best Effort Test:
    Download speed achieved during the test was 5.32 Mbps.
    For your connection, the acceptable range of speeds is 4 Mbps - 21 Mbps.
    IP Profile for your line is 10.16 Mbps.
    2. Upstream Test:
    Upload speed achieved during the test was 0.86 Mbps.
    Upstream Rate IP profile on your line is 0.83 Mbps.
    We were unable to identify any performance problem with your service at this time.
    It is possible that any problem you are currently, or had previously experienced may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
    If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.
    I'm not sure about the socket, there is the main socket in the cupboard in the hallway with the option to send it anywhere in the property by connecting it to the corresponding socket. See photo:
    http://imgur.com/VVWXqur
    I cannot connect directly to this socket as there is no power in the cupboard (helpful!). In the living room there is just a normal faceplate, where I have the filter connected and then the broadband.
    I also do not have a landline to do the quiet line test with.

  • Rule Index Creation taking indefinite time

    I am using an Oracle 10.2.0.3 database. I have a model with nearly 250,000 triples. When I try to create a rule index on it using the 'rdfs' rulebase, it takes an indefinite amount of time. I started the script last night and even after 15 hours it was still not complete. When I checked the RDFI_RULEINDEX_INFO view, it showed the status as invalid. I am able to create a rule index with 'rdf' on the same model. Also, I am able to create rule indexes on all other models with 'rdfs'.
    Can you tell me why this is happening? This is a priority task for me, please reply soon.
    Thanks,
    Rajesh Narni.
    Edited by: rajesh narni on Sep 10, 2008 10:59 PM

    Below is the PL/SQL block that I am calling:
    begin
      sdo_rdf_inference.create_rules_index(
        'Sample_rix',
        sdo_rdf_models('SampleModel'),
        sdo_rdf_rulebases('rdfs'));
    end;
    /
    I am able to create rule indexes for all other models with the combination of rdfs and rdf rulebase. For the SampleModel, I could create a rule index with rdf rulebase. But I could not create a rule index with rdfs rulebase and SampleModel.
    Thanks,
    Rajesh.

  • Creation of index taking a lot of time!!

    DB:10.2.0.3.0
    OS:AIX 64 bits
    Application Server:10133
    Hi All,
    When we try to create a normal index, it takes more than 9 hrs, whereas on another similar environment instance it takes no more than 3 to 4 hrs.
    We modified PGA_AGGREGATE_TARGET from 300m to 2000m and set WORKAREA_SIZE_POLICY from 'manual' to 'auto', and now the index creation is faster.
    Could someone explain this?
    Appreciate your time and effort!!
    Regards,

    Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL working areas used by memory-intensive SQL operators (such as sort, group-by, hash-join, bitmap merge, and bitmap create) will be automatically sized. A nonzero value for this parameter is the default since, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater.
    Note: 464045.1 - How To Increase Speed of Index Recreation
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=464045.1
    Note: 102339.1 - Temporary Segments: What Happens When a Sort Occurs
    https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=102339.1
    PGA_AGGREGATE_TARGET
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams157.htm
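    A minimal sketch of the change described in the question, using the values from the post rather than a recommendation, plus a quick way to check the result afterwards:
    ALTER SYSTEM SET workarea_size_policy = AUTO SCOPE = BOTH;   -- assumes an spfile; use SCOPE = MEMORY with a pfile
    ALTER SYSTEM SET pga_aggregate_target = 2000M SCOPE = BOTH;
    SELECT name, value
      FROM v$pgastat
     WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');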

  • More time taken in index creation

    When I create a normal index on one column, or a composite index on 3-4 columns, on a table having say 9-10 million records, the time is too much. I did it in our test database and it took up to an hour to create the index. Please help in explaining why it is taking so much time.
    regards.

    Well, it depends what you consider the time should be. Indexing a table of 10m records is going to take time, but to speed things up a bit you could specify NOLOGGING in the index creation script; you should take a backup afterwards though.
    HTH
    David
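    If you want to see where that hour is going while the build runs, a query like this sketch (run from another session) shows the long-running phases; during a CREATE INDEX they typically show up as table scan and sort operations, though the exact opname text varies by version:
    SELECT sid, opname, target, sofar, totalwork, time_remaining
      FROM v$session_longops
     WHERE totalwork > 0
       AND sofar < totalwork
     ORDER BY start_time;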

  • Issue with index creation of an infocube.

    Hi,
    I have an issue with the creation of indexes for an InfoCube in SAP BI. When I create the indexes using
    a process chain for the cube, the step fails in the process chain. When I try to check the indexes for this
    cube manually, the following message is shown:
    *The recalculation can take some time (depending on data volume)*
    *The traffic light status is not up to date during this time*
    I even tried to repair the indexes using the standard program "SAP_INFOCUBE_INDEXES_REPAIR" in SE38,
    but it leads to a dump in this case.
    Dear experts, with the above issue please suggest.
    Regards,
    Prasad.

    Hi,
    Please check the Performance tab in the cube's Manage screen and try doing a repair index from there.
    This generates a job, so check the job in SM37 and see if it finishes. If it fails, check the job log, which will give you the exact error.
    These indices are F fact table indices, so if nothing works, try activating the cube with the program 'RSDG_CUBE_ACTIVATE' and see if that resolves the issue.
    Let us know the results.

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400 GB in size; it is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database from creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: will index creation on an object during DML operations on the same object have a performance impact on the database? Is there a major difference in the performance impact on the database between creating the index online and not online?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400 GB in size, is highly fragmented, and has about 140 million rows.
    I requested that the applications be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a Pre-Prod database which has an exported and imported copy of the Prod data, so Pre-Prod is a highly defragmented copy of Prod.
    When I created the same index on the same column with NULL, it only took 15 minutes to complete.
    Not sure why on the highly fragmented copy of Prod it took more than 6 hours, compared to the highly defragmented copy on Pre-Prod where the index creation took only 15 minutes.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis
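    One way to answer the single pass / multi pass question for a build that is currently running (a sketch, not part of the original reply; :build_sid stands for the SID of the session doing the CREATE INDEX):
    SELECT sn.name, ss.value
      FROM v$sesstat ss
      JOIN v$statname sn ON sn.statistic# = ss.statistic#
     WHERE ss.sid = :build_sid
       AND sn.name LIKE 'workarea executions%';
    A growing multipass count during the build usually points at the PGA/temp configuration rather than at the table itself.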

  • Create index is taking more time

    Hi,
    One of the concurrent programs is taking more time. We generated the trace file and found that the create index is what is taking the time.
    Below is an extract from the trace file; this type of index creation happens many times in the Oracle standard program.
    Can somebody let me know why there is such a big difference between CPU and elapsed time?
    We are seeing the PX Deq: Execute Reply event as well, which looks like idle time for the database.
    Please let me know which parameter of the database is affecting this.
    CREATE INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL TABLESPACE MSCX
    STORAGE( INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          3          0           0
    Execute      1      0.35     364.82     131168     117945      60324           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.35     364.83     131168     117948      60324           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80 (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    reliable message                                1        0.00          0.00
    enq: KO - fast object checkpoint                1        0.01          0.01
    PX Deq: Join ACK                                6        0.00          0.00
    PX Deq Credit: send blkd                      112        0.00          0.01
    PX qref latch                                   7        0.00          0.00
    PX Deq: Parse Reply                             3        0.00          0.00
    PX Deq: Execute Reply                         604        1.96        364.42
    log file sync                                   1        0.00          0.00
    PX Deq: Signal ACK                              1        0.00          0.00
    latch: session allocation                       2        0.00          0.00
    Regards,

    user12121524 wrote:
    CREATE  INDEX ITEM_CATEGORIES_N2_BD9 ON ITEM_CATEGORIES_BD9(CATEGORY_SET_ID,
    SR_CATEGORY_ID,ORGANIZATION_ID,SR_INSTANCE_ID) PARALLEL  TABLESPACE MSCX
    STORAGE(  INITIAL 40960 NEXT 33554432 PCTINCREASE 0) PCTFREE 10 INITRANS 11
    MAXTRANS 255
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          3          0           0
    Execute      1      0.35     364.82     131168     117945      60324           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.35     364.83     131168     117948      60324           0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 80     (recursive depth: 2)
    Elapsed times include waiting on following events:
    Event waited on                             Times   Max. Wait  Total Waited
    ----------------------------------------   Waited  ----------  ------------
    reliable message                                1        0.00          0.00
    enq: KO - fast object checkpoint                1        0.01          0.01
    PX Deq: Join ACK                                6        0.00          0.00
    PX Deq Credit: send blkd                      112        0.00          0.01
    PX qref latch                                   7        0.00          0.00
    PX Deq: Parse Reply                             3        0.00          0.00
    PX Deq: Execute Reply                         604        1.96        364.42
    log file sync                                   1        0.00          0.00
    PX Deq: Signal ACK                              1        0.00          0.00
    latch: session allocation                       2        0.00          0.00
    What you've given us is the query co-ordinator trace, which basically tells us that the coordinator waited 364 seconds for the PX slaves to tell it that they had completed their tasks ("PX Deq: Execute Reply" time). You need to look at the slave traces to find out where they spent their time - and that's probably not going to be easy if there are lots of parallel pieces of processing going on.
    If you want to do some debugging (in general) one option is to add a query against V$pq_tqstat after each piece of parallel processing and log the results to a named file, or write them to a table with a tag, as this will tell you how many slaves were involved, how, and what the distribution of work and time was.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan
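    For reference, a minimal sketch of the V$PQ_TQSTAT check suggested above; it has to be run in the same session, immediately after the parallel statement completes:
    SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
      FROM v$pq_tqstat
     ORDER BY dfo_number, tq_id, server_type, process;
    A badly skewed num_rows across the slave processes is the usual sign that the work was not distributed evenly.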

  • Index creation in BKPF table

    Hello Gurus,
    I have bad performance in the IDCP transaction. I read SAP Note 511819, and it recommends creating an index on the BKPF table with the fields:
    'MANDT'
    'BUKRS'
    'XBLNR'
    But I see in the system that the table has an index with the fields:
    MANDT
    BUKRS
    BSTAT
    XBLNR
    Is the index creation necessary if an index with these fields already exists?
    The table has 150 million rows and 9 indexes.
    What is your suggestion?
    Best regards,
    Ernesto Castro.

    Hi,
    If you already have index 001 with the fields MANDT, BUKRS, BSTAT, XBLNR,
    then it's not necessary to create another index with the fields MANDT, BUKRS, XBLNR.
    You only need to create index 001 on the following tables, as described in the note:
    table :  VBRK   fields : MANDT  XBLNR
    table :  LIKP   fields: MANDT XBLNR
    Also, activate this index only during the least critical time (when maximum free resources are available).
    regards,
    kaushal

  • Systemcopy using R3load - Index creation VERY slow

    We exported a BW 7.0 system using R3load (newest tools and SMIGR_CREATE_DDL) and are now importing it into the target system.
    Source database size is ~ 800 GB.
    The export was running a bit more than 20 hours using 16 parallel processes. The import is still running with the last R3load process. Checking the logs I found out that it's creating indexes on various tables:
    (DB) INFO: /BI0/F0TCT_C02~150 created#20100423052851
    (DB) INFO: /BIC/B0000530000KE created#20100423071501
    (DB) INFO: /BI0/F0COPC_C08~01 created#20100423072742
    (DB) INFO: /BI0/F0COPC_C08~04 created#20100423073954
    (DB) INFO: /BI0/F0COPC_C08~05 created#20100423075156
    (DB) INFO: /BI0/F0COPC_C08~06 created#20100423080436
    (DB) INFO: /BI0/F0COPC_C08~07 created#20100423081948
    (DB) INFO: /BI0/F0COPC_C08~08 created#20100423083258
    (DB) INFO: /BIC/B0000533000KE created#20100423101009
    (DB) INFO: /BIC/AODS_FA00~010 created#20100423121754
    As one can see on the timestamps the creation of one index can take an hour or more.
    x_cons is showing constant CrIndex reading in parallel; however, the throughput is not more than 1 - 2 MB/sec. Those index creation processes have been running for over two days now (> 48 hours), and since the .TSK files don't mention those indexes any more I wonder how many of them are still to be created and how long this will take.
    The whole import was started at "2010-04-20 12:19:08" (according to import_monitor.log), so it has been running for more than three days now with four parallel processes. The target machine has 4 CPUs and 16 GB RAM (CACHE_SIZE is 10 GB). The machine is 98 - 99 % idle though.
    I have three questions:
    - Why does index creation take such a long time? I'm aware of the fact that the cache may not be big enough to hold all the data, but that speed is far from acceptable. Doing a Unicode migration, even in parallel, will lead to a downtime that may not be acceptable to the business.
    - Why are the indexes not created first and then filled with the data? Each savepoint may take longer, but I don't think it would take that long.
    - How can I find out which indexes are still to be created, and how can I estimate the average runtime of that?
    Markus

    Hi Peter,
    I would suggest creating an SAP ticket for this, because these kinds of problems are quite difficult to analyze.
    But let me describe the index creation within MaxDB. If only one index creation process is active, MaxDB can use multiple Server Tasks (one for each Data Volume) to possibly increase the I/O throughput. This means the more Data Volumes you have configured, the faster the parallel index creation process should be. However, this hugely depends on your I/O system being able to handle an increasing amount of read/write requests in parallel. If one index creation process is running using parallel Server tasks, all further indexes to be created at that time can only utilize one single User Task for the I/O.
    The R3load import process assumes that the indexes can be created fast if all necessary base table data is still present in the Data Cache. This mostly applies to small tables, up to table sizes that take up a certain amount of the Data Cache. All indexes for these tables are created right after the table has been imported, to make use of the fact that all the data needed for index creation is still in the cache. Many indexes may be created simultaneously here, but only one index at a time can use parallel Server Tasks.
    If a table is too large in relation to the total database size, then its indexes are being queued for serial index creation to be started when all tables were imported. The idea is that the needed base table data would likely have been flushed out of the Data Cache already and so there is additional I/O necessary rereading that table for index creation. And this additional I/O would greatly benefit from parallel Server Tasks accessing the Data Volumes. For this reason, the indexes that are to be created at the end are queued and serialized to ensure that only one index creation process is active at a time.
    Now you mentioned that the index creation process takes a lot of time. I would suggest (besides opening an OSS ticket) starting the MaxDB tool 'Database Analyzer' with an interval of 60 seconds configured during the whole import. In addition, you should activate the 'time measurement' to get a reading on the I/O times. Plus, ensure that you have many Data Volumes configured and that your I/O system can handle that additional load. E.g. it would make no sense to have 25 Server Tasks all writing to a single local disk; I would assume that the disk would become a bottleneck...
    Hope my reply was not too confusing,
    Thorsten

  • Why index creation is slower on the new server?

    Hi there,
    Here is a bit of background/env info:
    Existing prod server (physical): Oracle 10gR2 10.2.0.5.8 on ASM
    RAM: 96GB
    CPUs: 8
    RHEL 5.8 64bit
    Database size around 2TB
    New server:
    VMWare VM with Oracle 10gR2 10.2.0.5.8 on ASM
    RAM 128GB
    vCPUs: 16
    RHEL 5.8 64bit
    Copy of prod DB (from above server) - all init param are the same
    I noticed that index creation is slower on this server. I ran the following query:
    SELECT name c1, cnt c2, DECODE (total, 0, 0, ROUND (cnt * 100 / total)) c3
      FROM (SELECT name, VALUE cnt, (SUM (VALUE) OVER ()) total
              FROM v$sysstat
             WHERE name LIKE 'workarea exec%')
    C1                                       C2          C3
    workarea executions - optimal     100427285         100
    workarea executions - onepass          2427           0
    workarea executions - multipass           0           0
    The following bitmap index takes around 40 mins on the prod server, while it takes around 2 hrs on the VM.
    CREATE BITMAP INDEX MY_IDX ON
    MY_HIST(PROD_YN)  TABLESPACE TS_IDX PCTFREE 10
    STORAGE(INITIAL 12582912 NEXT 12582912 PCTINCREASE 0 ) NOLOGGING
    This index is created during a batch process and the dev team is complaining about the slowness of the batch on the new server. I have found this one statement responsible for some of the grief. There may be more and I am investigating.
    I know that adding the "parallel" option may speed it up, but I want to find out why it is slow on the new server.
    I tried creating a simple index on a large table and it took 2 min in prod and 3.5 min on this VM. So I guess index creation is slower on this VM in general. DML (select/insert/delete/update) seems to run with better elapsed times.
    Any clues what might be causing this slowness in index creation?
    Best regards

    I have been told that the SAN in use by the VM has a capacity of 10K IOPS. Not sure if this info helps. I don't know more than this about the storage.
    What else do I need to find out? Please let me know - I'll check with my Sys Admin and update the thread.
    Best regards
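    One thing worth capturing on both servers while the same index build runs (a sketch, not something suggested in this thread) is the I/O wait profile, which usually shows whether the VM's storage path is the difference:
    SELECT event, total_waits,
           ROUND(time_waited_micro / 1e6) AS seconds_waited
      FROM v$system_event
     WHERE event IN ('db file scattered read', 'direct path read',
                     'direct path read temp', 'direct path write temp')
     ORDER BY time_waited_micro DESC;
    Snapshot it before and after the CREATE INDEX on each server and compare the deltas; an AWR or Statspack report over the same window gives the same information with less effort.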
