Index creation in BKPF table

Hello Gurus,
I have bad performance in transaction IDCP. I read SAP Note 511819, which recommends creating an index on table BKPF with the fields:
MANDT
BUKRS
XBLNR
However, I see that the table already has an index with the fields:
MANDT
BUKRS
BSTAT
XBLNR
Is it still necessary to create the index if one already exists with these fields?
The table has 150 million rows and 9 indexes.
What is your suggestion?
Best regards,
Ernesto Castro.

Hi,
If you already have index 001 with the fields MANDT, BUKRS, BSTAT, XBLNR, then it is not necessary to create another index with MANDT, BUKRS, XBLNR.
You only need to create index 001 on the following tables, as described in the note:
table: VBRK   fields: MANDT, XBLNR
table: LIKP   fields: MANDT, XBLNR
Also, activate this index only during the least critical time (when the maximum free resources are available).
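For reference, the database-level equivalent of the two indexes from the note would look roughly like this (a sketch only, assuming Oracle and SAP's usual "<table>~<index id>" naming; in practice you define the index in SE11 and build it on the database via SE14 so that the ABAP Dictionary stays consistent):
-- hypothetical DDL equivalent of the secondary indexes described in the note
CREATE INDEX "VBRK~001" ON VBRK (MANDT, XBLNR);
CREATE INDEX "LIKP~001" ON LIKP (MANDT, XBLNR);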
regards,
kaushal

Similar Messages

  • Is it worth creating secondary index on BKPF table ?

    Hello,
    One of my clients is using ECC version 5.0. I have a requirement wherein I need to fetch the data from BKPF table based on AWKEY, BUKRS and GJAHR. There is no standard secondary index available.
    I have decided to create a secondary index on these fields in the following order:
    1) MANDT
    2) AWKEY
    3) BUKRS
    4) GJAHR
I know that creating secondary indexes does improve performance during data retrieval. But when I checked the total number of entries in the BKPF table in the production system, there are more than 20 lakh (2 million) records.
I am worried that creating the secondary index will create another structure of a similarly large size in production, with the data sorted on the above fields. Also, the RAM of the production system is 6 GB.
Please suggest whether creating the secondary index is a good measure, or should I recommend that the client increase the system RAM?
    Regards,
    Danish.
    Edited by: Thomas Zloch on Oct 3, 2011 3:01 PM

    Hi,
    Secondary Index on BKPF-AWKEY has been successfully created in production. The report which used to timeout in foreground as well as in background is now executing in less than 10 seconds !!
Client is very much satisfied with this. But there is one problem we are facing now: when a user changes an existing billing document via VF02, on SAVE the system takes a very long time to save the changes.
We monitored the processes via SM50 and found that there was a sequential read on the BKPF table.
Before transporting the index to production, the system took no more than about 5 seconds to save a billing document. But now the system takes more than 20 minutes just to save the billing document via VF02.
    I really don't know what has gone wrong. I can't figure out if I have missed any step while creating the index.
    I did the following,
1) Created a secondary index on BKPF, saved it in the transport request and activated it. Since the index did not exist in the Oracle database, I activated and adjusted the table via SE14. Now the index exists in the database as well. Working perfectly in development.
2) Imported the transport request to production. Checked in SE11: the index was there and active. The index also existed in the Oracle database.
Have I missed anything? Is it required to activate and adjust the database via SE14 in production too?
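For what it's worth, one way to double-check at the database level which indexes exist on BKPF and which columns they cover is a query along these lines (a rough sketch, assuming Oracle; the schema name is an assumption, adjust it to your SAP schema):
SELECT index_name, column_position, column_name
  FROM dba_ind_columns
 WHERE table_owner = 'SAPR3'   -- hypothetical schema name
   AND table_name  = 'BKPF'
 ORDER BY index_name, column_position;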
    Regards,
    Danish.

  • Indexes for bkpf table

    Hi all ,
I created secondary indexes as per the thread given by Vikrant, but how do I use this index in a program?
Please reply.
Bye,
Jyotsna

    Continue your thread here: error when selecting bkpf table
    Thread locked.

  • Need help in using a Secondary Index of a GL Table

    Hello ABAPers,
    Good morning/afternoon!
    I  was wondering if anyone could give me a hand on this issue.
To give you a brief background, I need to retrieve open and cleared items in the last 24 hours, that is, from 11 am yesterday till 10:59:59 today. The approach is to pull the documents by Entry Date (CPUDT) and Entry Time (CPUTM) from BKPF and then, using BELNR, retrieve the postings in BSIS and BSAS, as you can see in the piece of code below. However, this process takes very long to run.
    Below is the code snippet.
    We use 46C and MS SQL database.
    The index fields of BKPF~ZB2 are: MANDT, CPUDT and BLART.
The index fields of BSIS~Z01 are: BUKRS, GJAHR, HKONT, PRCTR, GSBER.
    TYPES: BEGIN OF i_bseg,
           bukrs LIKE bkpf-bukrs,
           hkont LIKE bsis-hkont,
           gjahr LIKE bkpf-gjahr,
           belnr LIKE bkpf-belnr,
           buzei LIKE bsis-buzei,
           bldat LIKE bsis-bldat,
           budat LIKE bsis-budat,
           waers LIKE bsis-waers,
           monat LIKE bsis-monat,
           shkzg LIKE bsis-shkzg,
           gsber LIKE bsis-gsber,
           dmbtr LIKE bsis-dmbtr,
           wrbtr LIKE bsis-wrbtr,
           cpudt LIKE bkpf-cpudt,
           cputm LIKE bkpf-cputm,
           END OF i_bseg.
    *Internal Tables
    DATA:
          i_bkpf TYPE TABLE OF bkpf,
          i_bseg TYPE STANDARD TABLE OF i_bseg.
* Select records from yesterday's entry date.
        SELECT  bukrs belnr gjahr monat budat cpudt cputm waers
                FROM bkpf INTO  CORRESPONDING FIELDS OF TABLE i_bkpf
                WHERE ( cpudt = so_date2-low AND
                        cputm >= so_cputm-low AND
                        cputm <= so_cputm-high )
                OR    ( cpudt = so_date2-high AND
                        cputm >= so_time2-low AND
                        cputm <= so_time2-high )
                %_HINTS MSSQLNT 'INDEX("BKPF" "BKPF~ZB2")'.
        IF sy-subrc NE 0.
          MESSAGE e006 WITH 'BKPF'.
        ENDIF.
* Select Open Items ***
        SELECT bukrs hkont gjahr belnr buzei  budat bldat
               waers monat shkzg gsber  dmbtr wrbtr
               FROM bsis
               APPENDING CORRESPONDING FIELDS OF TABLE i_bseg
               FOR ALL ENTRIES IN i_bkpf
               WHERE bukrs = i_bkpf-bukrs
               AND gjahr = i_bkpf-gjahr
               AND hkont IN so_saknr
               AND belnr = i_bkpf-belnr
               %_HINTS MSSQLNT 'INDEX("BSIS" "BSIS~Z01")'.
* Also get any cleared items ***
        SELECT bukrs hkont gjahr belnr buzei  budat bldat
               waers monat shkzg gsber  dmbtr wrbtr
               FROM bsas
               APPENDING CORRESPONDING FIELDS OF TABLE i_bseg
               FOR ALL ENTRIES IN i_bkpf
               WHERE bukrs IN so_bukrs
               AND   hkont IN so_saknr
               AND   gjahr = sp_gjahr
               AND   belnr = i_bkpf-belnr.
    Many thanks,
    Rosemarie

    Thanks for your response.
The date and time fields are defined in my selection screen and the default values are pre-set in INITIALIZATION, so the dates and times are already stored as ranges.
The problem is why the process is taking so long. Is the syntax for the INDEX hint correct?
    PARAMETERS:p_not RADIOBUTTON GROUP pass.
    SELECT-OPTIONS:
      so_date2 FOR bkpf-cpudt,         " Entry Date
      so_cputm FOR sy-uzeit,         " Entry Time
      so_time2 for sy-uzeit.        " Entry Time 2
    INITIALIZATION.
    RANGES: so_time2 FOR sy-uzeit.
      g_date = sy-datum - 1.
      g_year = sy-datum+0(4).
      sp_gjahr = g_year.
      CONCATENATE sp_gjahr '0101' INTO g_from_date.
      g_from_time = '110000'.
      g_24_hour   = '235959'.
      g_time_diff = g_24_hour - g_from_time.
      g_to_time = g_from_time + g_time_diff.
      g_to_time2 = g_from_time - 1.
      MOVE: 'I'      TO so_cpudt-sign,
            'BT'     TO so_cpudt-option,
            g_from_date   TO so_cpudt-low,
            sy-datum TO so_cpudt-high.
      APPEND so_cpudt.
      MOVE: 'I'      TO so_date2-sign,
            'BT'     TO so_date2-option,
            g_date   TO so_date2-low,
            sy-datum TO so_date2-high.
      APPEND so_date2.
      MOVE: 'I'         TO so_cputm-sign,
            'BT'        TO so_cputm-option,
            g_from_time TO so_cputm-low,
            g_to_time   TO so_cputm-high.
      APPEND so_cputm.
      MOVE: 'I'         TO so_time2-sign,
            'BT'        TO so_time2-option,
            '000000'    TO so_time2-low,
             g_to_time2 TO so_time2-high.
      APPEND so_time2.

  • Help needed in index creation and its impact on insertion of records

    Hi All,
    I have a situation like this, and the process involves 4 tables.
Among the 4 tables, 2 tables have around 30 columns each, and the other 2 have 15 columns each.
This process contains a validation and an insert procedure.
I have already created some 8 indexes for one table, and an average of around 3 indexes on the other tables.
Now the situation is that I have a select statement in the validation procedure which is based on all 4 tables.
When I try to run that select statement, it takes around 30 seconds, and when I check the plan it shows around 21k of memory.
Now I am in a situation where I need to create new indexes on all the tables for the purpose of this select statement.
As for how often this select statement executes: for around 1000 inserts into the table, the select statement gets executed around 200 times, and the record count of these tables would be around one crore (10 million).
Will the number of indexes created on a table impact insert statement performance, or can we create as many indexes as we want on a table? Which is the best practice?
    Please guide me in this !!!
    Regards,
    Shivakumar A

    Hi,
    index creation will most definitely impact your DML performance because when inserting into the table you'll have to update index entries as well. Typically it's a small overhead that is acceptable in most situations, but the only way to say definitively whether or not it is acceptable to you is by testing. Set up some tests, measure performance of some typical selects, updates and inserts with and without an index, and you will have some data to base your decision on.
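A minimal sketch of such a test, assuming SQL*Plus and purely hypothetical table, column and index names:
-- time a representative batch of inserts without the index ...
SET TIMING ON
INSERT INTO test_tab SELECT * FROM source_tab WHERE ROWNUM <= 100000;
ROLLBACK;
-- ... then repeat with the candidate index in place and compare
CREATE INDEX test_tab_ix1 ON test_tab (col1, col2);
INSERT INTO test_tab SELECT * FROM source_tab WHERE ROWNUM <= 100000;
ROLLBACK;
DROP INDEX test_tab_ix1;
Do the same for the typical selects and updates before deciding.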
    Best regards,
    Nikolay

  • Fast index creation suggestions wanted

    Hi:
    I've loaded a table with a little over 100,000,000 records. The table has several indexes which I must now create. Need to do this as fast as possible.
    I've read the excellent article by Don Burleson (http://www.dba-oracle.com/oracle_tips_index_speed.htm) but still have a few questions.
    1) If the table is not partitioned, does it still make sense to use "parallel"?
2) The digit(s) following "compress" indicate the number of consecutive columns at the head of the index that have duplicates. True?
3) How will the compressed index affect query performance (vs. not compressed) down the line?
4) In the future I will be doing lots and lots of updates on the indexed columns of the records, as well as lots of record deletes and inserts into/out of the table. Will these updates/inserts/deletes run faster or slower given that the indexes are compressed vs. if they were not?
5) In order to speed up the sorting, is it better to add datafiles to the TEMP tablespace or to create more TEMP tablespaces (remember, running "parallel")?
    Thanks in Advance

    There are people who would argue that excellent and Mr. Burleson do not belong in the same sentence.
    1) Yes, you can still use parallel (and nologging) to create the index, but don't expect 20 - 30 times faster index creation.
2) It is the number of columns to compress by; they may not necessarily have duplicates. For a unique index the default is the number of columns - 1, for a non-unique index the default is the number of columns.
    3) If you do a lot of range scans or fast full index scans on that index, then you may see some performance benefit from reading fewer blocks. If the index is mostly used in equality predicates, then the performance benefit will likely be minimal.
    4) It really depends on too many factors to predict. The performance of inserts, updates and deletes will be either
    A) Slower
    B) The same
    C) Faster
5) If you are on 10G, then I would look at temporary tablespace groups, which can be beneficial for parallel operations. If not, then allocate as much memory as possible to sort_area_size to minimize disk sorts, and add space to your temporary tablespace to avoid "unable to extend" errors. Adding additional temporary tablespaces will not help, because a user can only use one temporary tablespace at a time, and the parallel index creation is only one user.
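Putting points 1 and 2 together, the create statement itself would look something like this (a sketch; table, column and degree of parallelism are hypothetical, adjust them to your system):
CREATE INDEX big_tab_ix1 ON big_tab (col1, col2)
  PARALLEL 8 NOLOGGING COMPRESS 1;
-- reset the attributes afterwards if you do not want them to persist
ALTER INDEX big_tab_ix1 NOPARALLEL;
ALTER INDEX big_tab_ix1 LOGGING;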
You might want to do some searching at Tom Kyte's site http://asktom.oracle.com for some more responsible answers. Tom and Don have had their disagreements in the past, and in most of them my money would be on Tom to be correct.
    HTH
    John

  • Creating a bit map index on a partitioned table

    Dear friends,
I am trying to create a bitmap index on a partitioned table but am receiving the following ORA error. Can you please let me know how to create a local bitmap index, as the message suggests?
    ERROR at line 1:
    ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables
Putting the keyword LOCAL in front leads to a syntax error.
    Thanks in advance !!
    Somnath

    ORA-25122 Only LOCAL bitmap indexes are permitted on partitioned tables
    Cause: An attempt was made to create a global bitmap index on a partitioned table.
    Action: Create a local bitmap index instead
    Example of a Local Index Creation
    CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;
The example is a b-tree index, but I think the same LOCAL clause will work for a bitmap index as well.
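For instance, a local bitmap version on the same (partitioned) employees table would be, roughly (JOB_ID here is just a hypothetical low-cardinality column):
CREATE BITMAP INDEX employees_job_bix ON employees (job_id) LOCAL;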

  • Error when selecting bkpf table

    select single belnr
         into it_data-belnr
         from bkpf
         where awkey = it_vbrp1-vbeln and
               blart = 'RV'.
I am using the above code to select from the BKPF table. The key fields are actually not useful in this case, so how do I use an index?
Please reply.
    Thanks,
    Jyotsna

    Hi,
    select single belnr
    into it_data-belnr
    from bkpf
    where awkey = it_vbrp1-vbeln and
    blart = 'RV'.
In the above query, what is it_vbrp1-vbeln? If it_vbrp1 is an internal table, you are missing something like FOR ALL ENTRIES IN it_vbrp1. Check it once.

  • CONTEXT index creation - performance!

    Hi,
I have a table with about 5 million rows. The content that needs to be indexed is of the RAW datatype. The average size (length) of this field is about 50 characters (it could be more).
I am trying to index this column to perform a keyword search. Details are furnished below.
    table:
SQL> desc kwtai
Name                   Null?    Type
TSD_HH24                        DATE
COUNTRY_CODE_ALPHA_2            VARCHAR2(2)
ONETWORK                        NUMBER(6)
OADDRESS                        VARCHAR2(25)
DNETWORK                        NUMBER(6)
DADDRESS                        VARCHAR2(25)
MESSAGE_LENGTH                  NUMBER
MESSAGE_CONTENT                 RAW(2000)
    Preferences:-
    begin
    Ctx_Ddl.Create_Preference('mc_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('mc_storage','I_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'K_TABLE_CLAUSE',
    'tablespace large_index storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('mc_storage', 'R_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('mc_storage', 'N_TABLE_CLAUSE',
    'tablespace large_index storage (initial 1M)');
    ctx_ddl.set_attribute('mc_storage', 'I_INDEX_CLAUSE',
    'tablespace large_index storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mc_lex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mc_lex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/.,');
    ctx_ddl.set_attribute('mc_lex', 'INDEX_STEMS','NONE');
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters (' lexer mc_lex storage mc_storage memory 500M ')
    parallel 16;
This create index takes about 4 hours to complete on an 8-CPU dual-core machine.
This is on Oracle 10g (10.2.0.4).
The reason I am creating the index, as opposed to syncing it, is that the data gets loaded into this table only once a day, and it gets cleared once my keyword analysis is done.
Any pointers to speed up the index creation will be really appreciated! Thanks in advance!

    My base table has the text that needs to be indexed stored in the "MESSAGE_CONTENT" column which for now is RAW data type. The data stored in this table are in hex representation.
    Some examples -
    MESSAGE_CONTENT
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69F16F
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461
    616C70686120626574612067616D6D612064656C746120657073696C6F6E207A657461206E69C3B16F
    6865792E2C2C77686174277320676F696E67206F6E2E2E2E7066206368616E67277320697320736F6D652072657374617572616E742E2074686579206172652070736564756F2D636F6F6C
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E
    54686520477265656B20616C7068616265742069732074686520736372697074207468617420686173206265656E20706F73742D64617461
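For reference, these values are simply hex dumps of plain text; a quick way to see the underlying text (a sketch using the standard UTL_RAW package) is:
-- decodes to 'alpha beta gamma'
SELECT UTL_RAW.CAST_TO_VARCHAR2(HEXTORAW('616C70686120626574612067616D6D61')) FROM dual;
-- or directly against the table:
SELECT UTL_RAW.CAST_TO_VARCHAR2(message_content) FROM kwtai WHERE ROWNUM <= 5;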
Now, with your suggestion, I tried to bypass this. What I did was add a format column to my base table and update it to "TEXT". My database is in UTF8.
Now, when I create the index with the following preferences, it takes less than a minute.
    begin
    Ctx_Ddl.Create_Preference('kwta_storage', 'BASIC_STORAGE');
    ctx_ddl.set_attribute('kwta_storage','I_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'K_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 10M next 10M)');
    ctx_ddl.set_attribute('kwta_storage', 'R_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) lob (data) store as (cache)');
    ctx_ddl.set_attribute('kwta_storage', 'N_TABLE_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M)');
    ctx_ddl.set_attribute('kwta_storage', 'I_INDEX_CLAUSE',
    'tablespace TEXT_INDEX storage (initial 1M) compress 2');
    ctx_ddl.create_preference('mylex', 'BASIC_LEXER');
    ctx_ddl.set_attribute('mylex', 'skipjoins', '_-"''`~!@#$%^&*()+=|}{[]\:;<>?/,');
    ctx_ddl.set_attribute('mylex','punctuations','.?!');
    ctx_ddl.set_attribute('mylex', 'INDEX_STEMS','NONE');
    ctx_ddl.set_attribute('mylex', 'continuation','\-');
    Ctx_Ddl.Create_Stoplist ( 'mystop' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'is' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'has' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'the' );
    Ctx_Ddl.Add_Stopword ( 'mystop', 'that' );
    end;
    create index kwtaidx on kwtai (message_content) indextype is ctxsys.context
    parameters ('filter ctxsys.auto_filter format column fmt stoplist mystop lexer mylex storage kwta_storage memory 500M')
    parallel 16;
When I select distinct tokens from the $I table, I get the following:
    TOKEN_TEXT
    RESTAURANT
    GREEK
    WHATS
    ARE
    DELTA
    ZETA
    ALPHA
    ALPHABET
    EPSILON
    PF
    PSEDUOCOOL
    SOME
    CHANGS
    NIÃO
    ON
    POSTDATA
    SCRIPT
    BEEN
    GAMMA
    GOING
    HEY
    NI
    O
    THEY
    BETA
Now what I am also wondering is whether the text (MESSAGE_CONTENT column) is being converted to UTF8 (the database character set) by AUTO_FILTER. Is my assumption correct? I am not sure how to validate this.
Also, would you kindly share why RAW must not be used in this case?
    Thanks for all your pointers!

  • Create an index on a huge table

    hi gurus
I am going to create an index on a very large table (194 GB), and the temporary tablespace size is 80 GB.
I am afraid that during the index creation the temporary tablespace will not be enough to hold the data needed to create the index, because I only have 80 GB in the temporary tablespace. How do I estimate the size I need to create such a large index, and is there an efficient way to do such an index creation?
Thank you in advance.

Jozsef wrote:
Hi there,
I have done similar things a couple of times before and I have used a pretty good method to obtain a rough (and it was really rough, but good enough) estimation.
Sorry folks, it does not work properly for create index... this method only works for SELECT statements, SO IGNORE IT :(
    It nearly works in 10g:
    | Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE INDEX STATEMENT |       | 10000 | 40000 |    10   (0)| 00:00:01 |
    |   1 |  INDEX BUILD NON UNIQUE| T1_I1 |       |       |            |          |
    |   2 |   SORT CREATE INDEX    |       | 10000 | 40000 |            |          |
    |   3 |    TABLE ACCESS FULL   | T1    | 10000 | 40000 |     6   (0)| 00:00:01 |
    Note
   - estimated index size: 196K bytes
The "Note" tells you Oracle's estimate of the final space allocation needed for the index. There are various reasons why the estimate is not very accurate, and why it's not a good estimate of the space requirement in the TEMP tablespace, but it gives you a figure that is probably in the right ballpark (a factor of 2 out, either way, probably).
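For completeness, that output comes from explaining the create statement, along these lines (T1 and T1_I1 are the names from the example; the column is hypothetical):
EXPLAIN PLAN FOR CREATE INDEX t1_i1 ON t1 (small_col);
SELECT * FROM TABLE(dbms_xplan.display);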
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke

  • Index creation online - performance impact on database

    hi,
    I have oracle 11.1.0.7 database running on Linux as 3 node RAC.
    I have a huge table which has more than 255 columns and is about 400GB in size which is also highly fragmented because of constant DML activities.
    Questions:
1. For now I am trying to create an index online while the business applications are running.
Will there be any performance impact on the database from creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically my question is: will index creation on an object during DML operations on the same object have a performance impact on the database? Is there a major difference in performance impact between creating the index online and not online?
2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400 GB in size, is highly fragmented and has about 140 million rows.
I requested that the applications be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
We have a pre-prod database which holds an exported and imported copy of the prod data, so the pre-prod is a highly defragmented copy of prod.
When I created the same index on the same column with NULLs, it only took 15 minutes to complete.
I am not sure why, on the highly fragmented copy in prod, it took more than 6 hours, compared to the highly defragmented copy in pre-prod, where the index creation took only 15 minutes.
    Any thoughts would be helpful.
    Thanks.
    Phil.

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
Did you check whether the index creation ended up in-memory, single pass or multi pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
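For reference, the online build being discussed is simply the ONLINE keyword on the create statement, e.g. (column name and degree of parallelism are hypothetical here):
CREATE INDEX tbl_col1_ix ON tbl (col1) ONLINE PARALLEL 4 NOLOGGING;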
    Regards
    Jonathan Lewis

  • Index creation of 0figl_o02 taking too long.

    Hi,
We load approximately 0.5 million records daily to ODS 0FIGL_O02 and recreate the index using the function module RSSM_PROCESS_ODS_CREA_INDEXES.
The index creation job used to last 3-4 hours six months ago, but now the job runs for 6 hours. Is there a way to decrease the job time?
The number of records in the active table of the ODS is 424 million.

Hi,
This DSO is based on DataSource 0FI_GL_4, which is delta enabled.
Do you mean to say that you are receiving 0.5 million records daily?
If yes, then there is not much that you can do, as the program will try to create the incremental index and will have to work against the current index of 240 million records. One thing you can do as a regular monthly activity is to completely delete the index and recreate it (this may take a long time, but it would correct some of the corrupt indexes).
The SAP note below might help you:
Note 1413496 - DB2-z/OS: BW: Building indexes on active DSO data
If you are not using this DSO for reporting or lookups, then please do not create secondary indexes.
    regards,
    Arvind.
    Edited by: Arvind Tekra on Aug 25, 2011 5:18 PM

  • Spatial query index creation fails with ORA-13282: failure on initializatio

    Hi,
    I have an Oracle 10g 10.2.0.5.0 database newly installed. Spatial index creation fails:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13282: failure on initialization of coordinate transformation
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
    The script I am trying to run is:
    Insert into USER_SDO_GEOM_METADATA
    (TABLE_NAME, COLUMN_NAME, DIMINFO, SRID)
    Values
    ('SOME_TABLE', 'geo',
    "SDO_DIM_ARRAY"(
    "SDO_DIM_ELEMENT"('X',600000,900000,0.001),
    "SDO_DIM_ELEMENT"('Y',150000,400000,0.001)), 23700);
    CREATE INDEX IX_GEO_SOME_TABLE ON SOME_TABLE (GEO) INDEXTYPE IS MDSYS.SPATIAL_INDEX ;
Earlier I had some issues with NLS settings in relation to Spatial, but in this particular case setting NLS_LANG to AMERICAN_AMERICA does not help. I found this comment http://www.orafaq.com/forum/t/127312/2/ but would not like to hack around with MDSYS table content. Any help is highly appreciated.
    Regards, Tamas

    Tamas,
1. Are you indexing a table that already has geometries, or an empty table?
   If the former, do all the geometries in that table have the same (not NULL) SRID (23700)?
2. The link you posted suggests a parsing problem, since in Hungarian (23700) the decimal separator is a comma (not a period). Accordingly, I believe the edit to mdsys.sdo_cs_srs.WKTEXT would be:
    PROJCS["HD72 / EOV", GEOGCS [ "HD72", DATUM ["Hungarian Datum 1972 (EPSG ID 6237)", SPHEROID ["GRS 1967 (EPSG ID 7036)", 6378160, 298,247167427]], PRIMEM [ "Greenwich", 0,000000 ], UNIT ["Decimal Degree", 0,01745329251994328]], PROJECTION ["Egyseges Orszagos Vetuleti (EPSG OP 19931)"], UNIT ["Meter", 1]]
Regards,
    Noel

  • Performance problem with query on bkpf table

Hi, good morning all,
I have a performance problem with the below query on the BKPF table.
    SELECT bukrs
               belnr
               gjahr
          FROM bkpf
          INTO TABLE ist_bkpf_temp 
         WHERE budat IN s_budat.
Is there any possibility to improve the performance by using an index?
Please help me.
Thanks in advance,
Regards,
Srinivas

Hi,
If you can add BUKRS as an input field, or if you have BUKRS in another internal table that can be used to filter the data, you can use, for example:
SELECT bukrs
       belnr
       gjahr
  FROM bkpf
  INTO TABLE ist_bkpf_temp
 WHERE budat IN s_budat
   AND bukrs IN s_bukrs.
or
SELECT bukrs
       belnr
       gjahr
  FROM bkpf
  INTO TABLE ist_bkpf_temp
  FOR ALL ENTRIES IN itab
 WHERE budat IN s_budat
   AND bukrs = itab-bukrs.
Just see if it is possible to do any one of the above; it has to be verified against your requirement.

  • Parallel Index creation takes more time...!!

    OS - Windows 2008 Server R2
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    -> In my first attempt after three hours I got an error :
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
So I increased the size of the temp tablespace to 380 GB, because I expected the first_col index to be about this size.
-> In my second attempt, the index creation is still going even after 17 hours...!!
Now the usage of temp space is 162 GB ... and it is still growing.
    -> I checked EM Advisor Central ADDM :
    it says - The PGA was inadequately sized, causing additional I/O to temporary tablespaces to consume significant database time.
1. Why does this take so much temp space?
2. Is this much time really required to CREATE INDEX with parallel processing? More than 17 hours?
3. How do I calculate and set the size of the PGA?

    OraFighter wrote:
    Oracle - 10.2.0.3.0
    My table size is - 400gb
    Number of records - 657,45,95,123
    my column definition first_col varchar2(22) ; -> I am creating index on this column
    first_col -> actual average size of column value is 10
    I started to create index on this column by following command
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
Now the usage of temp space is 162 GB ... still it is growing..
The entire data set has to be sorted - and the space needed doesn't really vary with degree of parallelism.
    6,574,595,123 index entries with a key size of 10 bytes each (assuming that in your choice of character set one character = one byte) requires per row approximately
    4 bytes row overhead 10 bytes data, 2 bytes column overhead for data, 6 bytes rowid, 2 bytes column overhead for rowid = 24 bytes.
    For the sorting overheads, using the version 2 sort, you need approximately 1 pointer per row, which is 8 bytes (I assumed you're on 64 bit Oracle on this platform) - giving a total of 32 bytes per row.
    32 * 6,574,595,123 / 1073741824 = 196 GB
    You haven't said how many partitions you have, but you might want to consider creating the index unusable, then issuing a rebuild command on each partition in turn. From "Practical Oracle 8i":
    In the absence of partitioned tables, what would you do if you needed to create a new index on a massive data set to address a new user requirement? Can you imagine the time it would take to create an index on a 450M row table, not to mention the amount of space needed in the temporary segment. It's the sort of job that you schedule for Christmas or Easter and buy a couple of extra discs to add to the temporary tablespace.
    With suitably partitioned tables, and perhaps a suitably friendly application, the scale of the problems isn't really that great, because you can build the index on each partition in turn. This trick depends on a little SQL feature that appears to be legal even though I haven't managed to find it in the SQL reference manual:
         create index big_new_index on partitioned_table (colX)
         local
         UNUSABLE
         tablespace scratchpad
    The key word is UNUSABLE. Although the manual states that you can 'alter' an index to be unusable, it does not suggest that you can create it as initially unusable, nevertheless this statement works. The effect is to put the definition of the index into the data dictionary, and allocate all the necessary segments and partitions for the index - but it does not do any of the real work that would normally be involved in building an index on 450M rows.
    (The trick was eventually documented a couple of years after I wrote the book.)
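Applied to the index in this thread, the sequence would look roughly like this (a sketch only; partition names are hypothetical, one rebuild per partition):
CREATE INDEX call_group1_ano ON call_group1 (a_no)
  LOCAL UNUSABLE NOLOGGING COMPRESS;
ALTER INDEX call_group1_ano REBUILD PARTITION p01 PARALLEL 8 NOLOGGING;
ALTER INDEX call_group1_ano REBUILD PARTITION p02 PARALLEL 8 NOLOGGING;
-- ... and so on for each partition, scheduled as resources allow.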
    Regards
    Jonathan Lewis
