Index on a huge table

There is a table which has 6,844,153 rows. I am trying to create an index on 3 columns of that table, and it is taking forever... please advise.

>it is taking forever
How long is "forever"... really?
Since your table is not very large, I would think it should complete in minutes, not hours (only a guess of course, I don't know your hardware or environment).
Or...creating an index needs to lock the table. Maybe some other session has it locked already and you're just sitting and waiting.
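If a lock is the suspect, something along these lines can confirm it before you wait any longer (a rough sketch; MY_BIG_TABLE and the column list are placeholders, and the ONLINE build needs Enterprise Edition):

-- Is any other session holding a lock on the table we want to index?
SELECT s.sid, s.serial#, s.username, s.status, o.object_name
  FROM v$locked_object l
  JOIN dba_objects     o ON o.object_id = l.object_id
  JOIN v$session       s ON s.sid = l.session_id
 WHERE o.object_name = 'MY_BIG_TABLE';          -- placeholder table name

-- If the table really is that busy, an ONLINE build avoids blocking DML
-- for the duration of the index creation:
CREATE INDEX my_big_table_ix ON my_big_table (col1, col2, col3) ONLINE;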

Similar Messages

  • Create Index on a huge table

    Hi,
    We have a huge table with no index on it. I want to create an index on this table; we are working on Oracle 11g/Linux.
    Our business users access this table frequently, and their select statements take a very long time.
    Please let me know which index type would be best suited for this and what the command to create the index would be.
    Would appreciate your assistance.
    Regards.

    We need loads of information to suggest anything useful:
    a) What database version are you on? (If 11g, you have quite a few more options.)
    b) What type of environment is it? OLTP or warehouse?
    c) How big is the table?
    d) How many distinct values are there in the column that you want to index?
    e) Would this index undergo lots of inserts/deletes/updates? That is, is this table used mainly for querying, or will it undergo continuous inserts/updates/deletes?
    If your environment is a warehouse type, where you load once and the table is then used mainly for queries, and more importantly if that column has very few distinct values (a typical example is a GENDER column with only two distinct values), you will benefit greatly from a BITMAP index. If it's an OLTP environment where multiple processes will be inserting into the table, never go near a BITMAP index; use only a B-tree index.
    Finally, have you arrived at a concrete reason why you want to build that index now rather than when you designed the table? If you don't need the index for sure, it is better not to have it. If you are on 11g, you can use an INVISIBLE index.
    Also, if it's a very large table, you may create the index NOLOGGING to avoid generating loads of redo (not recommended in a production environment, though). But you have to be aware that in the event of disaster recovery you will have to recreate the index after you restore the database. Also, if you are in a Data Guard environment, you have to take the necessary precautions when doing NOLOGGING operations.
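    For illustration only, the kinds of statements being discussed above, with placeholder table and column names (which column to index still depends on the answers to a) through e)):

    -- plain B-tree index (the OLTP default):
    CREATE INDEX big_tab_ix ON big_tab (some_col);

    -- low-cardinality column in a warehouse-style environment:
    CREATE BITMAP INDEX big_tab_bix ON big_tab (gender);

    -- 11g INVISIBLE option, and a NOLOGGING build to cut redo:
    CREATE INDEX big_tab_ix2 ON big_tab (other_col) INVISIBLE;
    CREATE INDEX big_tab_ix3 ON big_tab (another_col) NOLOGGING;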

  • Create an index on a huge table

    hi gurus
    I am going to create an index on a very large table (194 GB), and the temporary tablespace size is 80 GB.
    I am afraid that during the index creation the temporary tablespace will not be enough to hold the data needed
    to create the index, because I only have 80 GB of temporary tablespace. How do I estimate the size needed to create such a large index, and is there an efficient way to do this kind of index creation?
    Thank you in advance.

    Jozsef wrote:
    Hi there,
    I have done similar things a couple of times before, and I have used a pretty good method to obtain a rough (and it really was rough, but sufficient) estimate.
    Sorry folks, it does not work properly for CREATE INDEX... this method only works for SELECT statements, SO IGNORE IT :(
    It nearly works in 10g:
    | Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE INDEX STATEMENT |       | 10000 | 40000 |    10   (0)| 00:00:01 |
    |   1 |  INDEX BUILD NON UNIQUE| T1_I1 |       |       |            |          |
    |   2 |   SORT CREATE INDEX    |       | 10000 | 40000 |            |          |
    |   3 |    TABLE ACCESS FULL   | T1    | 10000 | 40000 |     6   (0)| 00:00:01 |
    Note
       - estimated index size: 196K bytes
    The "Note" tells you Oracle's estimate of the final space allocation needed for the index. There are various reasons why the estimate is not very accurate, and why it's not a good estimate of the space requirement in the TEMP tablespace, but it gives you a figure that is probably in the right ballpark (factor of 2 out, either way, probably).
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke
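    A sketch of how to get that estimate yourself, assuming 10g or later and up-to-date statistics (BIG_TAB, COL1, and the index name are placeholders):

    -- Optimizer estimate without building anything:
    EXPLAIN PLAN FOR CREATE INDEX big_tab_ix ON big_tab (col1);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);   -- the Note section shows "estimated index size"

    -- DBMS_SPACE gives a similar rough figure:
    SET SERVEROUTPUT ON
    DECLARE
      l_used  NUMBER;
      l_alloc NUMBER;
    BEGIN
      DBMS_SPACE.CREATE_INDEX_COST(
        ddl         => 'CREATE INDEX big_tab_ix ON big_tab (col1)',
        used_bytes  => l_used,     -- bytes of index data
        alloc_bytes => l_alloc);   -- bytes allocated in the tablespace
      DBMS_OUTPUT.PUT_LINE('used: ' || l_used || '  allocated: ' || l_alloc);
    END;
    /

    Neither figure is the TEMP requirement, but as noted above they are usually in the right ballpark.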

  • Bitmap index or Composite index better on a huge table

    Hi All,
    I got a question regarding the Bitmap index and Composite Index.
    I have a table with only two columns: CUSTOMER(group_no NUMBER, order_no NUMBER).
    This is a 100-million+ record table with 100K group_nos and 100 million unique order numbers, i.e. each group has about 1,000 order numbers.
    I tested by creating a GLOBAL bitmap index on this huge table (more than 1.5 GB in size); the GLOBAL bitmap index that got created is under 50 MB, and when I query for a group number, say SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all 1,000 rows. I checked different groups and it is the same.
    Then I dropped the bitmap index and re-created a composite index on (group_no, order_no). The index is larger than the table, around 2 GB in size, and when I query using the same select statement, SELECT * FROM CUSTOMER WHERE group_no=67677; --> 0.5 seconds to retrieve all 1,000 rows.
    My question is: which one is BETTER, B-tree or BITMAP index, and WHY?
    Appreciate your valuable inputs on this one.
    Regards,
    Madhu K.

    Dear,
    First of all, bitmap indexes are not recommended for write-intensive OLTP applications because of the locking problems they can cause in that kind of application.
    You told us that this table is never updated; I suppose rows are not deleted from it either.
    Second, bitmap indexes are suitable for columns with low cardinality. The question is how to define "low cardinality". You said that you have 100,000 distinct group_no values in a table of 100,000,000 rows.
    That gives a ratio of 100,000/100,000,000 = 0.001, so the group_no column might be a good candidate for a bitmap index.
    You said that order_no is unique, so you have a very high cardinality on that column and it is not a good candidate for a bitmap index.
    Third, your query's WHERE clause involves only the group_no column, so why are you including both columns when testing the bitmap and the B-tree index?
    Are you designing that kind of index in order to avoid visiting the table? In your case the table is made up of only those two columns, so why not follow Hemant's advice and use an Index Organized Table (see the sketch below)?
    Finally, you can find more details about bitmap indexes in the following Richard Foote blog article:
    http://richardfoote.wordpress.com/2008/02/01/bitmap-indexes-with-many-distinct-column-values-wotsuh-the-deal/
    Best Regards
    Mohamed Houri
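    For reference, a minimal Index Organized Table sketch along the lines of the advice above (the names are placeholders, not the poster's actual DDL):

    CREATE TABLE customer_iot (
      group_no  NUMBER,
      order_no  NUMBER,
      CONSTRAINT customer_iot_pk PRIMARY KEY (group_no, order_no)
    ) ORGANIZATION INDEX;

    -- The whole table lives in the primary key index, so
    -- SELECT * FROM customer_iot WHERE group_no = 67677
    -- is answered by an index range scan with no separate table visit.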

  • Update records in huge table

    Hi,
    I need to update two fields in a huge table (> 200,000,000 records). I've created 2 basic update scripts with a WHERE clause. The problem is that there is no index on the fields in the WHERE clause. How can I solve this? Creating a new index is not an option.
    Another solution is to update the whole table (so without a WHERE clause), but I don't know whether it would take a lot of time, lock records, ...
    Any suggestions?
    Thanks.
    Ken

    Ken,
    You may be better off reading the Metalink documents. PDML stands for Parallel DML. You can use parallel slaves to get the update done quickly. Obviously this depends on the number of parallel slaves you have and the degree you set.
    Search for PDML on Metalink; a minimal sketch follows below.
    G
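    A parallel DML sketch of the kind being suggested (the table, columns, filter, and degree are placeholders):

    ALTER SESSION ENABLE PARALLEL DML;

    UPDATE /*+ PARALLEL(t, 8) */ big_table t    -- degree 8 is only an example
       SET t.col1 = 'a',
           t.col2 = 'b'
     WHERE t.some_flag = 'X';                   -- the unindexed filter from the question

    COMMIT;   -- parallel DML must be committed before the session touches the table again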

  • Partitioning huge tables.

    Hi all,
    I am looking at a database for a customer where they have huge tables.
    I have just executed:
    SELECT SEGMENT_NAME, SEGMENT_TYPE, OWNER, (BYTES/1024/1024) MEGAS
    FROM DBA_SEGMENTS
    ORDER BY MEGAS DESC;
    Some of them are displayed below:
    GL_JE_LINES     TABLE     GL     42,272
    SYS_IOT_TOP_789022     INDEX     APPLSYS     19,670
    WIP_PERIOD_BALANCES     TABLE     WIP     11,157
    SYS_IOT_TOP_789028     INDEX     APPLSYS     10,923
    MTL_TRANSACTION_ACCOUNTS     TABLE     INV     10,796
    WIP_TRANSACTION_ACCOUNTS     TABLE     WIP     10,763
    RLM_SCHEDULE_LINES_ALL     TABLE     RLM     10,482
    What kind of partitioning has anybody used for the GL_JE_LINES table, for example?
    Any advice or comment will be really appreciated.
    Thanks in advance.
    Kind regards,
    Francisco

    Francisco,
    Please see old threads for similar discussion -- http://forums.oracle.com/forums/search.jspa?threadID=&q=Partitioning&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein
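    For illustration only, the sort of range partitioning those threads usually discuss, sketched with hypothetical columns (whether and how to partition an EBS table like GL_JE_LINES needs its own analysis):

    CREATE TABLE big_history (
      line_id      NUMBER,
      period_name  VARCHAR2(15),
      created_date DATE
    )
    PARTITION BY RANGE (created_date) (
      PARTITION p_2009 VALUES LESS THAN (DATE '2010-01-01'),
      PARTITION p_2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );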

  • Huge Group by operation on Huge Table takes lot of time

    Hi,
    Please find below the process that takes a long time to execute (approx. 5-6 hrs).
    The main reasons for this are:
    1) It fetches data from a huge table partition (about 18 GB of data per day)
    2) It performs GROUP BY operations
    3) There is no index on destination_number used in the WHERE clause, so it performs a full table scan
    I have some idea that I need to change some parameters to make the process faster.
    Can you please help with this?
    create table tmp_kumar nologging as
    SELECT c.series_num , subscriber_id , COUNT(1) cnt , SUM(NVL(total_currency_charge,0))total_currency_charge ,
    TRUNC(disconnect_date) FROM
    (select * from prepcdr.PREPCDR_MAR_P3_10 partition(disconnect_date_11) union all
    select * from prepcdr.PREPCDR_MAR_P3_10 partition(disconnect_date_11_new)) b,
    (SELECT series_num, des, created_dt, LENGTH (series_num) len
    FROM PREPCDR.HSS_SERIES_MAST where home_ind ='Y'
    UNION
    SELECT cimd_number, des, created_dt, LENGTH (cimd_number)
    FROM PREPCDR.HSS_CIMD_MASTER) c
    WHERE b.cdr_call_type = '86'
    AND SUBSTR (b.destination_number, 1, c.len) = c.series_num
    AND c.len = (SELECT MAX(x.len) FROM (SELECT series_num, des, created_dt, LENGTH (series_num) len
    FROM PREPCDR.HSS_SERIES_MAST where home_ind ='Y'
    UNION
    SELECT cimd_number, des, created_dt, LENGTH (cimd_number) len
    FROM PREPCDR.HSS_CIMD_MASTER) x WHERE x.series_num = SUBSTR (b.destination_number, 1, x.len))
    AND disconnect_date >= '11-MAR-2010'
    AND disconnect_date < '12-MAR-2010'
    GROUP BY c.series_num , TRUNC(disconnect_date) , suBscriber_id

    This, most likely, will be more efficient:
    SELECT  c.series_num,
            subscriber_id,
            COUNT(1) cnt,
            SUM(NVL(total_currency_charge,0)) total_currency_charge,
            TRUNC(disconnect_date)
      FROM  (
              select  *
                from  prepcdr.PREPCDR_MAR_P3_10 partition(disconnect_date_11)
             union all
              select  *
                from  prepcdr.PREPCDR_MAR_P3_10 partition(disconnect_date_11_new)
            ) b,
            (
              SELECT  DISTINCT series_num,
                              des,
                              created_dt,
                              len
               FROM  (
                      SELECT  series_num,
                              des,
                              created_dt,
                              len,
                              RANK() OVER(ORDER BY len) rnk
                        FROM  (
                                SELECT  series_num,
                                        des,
                                        created_dt,
                                        LENGTH(series_num) len
                                  FROM  PREPCDR.HSS_SERIES_MAST
                                  where home_ind ='Y'
                               UNION ALL
                                SELECT  cimd_number,
                                        des,
                                        created_dt,
                                        LENGTH(cimd_number)
                                      FROM  PREPCDR.HSS_CIMD_MASTER
                                   )
                          )
               WHERE rnk = 1
            ) c
      WHERE b.cdr_call_type = '86'
        AND SUBSTR(b.destination_number,1,c.len) = c.series_num
       AND disconnect_date >= DATE '2010-03-11'
       AND disconnect_date < DATE '2010-03-12'
      GROUP BY  c.series_num,
                TRUNC(disconnect_date),
                suBscriber_id
    /
    SY.

  • Accessing huge tables like bseg,  bkpf

    1) What precautions should we consider when accessing huge tables like BSEG, BKPF, or MSEG?

    Hi,
    Some tips may be:
    1)
    Write the SELECT statements so that the WHERE clause covers all (or almost all) of the primary key fields, in the same order as they are defined in the DB table.
    2)
    In case you are using fields that are not in the primary key of the DB table, create secondary indexes on those fields.
    3)
    Always try using an array fetch of the records (SELECT ... INTO TABLE) instead of going for SELECT ... ENDSELECT.
    Thanks,
    Vishnu.

  • INDEX UNIQUE SCAN instead of   INDEX FULL SCAN or TABLE ACCESS FULL

    I have gathered statistics on all tables and indexes.
    I have a table and a view, and when I run:
    SELECT *
    FROM TABLE_A A
    INNER JOIN VIEW_B B ON A.KEY_ID = B.PFK_KEY_ID          
    WHERE (B.FK_ID_XXX = 1)
    If I see the execution plan:
    on TABLE_A it does a
    TABLE ACCESS BY INDEX ROWID
    INDEX UNIQUE SCAN (FIELD_A_TABLE_A_PK)
    It’s OK. I NEED IT (INDEX UNIQUE SCAN)
    But If I put
    SELECT A.Field_1, A.Field_2, A.Field_3, A.Field_4
    FROM TABLE_A A
    INNER JOIN VIEW_B B ON A.KEY_ID = B.PFK_KEY_ID          
    WHERE (B.FK_ID_XXX = 1)
    On TABLE_A it does a TABLE ACCESS FULL.
    Then If I put:
    SELECT /*+ INDEX(A FIELD_A_TABLE_A_PK) */ A.Field_1, A.Field_2, A.Field_3, A.Field_4
    FROM TABLE_A A
    INNER JOIN VIEW_B B ON A.KEY_ID = B.PFK_KEY_ID          
    WHERE (B.FK_ID_XXX = 1)
    If I see the execution plan:
    on TABLE_A it does a
    TABLE ACCESS BY INDEX ROWID
    INDEX UNIQUE SCAN (FIELD_A_TABLE_A_PK)
    It’s OK. I NEED IT (INDEX UNIQUE SCAN)
    Finally, if I put other tables and views in the query (I NEED them):
    For example:
    SELECT /*+ INDEX(A FIELD_A_TABLE_A_PK) */ A.Field_1, A.Field_2, A.Field_3, A.Field_4
    FROM TABLE_A A
    INNER JOIN VIEW_B B ON A.KEY_ID = B.PFK_KEY_ID          
    INNER JOIN TABLE_C….
    LEFT JOIN VIEW_D….
    WHERE (B.FK_ID_XXX = 1)
    If I see the execution plan:
    on TABLE_A it does a
    TABLE ACCESS BY INDEX ROWID
    INDEX FULL SCAN (FIELD_A_TABLE_A_PK)
    I need INDEX UNIQUE SCAN instead of INDEX FULL SCAN or TABLE ACCESS FULL.
    How can I obtain it?
    What happens???
    Thanks!

    Notice the difference in cardinality between your two select statements:
    SELECT STATEMENT, GOAL = ALL_ROWS Cost=5 Cardinality=1
    SELECT STATEMENT, GOAL = ALL_ROWS Cost=10450 Cardinality=472161
    Apparently, since the optimizer believed the first statement was going to return one row, it used an index. But in the second statement it believed it was going to return nearly the whole table (didn't you say it had around 500k rows?). Hence the full table scan.
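    To see the cardinality estimates being compared above for your own statements, something like this works (a sketch using the names from the question):

    EXPLAIN PLAN FOR
      SELECT A.Field_1, A.Field_2, A.Field_3, A.Field_4
        FROM TABLE_A A
             INNER JOIN VIEW_B B ON A.KEY_ID = B.PFK_KEY_ID
       WHERE B.FK_ID_XXX = 1;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- Compare the Rows (cardinality) column with what the query actually returns;
    -- if they differ wildly, the statistics (or a missing histogram) are the first thing to check.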

  • Can we change the fields of database unique index in a customised table?

    Hi all..
    I want to know whether we can create, change, or delete the database unique index of a customized table.
    In my case, there is a customised table with 4 primary key fields, and all records are maintained through transaction code SM30.
    A database unique index is maintained for this table on 2 fields. These 2 fields are among the 4 primary key fields of the table. I hope I have made myself clear!
    Now when I try to insert a record into the table, it gives me a short dump (it says duplication of records is not allowed).
    The reason is that the new record I am trying to insert has the same values in the 2 fields covered by the unique index as an already existing record, while the other two fields are different, so overall the combination of the 4 primary key fields is different.
    Please tell me how I should proceed.
    I also tried to change the unique index, but it asks for some kind of authorization ("You are not authorized to make changes (authorization object S_DEVELOP)"). Also, I am not sure whether changing the unique index is feasible.
    Thanks.

    Hi,
    I think you will not be able to create a unique index without the primary key fields, so include all the primary key fields in the index field selection and then create the index; otherwise duplication of keys can occur. If you cannot include all the primary key fields, then go for a non-unique index instead, where you add the client field plus any fields of your choice.

  • Creating a bit map index on a partitioned table

    Dear friends,
    I am trying to create a bitmap index on a partitioned table but am receiving the following ORA error. Can you please let me know how to create a local bitmap index, as the message suggests?
    ERROR at line 1:
    ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables
    Trying to use the keyword LOCAL in front leads to a syntax error.
    Thanks in advance !!
    Somnath

    ORA-25122 Only LOCAL bitmap indexes are permitted on partitioned tables
    Cause: An attempt was made to create a global bitmap index on a partitioned table.
    Action: Create a local bitmap index instead
    Example of a Local Index Creation
    CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;
    The example is for a B-tree index, but the same syntax should work for a bitmap index as well.
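    A minimal sketch of the LOCAL bitmap variant the error message is asking for (table and column names are placeholders, and the table is assumed to be partitioned):

    CREATE BITMAP INDEX sales_region_bix
        ON sales_part (region_code)
        LOCAL;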

  • Non-Partitioned Global Index on Range-Partitioned Table.

    Hi All,
    Is it possible to create Non-Partitioned Global Index on Range-Partitioned Table?
    We have 4 indexes on the CS_BILLING range-partitioned table, of which one is CSB_CLIENT_CODE (a *local partitioned index*) and the others are index types unknown to me.
    So what type are the other 3 indexes: non-partitioned global indexes OR non-partitioned normal indexes?
    Also, if we create an index as (create index i_name on t_name(c_name)), will it by default create a global index? Please correct me......
    Please help me identify the other 3 index types by referring to the outputs below!
    select INDEX_NAME,TABLE_NAME,PARTITIONING_TYPE,LOCALITY from dba_part_indexes where TABLE_NAME='CS_BILLING';
    INDEX_NAME TABLE_NAME PARTITI LOCALI
    CSB_CLIENT_CODE CS_BILLING RANGE LOCAL
    select index_name,index_type,table_name,table_type,PARTITIONED from dba_indexes where table_name='CS_BILLING';
    INDEX_NAME INDEX_TYPE TABLE_NAME TABLE_TYPE PAR
    CSB_CREATE_DATE NORMAL CS_BILLING TABLE NO
    CSB_SUBMIT_ORDER NORMAL CS_BILLING TABLE NO
    CSB_CLIENT_CODE NORMAL CS_BILLING TABLE YES
    CSB_ORDER_NBR NORMAL CS_BILLING TABLE NO
    select INDEX_OWNER,INDEX_NAME,TABLE_NAME,COLUMN_NAME from dba_ind_columns where TABLE_NAME='CS_BILLING';
    INDEX_OWNER INDEX_NAME TABLE_NAME COLUMN_NAME
    RPADMIN CSB_CREATE_DATE CS_BILLING CREATE_DATE
    RPADMIN CSB_SUBMIT_ORDER CS_BILLING SUBMIT_TO_INVOICE
    RPADMIN CSB_SUBMIT_ORDER CS_BILLING ORDER_NBR
    RPADMIN CSB_CLIENT_CODE CS_BILLING CLIENT_CODE
    RPADMIN CSB_ORDER_NBR CS_BILLING ORDER_NBR
    select dip.index_name, dpi.locality, dip.partition_name, dip.status
    from dba_part_indexes dpi, dba_ind_partitions dip
    where dpi.table_name ='CS_BILLING'
    and dpi.index_name = dip.index_name;
    INDEX_NAME LOCALI PARTITION_NAME STATUS
    CSB_CLIENT_CODE LOCAL CSB_2006_4Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2006_3Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2007_1Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2007_2Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2007_3Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2007_4Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2008_1Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2008_2Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2008_3Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2008_4Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2009_1Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2009_2Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2009_3Q USABLE
    CSB_CLIENT_CODE LOCAL CSB_2009_4Q USABLE
    select * from dba_part_indexes
    where table_name ='CS_BILLING'
    and locality = 'GLOBAL';
    no rows selected
    -Yasser

    Yasser,
    Is it possible to create Non-Partitioned and Global Index on Range-Partitioned Table?
    Yes
    We have 4 indexes on CS_BILLING range-partitioned table, in which one is CBS_CLIENT_CODE(*local partitioned index*) and others are unknown types of index to me??
    Means other 3 indexes are what type indexes ...either non-partitioned global index OR non-partitioned normal index??
    You have one local partitioned index and 3 non-partitioned "NORMAL" B-tree type indexes.
    Also if we create index as :(create index i_name on t_name(c_name)) By default it will create Global index. Please correct me......
    The above statement will create a non-partitioned index.
    Here is an example of creating global partitioned indexes
    CREATE INDEX month_ix ON sales(sales_month)
       GLOBAL PARTITION BY RANGE(sales_month)
          (PARTITION pm1_ix VALUES LESS THAN (2),
           PARTITION pm2_ix VALUES LESS THAN (3),
           PARTITION pm3_ix VALUES LESS THAN (4),
           PARTITION pm12_ix VALUES LESS THAN (MAXVALUE));
    Regards

  • Transport of secondary index on /BIC/P table

    Hi All,
    I have created a secondary index on a /BIC/P table, and the system is not allowing me to assign it to a transport request, as the table itself is temporary and gets created automatically in the target system when we activate the InfoObject.
    Is there any way to transport the secondary index directly on the /BIC/P table?
    It's urgent.
    Thanks
    Soumya

    Soumya,
    Sorry for not being clear enough: you can change the package of this index and transport it, but since the P table can be regenerated automatically by the BW application when you change your InfoObject, you may run into trouble when importing a change.
    Indeed, having a local $TMP table with a non-local object underneath it may not please your target system, and maybe not even your DEV system...
    One can always bypass the SAP BW application, for instance by creating indexes directly in the database, but that is at your own risk!
    Hoping this explains your issue a bit further,
    Olivier.

  • Create/drop index on a busy table

    Hello,
    It's kind of a funny question, but how do you create or drop an index on a very busy table (by making the users wait...)?
    I tried:
    lock table <table> in exclusive mode
    then in another session (session2)
    update <table> set col1 = 'a';
    (this session now waits)
    then back a the first session:
    create index bla on <table> (col1);
    And what happens is that the lock is released immediately, the update in session 2 goes through, and I get the standard "RESOURCE BUSY" error.
    Any ideas on how to do that ?
    Thanks
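    One approach worth trying, sketched under the assumption of 11g or later for DDL_LOCK_TIMEOUT and Enterprise Edition for ONLINE (bla and col1 are taken from the question; busy_table is a placeholder):

    -- Let the CREATE INDEX wait up to 5 minutes for the DML locks it needs
    -- instead of failing straight away with ORA-00054 (resource busy):
    ALTER SESSION SET ddl_lock_timeout = 300;

    -- An ONLINE build allows DML on the table to continue while the index is built:
    CREATE INDEX bla ON busy_table (col1) ONLINE;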


  • Index for a PSA table is not in the "customer" namespace

    Hi,
    While loading data through InfoSource 0CO_OM_WBS_1 to
    data target 0IMFA_1, the load fails. The reason given is that the system reads data from PSA table /BIC/B0001060000 of 0IM_FA_IQ_9, and the index generated for this table returned code 14.
    I found no notes on the subject with the text:
    Index for a PSA table is not in the "customer" namespace.
    Thanks in advance for your help.

    Hi,
    I think you need to speak to your Basis team / DBA, as the index created on the PSA table is not in the customer namespace. They should be able to help you.
    Vikash
