Status of Fulltext Index Creation Process

Greetings!
We are creating a fulltext index via a JDBC/Java call. We need to wait for the index creation to complete. How can I tell when the process has finished?
Thanks!
Mukunda

Thanks Markus!
We did some testing. The index entries are inserted into SYS.M_FULLTEXT_QUEUES when the indexing process is started, not when it has ended.
This table provides information such as the document, indexed_document, queue_document, and error_document counts. Am I missing something?
Regards,
Mukunda
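Building on that observation, one workable approach is to poll the monitoring view from the same JDBC connection until the queue drains. Below is a minimal sketch; the exact column and filter names (QUEUE_DOCUMENT_COUNT, ERROR_DOCUMENT_COUNT, INDEX_NAME) are assumptions based on the counts described above, and the connection details are placeholders.

    import java.sql.*;

    public class WaitForFulltextIndex {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders; the HANA JDBC driver must be on the classpath.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:sap://hanahost:30015", "user", "secret")) {
                // Column and filter names are assumptions based on the counts described above.
                String sql = "SELECT QUEUE_DOCUMENT_COUNT, ERROR_DOCUMENT_COUNT"
                           + " FROM SYS.M_FULLTEXT_QUEUES"
                           + " WHERE TABLE_NAME = ? AND INDEX_NAME = ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, "MY_TABLE");   // hypothetical table name
                    ps.setString(2, "MY_FT_IDX");  // hypothetical index name
                    while (true) {
                        try (ResultSet rs = ps.executeQuery()) {
                            if (!rs.next()) break;   // no queue entry: nothing pending
                            if (rs.getLong(2) > 0) throw new IllegalStateException(
                                    rs.getLong(2) + " documents failed to index");
                            if (rs.getLong(1) == 0) break;  // queue drained: build complete
                        }
                        Thread.sleep(5000);  // poll every 5 seconds
                    }
                }
            }
            System.out.println("Fulltext index build appears complete.");
        }
    }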

Similar Messages

  • Systemcopy using R3load - Index creation VERY slow

    We exported a BW 7.0 system using R3load (newest tools and SMIGR_CREATE_DDL) and are now importing it into the target system.
    Source database size is ~ 800 GB.
    The export was running a bit more than 20 hours using 16 parallel processes. The import is still running with the last R3load process. Checking the logs I found out that it's creating indexes on various tables:
    (DB) INFO: /BI0/F0TCT_C02~150 created#20100423052851
    (DB) INFO: /BIC/B0000530000KE created#20100423071501
    (DB) INFO: /BI0/F0COPC_C08~01 created#20100423072742
    (DB) INFO: /BI0/F0COPC_C08~04 created#20100423073954
    (DB) INFO: /BI0/F0COPC_C08~05 created#20100423075156
    (DB) INFO: /BI0/F0COPC_C08~06 created#20100423080436
    (DB) INFO: /BI0/F0COPC_C08~07 created#20100423081948
    (DB) INFO: /BI0/F0COPC_C08~08 created#20100423083258
    (DB) INFO: /BIC/B0000533000KE created#20100423101009
    (DB) INFO: /BIC/AODS_FA00~010 created#20100423121754
    As one can see from the timestamps, the creation of one index can take an hour or more.
    x_cons is showing constant CrIndex reading in parallel; however, the throughput is not more than 1 - 2 MB/sec. Those index creation processes have now been running for over two days (> 48 hours), and since the .TSK files no longer mention those indexes, I wonder how many of them are still to be created and how long this will take.
    The whole import was started at "2010-04-20 12:19:08" (according to import_monitor.log), so it has been running for more than three days with four parallel processes. The target machine has 4 CPUs and 16 GB RAM (CACHE_SIZE is 10 GB). The machine is 98 - 99 % idle, though.
    I have three questions:
    - why does index creation take such a long time? I'm aware of the fact that the cache may not be big enough to hold all the data, but that speed is far from acceptable. Doing a Unicode migration, even in parallel, would lead to a downtime that may not be acceptable to the business.
    - why are the indexes not created first and then filled with the data? Each savepoint might take longer, but I don't think it would take that long.
    - how can I find out which indexes are still to be created, and how can I estimate their average runtime?
    Markus

    Hi Peter,
    I would suggest creating an SAP ticket for this, because these kind of problems are quite difficult to analyze.
    But let me describe the index creation within MaxDB. If only one index creation process is active, MaxDB can use multiple Server Tasks (one for each Data Volume) to possibly increase the I/O throughput. This means the more Data Volumes you have configured, the faster the parallel index creation process should be. However, this hugely depends on your I/O system being able to handle an increasing amount of read/write requests in parallel. If one index creation process is running using parallel Server tasks, all further indexes to be created at that time can only utilize one single User Task for the I/O.
    The R3load import process assumes that the indexes can be created quickly if all the necessary base table data is still present in the Data Cache. This mostly applies to small tables, up to table sizes that take up a certain amount of the Data Cache. All indexes for these tables are created right after the table has been imported, to make use of the fact that all the data needed for index creation is still in the cache. Many indexes may be created simultaneously here, but only one index at a time can use parallel Server Tasks.
    If a table is too large in relation to the total database size, then its indexes are queued for serial index creation, started once all tables have been imported. The idea is that the needed base table data will likely have been flushed out of the Data Cache already, so additional I/O is necessary to reread that table for index creation. And this additional I/O greatly benefits from parallel Server Tasks accessing the Data Volumes. For this reason, the indexes that are to be created at the end are queued and serialized to ensure that only one index creation process is active at a time.
    Now, you mentioned that the index creation process takes a lot of time. I would suggest (besides opening an OSS ticket) starting the MaxDB tool 'Database Analyzer' with an interval of 60 seconds configured during the whole import. In addition, you should activate the 'time measurement' to get a reading on the I/O times. Plus, ensure that you have many Data Volumes configured and that your I/O system can handle the additional load. E.g. it would make no sense to have 25 Server Tasks all writing to a single local disk; I would assume the disk would become a bottleneck...
    Hope my reply was not too confusing,
    Thorsten

  • BIA Index creation error due to invalid characters

    Hi All,
    While creating a BIA index on an InfoCube, the index creation process fails with the following message:
    Index for table '/BIC/SSPSEGTXT' is being processed
    A character set conversion is not possible.
    Parallel indexing process terminated (Task: '4')
    It turns out this field is a text field and has characters such as * and + in the description, which is causing the problem.
    How do I proceed with the index creation?
    Options I have are:
    a) Get rid of this field. Impossible, since there are a few queries that require this information.
    b) Run the database scan tool RSI18N_Search to eliminate foreign characters... but this does not work for me.
    Can anyone suggest some other option or a resolution to this problem? Or, if anyone has prior experience working with the program RSI18N_Search, please let me know.
    Regards
    VK

    Venkat, Andres,
    You can try several things here:
    1) This is pretty ugly, but in case you need a quick way around: create another cube without that characteristic, load all the data to that cube from the first cube, and start sending daily deltas to both cubes. Of course, put the second cube into BWA.
    2) Delete the master data! When you try deleting the master data (the particular record or records that you identified), you will mostly find that it is used in transactional data (in an ODS or cube), so it will not let you delete those records unless you delete the records from the cubes first and then try again. Or you might be lucky that it is not stored in any cube, and it will delete right away!
    The right way is the second option; however, when you delete data from cubes and ODS objects, it will lock the cube, invalidate aggregates if you have any, invalidate BWA indexes, etc. You need to recreate them again, so do it in non-business hours or in your maintenance window.
    Cheers
    Tansu
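    If you need to locate the offending records yourself before cleaning up, a minimal JDBC sketch along these lines may help; the target charset, connection details, and column name are assumptions (the table name is taken from the error message above):

        import java.nio.charset.Charset;
        import java.nio.charset.CharsetEncoder;
        import java.sql.*;

        public class FindNonConvertibleText {
            public static void main(String[] args) throws SQLException {
                // Target charset is an assumption; use whatever the BWA conversion expects.
                CharsetEncoder enc = Charset.forName("ISO-8859-1").newEncoder();
                // Connection details are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/BWP", "user", "secret");
                     Statement st = con.createStatement();
                     // Column name is hypothetical; substitute the actual text field.
                     ResultSet rs = st.executeQuery(
                         "SELECT \"/BIC/SPSEGTXT\" FROM \"/BIC/SSPSEGTXT\"")) {
                    while (rs.next()) {
                        String text = rs.getString(1);
                        if (text != null && !enc.canEncode(text)) {
                            System.out.println("Non-convertible value: " + text);
                        }
                    }
                }
            }
        }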

  • Faster Context index creation!!

    Hi Experts,
    I am new to the concept of CONTEXT in Oracle and I haven't worked with it. My problem is that we have a monthly process of rebuilding the context index on a table (varchar column). This process takes about 8 hours. When I created it the last time, I increased the sort_area_size parameter for the session to 500M and increased sort_multiblock_read_count and db_file_multiblock_read_count depending on the OS limitations, and the index was created in 4 hours. But this is just a DBA trick to do things faster, and I believe it can be made faster still.
    Can anyone suggest ways of speeding up the index creation process from the CONTEXT perspective, like increasing the default memory with Ctx_Adm.Set_Parameter('DEFAULT_INDEX_MEMORY', '500M')?
    Also, with default_index_memory and sort_area_size both set to 500M, will this take 1000M of memory during index creation?
    Also, I read somewhere that truncating the table DR$INDEX_ERROR before creating the index will help speed up index creation. Is this right? I think it should not make a difference.
    Any suggestion on speeding up the index creation would be helpful. I cannot create the index in parallel, as I am running Oracle 8.1.6 and the base table is not partitioned.
    Thanks.
    Ankur

    Sorry for posting this question here. I have posted it on the TEXT forum.
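    For what it's worth, the index memory can also be set per build in the CREATE INDEX statement itself via the PARAMETERS clause, which avoids changing the system-wide default. A minimal JDBC sketch with placeholder table, column, and connection details (check that your Oracle Text release supports the 'memory' parameter):

        import java.sql.*;

        public class CreateContextIndex {
            public static void main(String[] args) throws SQLException {
                // Connection details are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     Statement st = con.createStatement()) {
                    // 'memory 500M' applies to this build only, overriding
                    // DEFAULT_INDEX_MEMORY (capped by MAX_INDEX_MEMORY).
                    st.execute("CREATE INDEX doc_text_idx ON docs(text_col) "
                             + "INDEXTYPE IS CTXSYS.CONTEXT "
                             + "PARAMETERS ('memory 500M')");
                }
            }
        }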

  • Index Creation wrong error in process chain

    Hi Experts,
    One of my process chains failed due to the following error:
    "Index creation in Infocube got wrong"
    Can anyone help? How do I resolve this issue?

    1) You can also see the index in the DB02 transaction.
    2) Go to Cube -> Manage -> Performance tab -> Repair Indexes (you can see all the index-related statuses here, like check indexes and repair indexes; you will get a green/red status. If it is green, no worries; if red, you can rebuild the indexes).
    3) You can also read the message in the process logs and may find clues for solving the issue there.

  • Index creation; What should the ideal process be?

    DB Version:11g R1
    I maintain a db related to retailing and warehousing. Last week, I issued a CREATE INDEX statement at around 8:00 pm, which is a 'not so busy time' for the DB and applications. But the temp tablespace ran out of space during this exercise. It was a nightmare.
    Now, we have decided to follow a standardized process as used by experienced Oracle DBA professionals in the industry. So, we would like to know what process you would follow if you were asked to CREATE or REBUILD indexes.

    Here are some tips:
    http://searchoracle.techtarget.com/tip/Speed-up-Oracle-index-creation
    http://mehrajdba.wordpress.com/2009/01/30/how-to-make-an-index-creation-faster/
    HTH
    Z.K.
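    Whatever checklist you adopt, it also helps to watch temp consumption while a build is running, so you can react before ORA-01652 strikes. A minimal JDBC sketch, assuming access to V$TEMPSEG_USAGE and an 8 KB block size (check DB_BLOCK_SIZE on your system):

        import java.sql.*;

        public class WatchTempUsage {
            public static void main(String[] args) throws Exception {
                // Connection details are placeholders; requires privileges on V$TEMPSEG_USAGE.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret");
                     PreparedStatement ps = con.prepareStatement(
                         "SELECT tablespace, SUM(blocks) FROM v$tempseg_usage GROUP BY tablespace")) {
                    while (true) {  // sample once a minute while the CREATE INDEX runs
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                // 8192-byte blocks assumed.
                                System.out.printf("%s: %d MB%n", rs.getString(1),
                                        rs.getLong(2) * 8192 / 1048576);
                            }
                        }
                        Thread.sleep(60_000);
                    }
                }
            }
        }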

  • I am looking for a way to automate index creation using Adobe Reader Pro without having to use the screen user interface, as the indexing has to be run by a batch process.

    I am looking for a way to automate index creation using Adobe Reader Pro without having to use the screen user interface, as the indexing has to be run by a batch process.

    [discussion moved to Creating, Editing & Exporting PDFs forum.]

  • DATABASE.FULLTEXT indexer stuck on Delete

    I've run into an issue with our Database Fulltext Indexer on UCM 10g. The Automatic Update Cycle seems to be 'stuck' on Deleting metadata for a piece of content that is no longer in the system. The indexer State is "Clean Up" and the Status is Active. The message says "Deleting metadata (0 of 1)". I've run IdcAnalyze on the collection looking for possible errors but it did not report any problems.
    Is there some place I can clear this Indexer state and start processing new content items that are backing up? I'd like to be able to avoid a collection rebuild if at all possible.

    Srinath - the two things that stick out in my mind are these two items, one from the server logs and the other from the output after turning on the indexer and systemdatabase tracing.
    It looks like this query reached the maximum timeout and was canceled.
    ------from Server Logs---------
    Unable to do clean up. csDbUnableToExecuteBatch() error occurred during batching: ORA-01013: user requested cancel of current operation
    java.sql.BatchUpdateException: error occurred during batching: ORA-01013: user requested cancel of current operation [ Details ]
    An error has occurred. The stack trace below shows more information.
    !csIndexerAbortedMsg!csIndexerUnableToCleanup!csDbUnableToExecuteBatch,!$error occurred during batching: ORA-01013: user requested cancel of current operation<br>!syExceptionType2,java.sql.BatchUpdateException,error occurred during batching: ORA-01013: user requested cancel of current operation<br>
    intradoc.common.ServiceException: !csIndexerUnableToCleanup
         at intradoc.indexer.Indexer.doCleanUp(Indexer.java:825)
         at intradoc.indexer.Indexer.doWork(Indexer.java:278)
         at intradoc.indexer.Indexer.doIndexing(Indexer.java:439)
         at intradoc.indexer.Indexer.buildIndex(Indexer.java:348)
         at intradoc.server.IndexerMonitor.doIndexing(IndexerMonitor.java:1012)
         at intradoc.server.IndexerMonitor$4.run(IndexerMonitor.java:832)
    Caused by: intradoc.data.DataException: !csDbUnableToExecuteBatch,!$error occurred during batching: ORA-01013: user requested cancel of current operation
         at intradoc.jdbc.JdbcWorkspace.handleSQLException(JdbcWorkspace.java:2371)
         at intradoc.jdbc.JdbcWorkspace.executeBatch(JdbcWorkspace.java:525)
         at intradoc.indexer.IndexerWorkObject.executeDependentQueriesAllowBatch(IndexerWorkObject.java:417)
         at intradoc.indexer.IndexerWorkObject.executeDependentQueriesEx(IndexerWorkObject.java:344)
         at intradoc.indexer.IndexerWorkObject.executeDependentQueries(IndexerWorkObject.java:338)
         at intradoc.indexer.Indexer.cleanUpQueries(Indexer.java:557)
         at intradoc.indexer.Indexer.doCleanUp(Indexer.java:821)
         ... 5 more
    Caused by: java.sql.BatchUpdateException: error occurred during batching: ORA-01013: user requested cancel of current operation
         at oracle.jdbc.driver.OracleStatement.executeBatch(OracleStatement.java:4534)
         at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:213)
         at intradoc.jdbc.JdbcWorkspace.executeBatch(JdbcWorkspace.java:519)
         ... 10 more
    From the server output... the indexer now says it's in Clean Up mode, deleting metadata (0 of 3), and I'm assuming it's these three:
    systemdatabase     01.24 05:48:15.810     index update work     9.40 ms. SELECT/*+ INDEX (Revisions dIndexerState dStatus)*/ dID FROM Revisions
              WHERE dStatus='DELETED' AND dIndexerState = 'A'[Executed. Returned row(s): true]
    systemdatabase     01.24 05:48:15.811     index update work     Batched query added at index 0: DELETE/*+ INDEX (DocMeta PK_DocMeta)*/ FROM DocMeta
              WHERE DocMeta.dID = 485759
    systemdatabase     01.24 05:48:15.813     index update work     Batched query added at index 1: DELETE/*+ INDEX (DocMeta PK_DocMeta)*/ FROM DocMeta
              WHERE DocMeta.dID = 485760
    systemdatabase     01.24 05:48:15.813     index update work     Batched query added at index 2: DELETE/*+ INDEX (DocMeta PK_DocMeta)*/ FROM DocMeta
              WHERE DocMeta.dID = 503528
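    If it keeps getting stuck on the same rows, you can run the indexer's own clean-up predicate (visible in the trace above) from JDBC to list exactly which revisions are affected before deciding on a rebuild. Connection details below are placeholders:

        import java.sql.*;

        public class ListStuckRevisions {
            public static void main(String[] args) throws SQLException {
                // Connection details are placeholders; point this at the UCM schema.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/UCMDB", "ucm", "secret");
                     Statement st = con.createStatement();
                     // Same predicate as the indexer's clean-up query in the trace above.
                     ResultSet rs = st.executeQuery(
                         "SELECT dID FROM Revisions "
                       + "WHERE dStatus = 'DELETED' AND dIndexerState = 'A'")) {
                    while (rs.next()) {
                        System.out.println("Stuck revision dID = " + rs.getLong(1));
                    }
                }
            }
        }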

  • Help needed in index creation and its impact on insertion of records

    Hi All,
    I have a situation like this, and the process involves 4 tables.
    Among the 4 tables, 2 tables have around 30 columns each, and the other 2 have 15 columns each.
    This process contains validation and insert procedure.
    I have already created some 8 indexes on one table, and an average of around 3 indexes on the other tables.
    Now the situation is that I have a select statement in the validation procedure which is based on all 4 tables.
    When I run that select statement, it takes around 30 secs, and when I check the plan, it uses around 21K of memory.
    Now I am considering creating new indexes on all the tables for the purpose of this select statement.
    As for how often this select statement executes: for around every 1000 inserts into the table, the select statement executes around 200 times, and the record count of these tables would be around one crore (10 million).
    Will the number of indexes created on a table impact insert statement performance, or can we create as many indexes as we want on a table? What is the best practice?
    Please guide me on this!
    Regards,
    Shivakumar A

    Hi,
    index creation will most definitely impact your DML performance because when inserting into the table you'll have to update index entries as well. Typically it's a small overhead that is acceptable in most situations, but the only way to say definitively whether or not it is acceptable to you is by testing. Set up some tests, measure performance of some typical selects, updates and inserts with and without an index, and you will have some data to base your decision on.
    Best regards,
    Nikolay
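    As a starting point for such a test, here is a minimal JDBC sketch (table, column, and connection details are all hypothetical) that times the same batch of inserts without and then with an index:

        import java.sql.*;

        public class IndexInsertBenchmark {
            // Times a batch of single-row inserts into a hypothetical test table.
            static long timeInserts(Connection con, int rows) throws SQLException {
                long start = System.nanoTime();
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO bench_t (id, val) VALUES (?, ?)")) {
                    for (int i = 0; i < rows; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "value-" + i);
                        ps.executeUpdate();
                    }
                }
                con.commit();
                return (System.nanoTime() - start) / 1_000_000;  // elapsed ms
            }

            public static void main(String[] args) throws SQLException {
                // Connection details are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                    con.setAutoCommit(false);
                    try (Statement st = con.createStatement()) {
                        st.execute("CREATE TABLE bench_t (id NUMBER, val VARCHAR2(40))");
                    }
                    System.out.println("No index:   " + timeInserts(con, 100_000) + " ms");
                    try (Statement st = con.createStatement()) {
                        st.execute("CREATE INDEX bench_t_val ON bench_t (val)");
                    }
                    System.out.println("With index: " + timeInserts(con, 100_000) + " ms");
                }
            }
        }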

  • Delete Index in Process Chain Takes long time after SAP BI 7.0 SP 27

    After upgrading to SAP BI 7.0 SP 27, the Delete Index and Create Index processes in process chains take a long time.
    For example: Delete Index for 0SD_C03 takes around 55 minutes.
    Before the SP upgrade, it took around 2 minutes to delete the indexes from 0SD_C03.
    Regards
    Madhu P Menon

    Hi,
    Normally, index creation or deletion can take a long time if your database statistics are not updated properly. So check the statistics after your data loading has completed and index generation is done, and recreate the database statistics.
    Then recheck...
    Regards,
    Satya

  • What are the User Exits for Sales Order creation process?

    Hi,
    What are the user exits for the sales order creation process, and how can I find them?
    thanks in advance,
    will reward,
    Mindaugas

    Please check this info:
    User Exits In Sales Document Processing
    This IMG step describes additional installation-specific processing in sales document processing. In particular, the required INCLUDES and user exits are described.
    Involved program components
    System modifications for sales document processing affect different areas. Depending on the modification, you make the changes in the program components provided:
    MV45ATZZ
    For entering metadata for sales document processing. User-specific metadata must start with "ZZ".
    MV45AOZZ
    For entering additional installation-specific modules for sales document processing which are called up by the screen and run under PBO (Process Before Output) prior to output of the screen. The modules must start with "ZZ".
    MV45AIZZ
    For entering additional installation-specific modules for sales document processing. These are called up by the screen and run under PAI (Process After Input) after data input (for example, data validation). The modules must start with "ZZ".
    MV45AFZZ and MV45EFZ1
    For entering installation-specific FORM routines and for using user exits, which may be required and can be used if necessary. These program components are called up by the modules in MV45AOZZ or MV45AIZZ.
    User exits in the program MV45AFZZ
    The user exits which you can use for modifications in sales document processing are listed below.
    USEREXIT_DELETE_DOCUMENT
    This user exit can be used for deleting data which was stored in a separate table during sales document creation, for example, if the sales document is deleted.
    For example, if an additional table is filled with the name of the person in charge (ERNAM) during order entry, this data can also be deleted after the sales order has been deleted.
    The user exit is called up at the end of the FORM routine BELEG_LOESCHEN shortly before the routine BELEG_SICHERN.
    USEREXIT_FIELD_MODIFICATION
    This user exit can be used to modify the attributes of the screen fields.
    To do this, the screen fields are allocated to so-called modification groups 1 - 4 and can be edited together during a modification in ABAP. If a field has no field name, it cannot be allocated to a group.
    The usage of the field groups (modification group 1-4) is as follows:
    Modification group 1: Automatic modification with transaction MFAW
    Modification group 2: It contains 'LOO' for step loop fields
    Modification group 3: For modifications which depend on check tables or on other fixed information
    Modification group 4: is not used
    The FORM routine is called up for every field of a screen. If you require changes to be made, you must make them in this user exit.
    This FORM routine is called up by the module FELDAUSWAHL.
    See the Screen Painter manual for further information on structuring the interface.
    USEREXIT_MOVE_FIELD_TO_VBAK
    Use this user exit to assign values to new fields at sales document header level. It is described in the section "Transfer of the customer master fields into the sales document".
    The user exit is called up at the end of the FORM routine VBAK_FUELLEN.
    USEREXIT_MOVE_FIELD_TO_VBAP
    Use this user exit to assign values to new fields at sales document item level. It is described in the section "Copy customer master fields into the sales document".
    The user exit is called up at the end of the FORM routine VBAP_FUELLEN.
    USEREXIT_MOVE_FIELD_TO_VBEP
    Use this user exit to assign values to new fields at the level of the sales document schedule lines.
    The user exit is called up at the end of the FORM routine VBEP_FUELLEN.
    USEREXIT_MOVE_FIELD_TO_VBKD
    Use this user exit to assign values to new fields for business data of the sales document. It is described in the section "Copy customer master fields into sales document".
    The user exit is called up at the end of the FORM routine VBKD_FUELLEN.
    USEREXIT_NUMBER_RANGE
    Use this user exit to define the number ranges for internal document number assignment depending on the required fields. For example, if you want to define the number range depending on the sales organization (VKORG) or on the selling company (VKBUR), use this user exit.
    The user exit is called up in the FORM routine BELEG_SICHERN.
    USEREXIT_PRICING_PREPARE_TKOMK
    Use this user exit if you want to include and assign a value to an additional header field in the communication structure KOMK taken as a basis for pricing.
    USEREXIT_PRICING_PREPARE_TKOMP
    Use this user exit if you want to include or assign a value to an additional item field in the communication structure KOMP taken as a basis for pricing.
    USEREXIT_READ_DOCUMENT
    You use this user exit if further additional tables are to be read when importing TA01 or TA02.
    The user exit is called up at the end of the FORM routine BELEG_LESEN.
    USEREXIT_SAVE_DOCUMENT
    Use this user exit to fill user-specific statistics update tables.
    The user exit is called up by the FORM routine BELEG-SICHERN before the COMMIT command.
    Note
    If a standard field is changed, the field r185d-dataloss is set to X. The system queries this indicator at the beginning of the safety routine. This is why this indicator must also be set during the maintenance of user-specific tables that are also to be saved.
    USEREXIT_SAVE_DOCUMENT_PREPARE
    Use this user exit to make certain changes or checks immediately before saving a document. It is the last possibility for changing or checking a document before posting.
    The user exit is carried out at the beginning of the FORM routine BELEG_SICHERN.
    User exits in the program MV45AFZA
    USEREXIT_MOVE_FIELD_TO_KOMKD
    Use this user exit to include or assign values to additional header fields in the communication structure KOMKD taken as a basis for the material determination. This is described in detail in the section "New fields for material determination".
    USEREXIT_MOVE_FIELD_TO_KOMPD
    Use this user exit to include or assign values to additional item fields in the communication structure KOMPD taken as a basis for the material determination. This is described in detail in the section "New fields for material determination".
    USEREXIT_MOVE_FIELD_TO_KOMKG
    Use this user exit to include or assign values to additional fields in the communication structure KOMKG taken as a basis for material determination and material listing. This is described in detail in the section "New fields for listing/exclusion".
    USEREXIT_MOVE_FIELD_TO_KOMPG
    Use this user exit to include or assign values to additional fields in the communication structure KOMPG taken as a basis for material determination and material listing. This is described in detail in the section "New fields for listing/exclusion".
    USEREXIT_REFRESH_DOCUMENT
    With this user exit, you can reset certain customer-specific fields as soon as processing of a sales document is finished and before the following document is edited.
    For example, if the credit limit of the sold-to party is read during document processing, in each case it must be reset again before processing the next document so that the credit limit is not used for the sold-to party of the following document.
    The user exit is executed when a document is saved if you leave the processing of a document with F3 or F15.
    The user exit is called up at the end of the FORM routine BELEG_INITIALISIEREN.
    User-Exits in program MV45AFZB
    USEREXIT_CHECK_XVBAP_FOR_DELET
    In this user exit, you can enter additional data for deletion of an item. If the criteria are met, the item is not deleted (unlike in the standard system).
    USEREXIT_CHECK_XVBEP_FOR_DELET
    In this user exit, you can enter additional data for deletion of a schedule line. If the criteria are met, the schedule line is not deleted (unlike in the standard system).
    USEREXIT_CHECK_VBAK
    This user exit can be used to carry out additional checks (e.g. for completion) in the document header. The system could, for example, check whether certain shipping conditions are allowed for a particular customer group.
    USEREXIT_CHECK_VBAP
    This user exit can be used to carry out additional checks (e.g. for completion) at item level.
    USEREXIT_CHECK_VBKD
    The user exit can be used to carry out additional checks (e.g. for completion) on the business data in the order.
    USEREXIT_CHECK_VBEP
    This user exit can be used to carry out additional checks (e.g. for completion) on the schedule line. During BOM explosion, for example, you may want certain fields to be copied from the main item to the sub-items (as for billing block in the standard system).
    USEREXIT_CHECK_VBSN
    You can use this user exit to carry out additional checks (e.g. for completion) on the serial number.
    USEREXIT_CHECK_XVBSN_FOR_DELET
    In this user exit, you can enter additional criteria for deletion of the serial number. If the criteria are met, the serial number is not deleted (unlike in the standard system).
    USEREXIT_FILL_VBAP_FROM_HVBAP
    You can use this user exit to fill additional fields in the sub-item with data from the main item.
    USEREXIT_MOVE_FIELD_TO_TVCOM_H
    You can use this user exit to influence text determination for header texts. For example, you can include new fields for text determination or fill fields that already exist with a new value.
    USEREXIT_MOVE_FIELD_TO_TVCOM_I
    You can use this user exit to influence text determination for item texts. For example, you can include new fields for text determination or fill fields that already exist with a new value.
    User-Exits for product allocation:
    The following user exits all apply to structure COBL, in which the data for account determination is copied to item level.
    USEREXIT_MOVE_FIELD_TO_COBL
    Option to include new fields in structure COBL.
    USEREXIT_COBL_RECEIVE_VBAK
    Option to assign values from the document header to the new fields.
    USEREXIT_COBL_RECEIVE_VBAP
    Option to supply values from the item to the new fields.
    USEREXIT_COBL_SEND_ITEM
    A changed field can be copied from the structure into the item. You could use the user exit to display a certain field in the account assignment block (see also MV45AFZB).
    USEREXIT_COBL_SEND_HEADER
    A changed field can be copied from the structure to the header (see source text MV45AFZB)
    USEREXIT_SOURCE_DETERMINATION
    You can use this user exit to determine which plant will be used for the delivery. In the standard system, the delivering plant is copied from the customer master or the customer-material info record. If you want to use a different rule, then you must enter it in this user exit.
    USEREXIT_MOVE_FIELD_TO_ME_REQ
    With this user exit you can include additional fields for the following fields:
    EBAN (purchase requisition)
    EBKN (purchase requisition-account assignment)
    USEREXIT_GET_FIELD_FROM_SDCOM
    Option to include new fields for the variant configuration. Fields that are included in structure SDCOM can be processed and then returned to the order.
    USEREXIT_MOVE_WORKAREA_TO_SDWA
    You can use this user exit to format additional work areas for the variant configuration. You will find notes on the user exit in MV45AFZB.
    User-Exits for first data transfer:
    The following user exits can only be used for the first data transfer.
    Note
    Only use the user exits if the names/fields do NOT have the same name.
    USEREXIT_MOVE_FIELD_TO_VBAKKOM
    Option to include additional fields in structure VBAKKOM (communication fields for maintaining the sales document header)
    USEREXIT_MOVE_FIELD_TO_VBAPKOM
    Option to include additional fields in structure VBAPKOM (communication fields for maintaining a sales item)
    USEREXIT_MOVE_FIELD_TO_VBEPKOM
    Option to include additional fields in structure VBEPKOM (communication fields for maintaining a sales document schedule line)
    USEREXIT_MOVE_FIELD_TO_VBSN
    You can use this user exit to include fields in structure VBSN (scheduling agreement-related change status).
    USEREXIT_MOVE_FIELD_TO_KOMKH
    You can use this user exit to include new fields for batch determination (document header).
    USEREXIT_MOVE_FIELD_TO_KOMPH
    You can use this user exit to include new fields for batch determination (document item).
    USEREXIT_CUST_MATERIAL_READ
    You can use this user exit to set another customer number in the customer material info record (e.g. with a customer hierarchy)
    USEREXIT_NEW_PRICING_VBAP
    Option for entry of preconditions for carrying out pricing again (e.g. changes made to a certain item field could be used as the precondition for pricing to be carried out again). Further information in MV45AFZB.
    USEREXIT_NEW_PRICING_VBKD
    Option for entry of preconditions for carrying out pricing again (e.g. changes to the customer group or price group could be set as the preconditions for the system to carry out pricing again). Further information in MV45AFZB.
    User-Exits in Program MV45AFZD
    USEREXIT_CONFIG_DATE_EXPLOSION
    The BOM is exploded in the order with the entry date. You can use this user exit to determine which data should be used to explode the BOM (explosion with required delivery date, for example).
    User exits in the program FV45EFZ1
    USEREXIT_CHANGE_SALES_ORDER
    In the standard SAP R/3 System, the quantity and confirmed date of the sales document schedule line is changed automatically if a purchase requisition is allocated, and it or the sales document is changed (for example, quantity, date).
    If you want to change this configuration in the standard system, you can define certain requirements in order to protect your sales orders from being changed automatically. Use this user exit for this purpose. Decide at this point whether the schedule lines are to be changed.
    User-Exits in Program RV45PFZA
    USEREXIT_SET_STATUS_VBUK
    In this user exit you can store a specification for the reserve fields in VBUK (header status). Reserve field UVK01 could, for example, be used for an additional order status (as for rejection status, etc.).
    The following workareas are available for this user exit:
    VBUK (header status)
    FXVBUP (item status)
    FXVBUV (Incompletion)
    USEREXIT_SET_STATUS_VBUP
    In this user exit you can store a specification for the reserve fields for VBUP (item status).
    The following workareas are available for this user exit:
    FXVBAP (Item data)
    FXVBAPF (Dynamic part of order item flow)
    FXVBUV (Incompletion)
    USEREXIT_STATUS_VBUK_INVOICE
    You can use this user exit to influence billing status at header level.
    User exits in the screens
    Additional header data is on screen SAPMV45A 0309, additional item data on screen SAPMV45A 0459. These screens contain the Include screens SAPMV45A 8309 or SAPMV45A 8459 as user exits.
    Fields which are also to be included in the sales document for a specific installation should be included on the Include screens for maintenance. If an application-specific check module is needed for the fields, it can be included in the Include MV45AIZZ. The module is called up in the processing logic of the Include screens.
    For field transports, you do not have to make changes or adjustments.
    Example
    A new field, VBAK-ZZKUN, should be included in table VBAK.
    If the check is defined via the Dictionary (fixed values or check table) the field must be included with the fullscreen editor in the Include screen SAPMV45A 8309. In this case, no change has to be made to the processing logic.
    User Exits in Program MV45AFZ4
    USEREXIT_MOVE_FIELD_TO_KOMK
    You can use this user exit to add or edit additional header fields in the communication structure KOMK for free goods determination. For more information, see the New Fields for Free Goods Determination IMG activity.
    USEREXIT_MOVE_FIELD_TO_KOMP
    You can use this user exit to add or edit additional item fields in the communication structure KOMP for free goods determination. For more information see the New Fields for Free Goods Determination IMG activity.
    User Exits in the SAPFV45PF0E and SAPFV45PF0C Programs
    EXIT_SAPFV45P_001
    You can use this user exit to decide whether intercompany billing data is used in the profitability segment for cross-company code sales, or whether the data comes from external billing (external customer, sales data from the selling company code).
    Regards
    Eswar

  • How to stop the process chain showing status as yellow but no process step

    Dear Friends,
    How do we stop a process chain that shows yellow status when no process step is running in it?
    We manually triggered the process chain for sales; it finished successfully up to the data load, but the next steps, create index and DB statistics, are both still in unscheduled status and there is no background job for them.
    Please guide me.
    Thanking you in advance.
    Regards,
    Shubhangi

    Hi,
    At times, even I have faced this situation. In those cases, running the function module RSPC_PROCESS_FINISH with the right parameters came to my rescue.
    Regards,
    Raj

  • Parallel Index creation takes more time...!!

    OS - Windows 2008 Server R2
    Oracle - 10.2.0.3.0
    My table size is 400 GB.
    Number of records: 6,574,595,123
    My column definition: first_col varchar2(22); I am creating the index on this column.
    first_col: the actual average size of the column value is 10.
    I started to create the index on this column with the following command:
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    -> In my first attempt, after three hours I got an error:
    ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
    So I increased the size of the temp tablespace to 380 GB, because I expected the first_col index to be about that size.
    -> In my second attempt, the index creation is still going even after 17 hours!
    Now the usage of temp space is 162 GB... and it is still growing.
    -> I checked EM Advisor Central ADDM:
    It says: "The PGA was inadequately sized, causing additional I/O to temporary tablespaces to consume significant database time."
    1. Why does this take so much temp space?
    2. Does parallel index creation really require this much time - more than 17 hours?
    3. How do I calculate and set the size of the PGA?

    OraFighter wrote:
    Oracle - 10.2.0.3.0
    My table size is 400 GB.
    Number of records: 6,574,595,123
    My column definition: first_col varchar2(22); I am creating the index on this column.
    first_col: the actual average size of the column value is 10.
    I started to create the index on this column with the following command:
    CREATE INDEX CALL_GROUP1_ANO ON CALL_GROUP1(A_NO) LOCAL PARALLEL 8 NOLOGGING COMPRESS ;
    Now the usage of temp space is 162 GB... and it is still growing.
    The entire data set has to be sorted - and the space needed doesn't really vary with the degree of parallelism.
    6,574,595,123 index entries with a key size of 10 bytes each (assuming that in your choice of character set one character = one byte) require per row approximately:
    4 bytes row overhead + 10 bytes data + 2 bytes column overhead for the data + 6 bytes rowid + 2 bytes column overhead for the rowid = 24 bytes.
    For the sorting overheads, using the version 2 sort, you need approximately one pointer per row, which is 8 bytes (I assume you're on 64-bit Oracle on this platform) - giving a total of 32 bytes per row.
    32 * 6,574,595,123 / 1073741824 = 196 GB
    You haven't said how many partitions you have, but you might want to consider creating the index unusable, then issuing a rebuild command on each partition in turn. From "Practical Oracle 8i":
    <blockquote>
    In the absence of partitioned tables, what would you do if you needed to create a new index on a massive data set to address a new user requirement? Can you imagine the time it would take to create an index on a 450M row table, not to mention the amount of space needed in the temporary segment. It's the sort of job that you schedule for Christmas or Easter and buy a couple of extra discs to add to the temporary tablespace.
    With suitably partitioned tables, and perhaps a suitably friendly application, the scale of the problems isn't really that great, because you can build the index on each partition in turn. This trick depends on a little SQL feature that appears to be legal even though I haven't managed to find it in the SQL reference manual:
         create index big_new_index on partitioned_table (colX)
         local
         UNUSABLE
         tablespace scratchpad
    The key word is UNUSABLE. Although the manual states that you can 'alter' an index to be unusable, it does not suggest that you can create it as initially unusable, nevertheless this statement works. The effect is to put the definition of the index into the data dictionary, and allocate all the necessary segments and partitions for the index - but it does not do any of the real work that would normally be involved in building an index on 450M rows.
    </blockquote>
    (The trick was eventually documented a couple of years after I wrote the book.)
    Regards
    Jonathan Lewis
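    To make the "rebuild each partition in turn" step concrete, here is a hedged JDBC sketch (index name and connection details are placeholders) that walks USER_IND_PARTITIONS and rebuilds the unusable partitions one at a time:

        import java.sql.*;
        import java.util.ArrayList;
        import java.util.List;

        public class RebuildLocalIndex {
            public static void main(String[] args) throws SQLException {
                // Connection details and the index name are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                    List<String> parts = new ArrayList<>();
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT partition_name FROM user_ind_partitions "
                          + "WHERE index_name = ? AND status = 'UNUSABLE' "
                          + "ORDER BY partition_position")) {
                        ps.setString(1, "BIG_NEW_INDEX");
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) parts.add(rs.getString(1));
                        }
                    }
                    for (String part : parts) {
                        try (Statement st = con.createStatement()) {
                            // One partition's sort at a time keeps temp usage bounded.
                            st.execute("ALTER INDEX big_new_index REBUILD PARTITION "
                                     + part + " NOLOGGING");
                            System.out.println("Rebuilt partition " + part);
                        }
                    }
                }
            }
        }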

  • Why index creation is slower on the new server?

    Hi there,
    Here is a bit of background/env info:
    Existing prod server (physical): Oracle 10gR2 10.2.0.5.8 on ASM
    RAM: 96GB
    CPUs: 8
    RHEL 5.8 64bit
    Database size around 2TB
    New server:
    VMWare VM with Oracle 10gR2 10.2.0.5.8 on ASM
    RAM 128GB
    vCPUs: 16
    RHEL 5.8 64bit
    Copy of prod DB (from above server) - all init param are the same
    I noticed that index creation is slower on this server. I ran the following query:
    SELECT name c1, cnt c2, DECODE (total, 0, 0, ROUND (cnt * 100 / total)) c3
      FROM (SELECT name, VALUE cnt, (SUM (VALUE) OVER ()) total
              FROM v$sysstat
             WHERE name LIKE 'workarea exec%')
    C1                               C2          C3
    workarea executions - optimal    100427285   100
    workarea executions - onepass    2427        0
    workarea executions - multipass  0           0
    The following bitmap index takes around 40 minutes on the prod server, while it takes around 2 hours on the VM.
    CREATE BITMAP INDEX MY_IDX ON
    MY_HIST(PROD_YN)  TABLESPACE TS_IDX PCTFREE 10
    STORAGE(INITIAL 12582912 NEXT 12582912 PCTINCREASE 0 ) NOLOGGING
    This index is created during a batch process, and the dev team is complaining about the slowness of the batch on the new server. I have found this one statement responsible for some of the grief. There may be more, and I am investigating.
    I know that adding the "parallel" option may speed it up, but I want to find out why it is slow on the new server.
    I tried creating a simple index on a large table, and it took 2 minutes in prod and 3.5 minutes on this VM. So I guess index creation is slower on this VM in general. DML (select/insert/delete/update) seems to run with better elapsed times.
    Any clues what might be causing this slowness in index creation?
    Best regards

    I have been told that the SAN in use by the VM has a capacity of 10K IOPS. Not sure if this info helps; I don't know more than this about the storage.
    What else do I need to find out? Please let me know - I'll check with my sys admin and update the thread.
    Best regards
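    To put numbers on the comparison, one option is to sample the cumulative "physical read total bytes" counter from v$sysstat around the same CREATE INDEX on both servers. A minimal sketch with placeholder names; note the counter is instance-wide, so run it on an otherwise quiet system:

        import java.sql.*;

        public class IndexBuildThroughput {
            // Reads a cumulative counter from v$sysstat.
            static long sysstat(Connection con, String name) throws SQLException {
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT value FROM v$sysstat WHERE name = ?")) {
                    ps.setString(1, name);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) throw new SQLException("statistic not found: " + name);
                        return rs.getLong(1);
                    }
                }
            }

            public static void main(String[] args) throws SQLException {
                // Connection details, index and table names are placeholders.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret")) {
                    long before = sysstat(con, "physical read total bytes");
                    long t0 = System.nanoTime();
                    try (Statement st = con.createStatement()) {
                        st.execute("CREATE INDEX my_idx ON my_hist (prod_yn) NOLOGGING");
                    }
                    double secs = (System.nanoTime() - t0) / 1e9;
                    long bytes = sysstat(con, "physical read total bytes") - before;
                    System.out.printf("%.0f s elapsed, ~%.1f MB/s read%n",
                            secs, bytes / 1048576.0 / secs);
                }
            }
        }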

  • Status of Info packages and process changes become yellow after Unicode con

    Dear Friends,
    We have been facing very critical issue in Production system after Unicode conversion.
    Sequences of Details are provided for your reference.
    1. We suspended all BW loads for 48 hours (2 days) before the Unicode conversion started and handed the BW system over to Basis.
    2. Basis started the Unicode conversion on BW and found some standard jobs (BI_*, e.g. BI_PROCESS_LOADING) in released status. They assumed these jobs could run at any time, so they suspended all of them, but they didn't inform the BW team that they had done so. We don't know when they did it: before the Unicode installation, during the Unicode conversion, or afterwards.
    3. Basis handed the system over to the BW team for Unicode testing (we call it five-finger testing), and we didn't find any issues. We then handed the system back to Basis for the post-installation settings for Unicode and backups. At that point we didn't release any jobs to load data into BW, because Basis hadn't told us that we could start loads.
    4. After the post-installation was done, Basis told us we could start loads from Monday. The entire Unicode activity was done over the weekend.
    5. Before releasing all BW loads to run regularly, I found that the status of all completed process chains had become yellow, and on further detailed analysis I found some more strange things:
              1. Only the InfoPackages in the process chains have become yellow; the other process variants, like activation, DTPs, index, and DB stats, are fine.
              2. Requests in the PSA which had completed successfully have become yellow.
              3. InfoPackages in the process chains that completed successfully are not updating the correct status; instead of green they show yellow, and because of that the dependent objects are not triggered.
    We all think that the suspension of the standard jobs must have caused the problem.
    How can we change the status of the IPs, PSA requests, and process chains automatically?
    We have reactivated all PCs and released them to run tomorrow.
    Could you please share your thoughts about the issue and help us resolve it?
    Thanks in Advance,
    Nageswar.

    Nageswar,
    A couple of questions:
    1. If your previous PC runs were in yellow status, that would not impact future PC runs.
    2. If your IPAKs are in yellow, then you might have an issue with deltas, since the last status would have been shown as incomplete.
    3. Suspension of the standard jobs should not affect the status, since only the released jobs were suspended and not the actual loads, i.e. the loads were not stopped halfway. You might have to schedule the process chains again to get back the old schedule.
    Usually in a Unicode conversion, the data is exported out of the DB and imported back - this might have caused some inconsistency in the RS tables used for the monitors.
    Which version of BW are you on?
    Were there any errors in the export/import procedure for the Unicode conversion?
    As part of testing, did you check whether loads went through properly?
    Were stats run on the tables after the Unicode conversion?
    Also, if possible, regenerate all the indexes on the affected tables.
    Rerun stats and indexes for all RS* tables.
