BAPI BAPI_INCOMINGINVOICE_CREATE to capture and process PA segment info

I am using the BAPI BAPI_INCOMINGINVOICE_CREATE to post invoices as in MIRO.
However, this MIRO-based BAPI isn't equipped to capture and process PA segment information for the G/L account tab, and I need to accept PA segment values (Minor Product code and Brand).
I would like to know how to solve this issue: is there a user exit available, an alternative BAPI, or some other way to fix this?

According to this article we need to pass the AccountingData table to the BAPI for an unplanned account assignment.
Amount, quantity and G/L account information must also be passed in this table, but in my case I do not want the GR/IR account to be cleared with the PO amount for the subsequent invoices.
I have gone through the BAPI documentation but could not find a solution. Please let me know if there is any parameter that stops the GR/IR account from being debited with the PO amount.
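
For illustration, here is a minimal ABAP sketch of the call with a direct G/L account line, assuming the GLACCOUNTDATA table of the standard BAPI interface; all literal values (company code, account, amounts, currency) are placeholders, PO-related lines would additionally go in ITEMDATA, and the PA segment characteristics themselves are exactly what this standard interface does not expose.

* Hedged sketch: one direct G/L line via BAPI_INCOMINGINVOICE_CREATE.
* All literal values are placeholders, not taken from a real system.
DATA: ls_header    TYPE bapi_incinv_create_header,
      ls_glaccount TYPE bapi_incinv_create_gl_account,
      lt_glaccount TYPE STANDARD TABLE OF bapi_incinv_create_gl_account,
      lt_return    TYPE STANDARD TABLE OF bapiret2,
      lv_invoice   TYPE c LENGTH 10,
      lv_year      TYPE n LENGTH 4.

ls_header-invoice_ind  = 'X'.           "invoice, not credit memo
ls_header-doc_date     = sy-datum.
ls_header-pstng_date   = sy-datum.
ls_header-comp_code    = 'RPPL'.        "placeholder company code
ls_header-currency     = 'INR'.         "placeholder currency
ls_header-gross_amount = '1000.00'.

ls_glaccount-invoice_doc_item = '000001'.
ls_glaccount-gl_account       = '0000400000'.  "placeholder G/L account
ls_glaccount-item_amount      = '1000.00'.
ls_glaccount-db_cr_ind        = 'S'.            "debit
ls_glaccount-comp_code        = 'RPPL'.
ls_glaccount-tax_code         = 'V0'.
ls_glaccount-profit_ctr       = '0001100180'.   "10 characters, leading zeros
APPEND ls_glaccount TO lt_glaccount.

CALL FUNCTION 'BAPI_INCOMINGINVOICE_CREATE'
  EXPORTING
    headerdata       = ls_header
  IMPORTING
    invoicedocnumber = lv_invoice
    fiscalyear       = lv_year
  TABLES
    glaccountdata    = lt_glaccount
    return           = lt_return.

READ TABLE lt_return TRANSPORTING NO FIELDS WITH KEY type = 'E'.
IF sy-subrc <> 0.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
ENDIF.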

Similar Messages

  • Location of Capture Process and Perf Overhead

    Hi,
    We are just starting to look at Streams technology. I am reading the documentation and it implies that the capture process runs on the source database node. I am concerned about the overhead on the OLTP box. I have a few questions I was hoping to get clarification on.
    1. Can I send the redo logs to another node/db with data dictionary info and run the capture there? I would like to offload the performance overhead to another box, and I thought LogMiner could do it, so why not Streams.
    2. If I run the capture process on one node/db, can the initial queue I write to be on another node/db, or is it implicit to where I run the capture process? I think I know this answer but would like to hear yours.
    3. Are there any performance statistics on the cost of the capture process to an OLTP system? I realize there are many variables but am wondering if I should even be concerned with offloading the capture process.
    Many thanks in advance for your time.
    Regards,
    Tom

    In the current release, Oracle Streams performs all capture activities at the source site. The ability to capture changes from the redo logs at an alternative site is planned for a future release. Captured changes are stored in an in-memory buffer queue on the local database. Multi-CPU servers with enough available memory should be able to handle the overhead of capture.

  • Excise invoice capture process

    Hi,
    I want to know about the excise invoice capture process for a depot plant: which transaction code is used for a depot plant, how to do Part 1 and Part 2, and also the reversal process for the same.
    Also, what is the difference between the excise invoice capture process for a depot plant and a non-depot plant?
    regards,
    zafar

    Hi Zafar,
    There are no Part 1 and Part 2 entries in RG23D for the depot scenario. You can update RG23D at the time of MIGO or with J1IG "Capture excise invoice for depot".
    For cancelling you can use the same transaction, and to send goods out from the depot plant use transaction J1IJ to update RG23D.
    The rest of the process remains the same: extraction with J2I5 and printing through J2I6.
    BR

  • Error while using bapi BAPI_INCOMINGINVOICE_CREATE to post MIRO

    Hi Friends,
    I'm using BAPI BAPI_INCOMINGINVOICE_CREATE to post MIRO.
    I'm passing data to table GLACCOUNTDATA.
    Below are the table fields I'm passing:
        INVOICE_DOC_ITEM " '000001' always by default
        GL_ACCOUNT       " constant for all items in my case
        ITEM_AMOUNT      " total PO net amount + freight charges at header level
        DB_CR_IND        " 'S' always by default
        COMP_CODE        " 'RPPL' always by default
        TAX_CODE         " 'V0' always by default
        PROFIT_CTR       " e.g. 1100180, based on the plant
    While posting, this BAPI throws the error below:
        'profit centre RPPL/1100180 does not exist on 01.12.2008'
    where 01.12.2008 is the MIRO posting date which I'm passing in the header.
    We checked the validity dates for the profit centres and they are correct.
    Awaiting your reply.
    Regards,
    Venky

    Hi,
    It would be better if you recheck the data input for the BAPI. If you are sure the data are OK but the BAPI still gives the error message, then I suggest raising an OSS message.
    Regards,
    Teddy Kurniawan
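
    As a hedged illustration of the data check suggested above: PROFIT_CTR in GLACCOUNTDATA is a 10-character field with ALPHA conversion, so a value entered as 1100180 normally has to be passed in its internal form with leading zeros, and the profit centre must also be valid on the posting date and assigned to the company code. A minimal sketch (the literal values are placeholders taken from the post):

    * Hedged sketch: filling one GLACCOUNTDATA line with an internally
    * formatted profit centre before calling the BAPI.
    DATA: ls_gl    TYPE bapi_incinv_create_gl_account,
          lt_gl    TYPE STANDARD TABLE OF bapi_incinv_create_gl_account,
          lv_prctr TYPE prctr.

    lv_prctr = '1100180'.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_prctr
      IMPORTING
        output = lv_prctr.                  "becomes '0001100180'

    ls_gl-invoice_doc_item = '000001'.
    ls_gl-gl_account       = '0000400000'.  "placeholder, padded to 10 characters
    ls_gl-item_amount      = '1000.00'.     "PO net amount + freight (placeholder)
    ls_gl-db_cr_ind        = 'S'.
    ls_gl-comp_code        = 'RPPL'.
    ls_gl-tax_code         = 'V0'.
    ls_gl-profit_ctr       = lv_prctr.
    APPEND ls_gl TO lt_gl.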

  • Instantiation and start_scn of capture process

    Hi,
    We are working on Streams replication, and I have a doubt about the behavior of the stream.
    During setup, we have to instantiate the database objects whose data will be transferred during the process. Instantiation creates the objects at the destination DB and sets the SCN value beyond which changes from the source DB will be accepted. When the capture process is created, it is assigned a specific start_scn value; it starts capturing changes beyond this value and puts them in the capture queue.
    If the capture process gets aborted in between, and we have no alternative other than re-creating it, what happens to the data created during that drop/re-creation of the capture process? Do I need to physically extract the data and import it at the destination DB? When the objects at the destination DB have already been instantiated, why is there no mechanism by which the new capture process starts capturing changes from the lowest instantiation SCN among all instantiated tables? Is there any other workaround than exp/imp when the source and destination schemas are not in sync because of a failure of the capture process? We did face this problem and could find only one workaround: exp/imp of the data.
    thanx,

    Thanks Mr SK.
    The following queries give some kind of confirmation:
    source DB
    SELECT SID, SERIAL#, CAPTURE#,CAPTURE_MESSAGE_NUMBER, ENQUEUE_MESSAGE_NUMBER, APPLY_NAME, APPLY_MESSAGES_SENT FROM V$STREAMS_CAPTURE
    target DB
    SELECT SID, SERIAL#, APPLY#, STATE,DEQUEUED_MESSAGE_NUMBER, OLDEST_SCN_NUM FROM V$STREAMS_APPLY_READER
    One more question:
    Is there any maximum limit on the number of databases involved in Oracle Streams?
    Ths
    SM.Kumar

  • Bapi for Inbound EDI Processing

    Hi All,
    There is a BAPI for creating goods movements, BAPI_GOODSMVT_CREATE. I want to use this BAPI to trigger inbound processing, and I need help configuring the interface for creating goods movements using the BAPI.
    I have successfully configured inbound processing using process code WMMB, which triggers function module L_IDOC_INPUT_WMMBXY. But this FM, or rather the IDoc type WMMBID02, does not have any fields for catch weight items, and I do not want to create any extensions on this IDoc type.
    So I was wondering if the BAPI BAPI_GOODSMVT_CREATE can be configured for EDI inbound processing.

    Hi,
    Try message type MBGMCR; it calls function module IDOC_INPUT_MBGMCR, which in turn calls BAPI_GOODSMVT_CREATE.
    Regards,
    Darek
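
    For reference, a minimal hedged sketch of a direct call to BAPI_GOODSMVT_CREATE, the BAPI that message type MBGMCR ultimately invokes; all values are placeholders, and the catch-weight fields in question are release dependent and not shown.

    * Hedged sketch: direct call to BAPI_GOODSMVT_CREATE with placeholder data.
    DATA: ls_header  TYPE bapi2017_gm_head_01,
          ls_code    TYPE bapi2017_gm_code,
          ls_item    TYPE bapi2017_gm_item_create,
          lt_item    TYPE STANDARD TABLE OF bapi2017_gm_item_create,
          ls_headret TYPE bapi2017_gm_head_ret,
          lt_return  TYPE STANDARD TABLE OF bapiret2.

    ls_header-pstng_date = sy-datum.
    ls_header-doc_date   = sy-datum.
    ls_code-gm_code      = '05'.            "placeholder goods movement code

    ls_item-material  = 'MAT-0001'.         "placeholder material
    ls_item-plant     = '1000'.
    ls_item-stge_loc  = '0001'.
    ls_item-move_type = '501'.              "placeholder movement type
    ls_item-entry_qnt = 10.
    ls_item-entry_uom = 'ST'.
    APPEND ls_item TO lt_item.

    CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
      EXPORTING
        goodsmvt_header  = ls_header
        goodsmvt_code    = ls_code
      IMPORTING
        goodsmvt_headret = ls_headret
      TABLES
        goodsmvt_item    = lt_item
        return           = lt_return.

    READ TABLE lt_return TRANSPORTING NO FIELDS WITH KEY type = 'E'.
    IF sy-subrc <> 0.
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = 'X'.
    ENDIF.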

  • Internal Error when creating Capture Process

    Hi,
    I get the following when trying to create my capture process:
    BEGIN
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'capture_queue_table',
        queue_name  => 'capture_queue');
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'apply_queue_table',
        queue_name  => 'apply_queue');
    END;
    /
    BEGIN
    ERROR at line 1:
    ORA-00600: internal error code, arguments: [kcbgtcr_4], [32492], [0], [1], [],
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 408
    ORA-06512: at line 2
    Any ideas?
    Cheers,
    Warren

    Make sure that you have upgraded to the 9.2.0.2 patchset and, as part of the migration to 9202, that you have run the catpatch.sql script.

  • Capture process: Can it write to a queue table in another database?

    The capture process reads the archived redo logs. It then writes the appropriate changes into the queue table in the same database.
    Can the Capture process read the archived redo logs and write to a queue table in another database?
    HP-UX
    Oracle 9.2.0.5

    What you are asking is not possible directly in 9i i.e. capture process cannot read the logs and write to a queue somewhere else.
    If the other database is also Oracle with platform and version compatibility then, you can use the 10g downstream capture feature to accomplish this.

  • What values to pass in BAPI BAPI_INCOMINGINVOICE_CREATE for Business Place

    Hi BAPI gurus,
    I can create a MIRO document using BAPI BAPI_INCOMINGINVOICE_CREATE,
    but I want to create it along with the values for Business Place and Section on the Basic data tab of transaction MIRO.
    I do not understand which fields in the BAPI header are used to populate Business Place and Section.
    How can I create the MIRO document while also passing values for Business Place and Section?
    Helpful answers will be rewarded.

    Hi Friend,
    Try passing the value in field BUSINESS_PLACE of structure BAPI_INCINV_CREATE_HEADER to that BAPI.
    Hope it will work.
    Regards
    Krishnendu
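
    A tiny hedged sketch of the suggestion above, filling BUSINESS_PLACE in the header structure; the values are placeholders, and the field for Section should be verified in the definition of BAPI_INCINV_CREATE_HEADER for your release rather than guessed here.

    * Hedged sketch: passing Business Place in the BAPI header.
    DATA: ls_header TYPE bapi_incinv_create_header.

    ls_header-comp_code      = '1000'.   "placeholder company code
    ls_header-business_place = '1000'.   "Business Place from the MIRO basic data tab
    * For Section, check the fields of BAPI_INCINV_CREATE_HEADER (SE11) in
    * your release; if no suitable field exists, a customer enhancement may
    * be needed.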

  • Capture Process Error

    Hi,
    We are working on Oracle 9i bi-directional Streams replication. After setup, and a sufficient amount of testing on our side, we are facing a fatal error in the
    capture process in one of the databases. Both database servers have similar setup parameters and similar hardware, and almost everything is the same, but we are facing this error on only one of them.
    The error is :
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1620.trc
    Thu Apr 03 15:42:53 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1620, image: ORACLE.EXE (CP01)
    *** 2003-04-03 15:42:53.000
    *** SESSION ID:(21.548) 2003-04-03 15:42:53.000
    TLCR process death detected. Shutting down TLCR
    error 1280 in STREAMS process
    ORA-01280: Fatal LogMiner Error.
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-01280: Fatal LogMiner Error.
    Dump file e:\oracle\admin\repf\udump\repf_cp01_1904.trc
    Tue Apr 01 18:44:27 2003
    ORACLE V9.2.0.2.1 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.2.1 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.2.0 - Production
    Windows 2000 Version 5.0 Service Pack 2, CPU type 586
    Instance name: repf
    Redo thread mounted by this instance: 1
    Oracle process number: 19
    Windows thread id: 1904, image: ORACLE.EXE (CP01)
    *** 2003-04-01 18:44:27.000
    *** SESSION ID:(18.7) 2003-04-01 18:44:27.000
    error 604 in STREAMS process
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    OPIRIP: Uncaught error 1089. Error stack:
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01423: error encountered while checking for extra rows in exact fetch
    ORA-01089: immediate shutdown in progress - no operations are permitted
    ORA-06512: at "SYS.LOGMNR_DICT_CACHE", line 1600
    ORA-06512: at "SYS.LOGMNR_GTLO3", line 33
    ORA-06512: at line 1
    Thanx,
    Kamlesh Chaudhary

    When configuring a Streams environment you don't have to specify the LogMiner tablespace, so I did not specify it manually when I was setting up my capture process, and I did not change it later.
    Prior to the ORA-01280 fatal LogMiner error I have the following errors:
    ORA-00353: log corruption near block string change string time string
    ORA-00354: corrupt redo log block header
    I've checked the hard drive, and it is correct.
    Any suggestions?

  • Capture process status waiting for Dictionary Redo: first scn....

    Hi,
    I am facing an issue in Oracle Streams. The message below is shown in the capture state:
    waiting for Dictionary Redo: first scn 777777777 (e.g.)
    Archive_log_dest=USE_DB_RECOVERY_FILE_DEST
    I have a space-related issue, so I restored the archive logs to another partition, e.g. /opt/arc_log.
    What should I do:
    1) make the DB start reading archive logs from the above location, or
    2) move some archive logs from /opt/arc_log back to USE_DB_RECOVERY_FILE_DEST so the DB starts processing?
    Regards

    Hi -
    Bad news.
    As per note 418755.1
    A. Confirm checkpoint retention. Periodically, the mining process checkpoints itself for quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The capture parameter, checkpoint_retention_time, controls the amount of checkpoint data retained by moving the FIRST_SCN of the capture process forward. The FIRST_SCN is the lowest possible scn available for capturing changes. When the checkpoint_retention_time is exceeded (default = 60 days), the FIRST_SCN is moved and the Streams metadata tables previous to this scn (FIRST_SCN) can be purged and space in the SYSAUX tablespace reclaimed. To alter the checkpoint_retention_time, use the DBMS_CAPTURE_ADM.ALTER_CAPTURE procedure.
    Check if the archived redo log file it is requesting is about 60 days old. You need all archived redo logs from the requested logfile onwards; if any are missing then you are out of luck. It does not matter that they have already been mined and captured; capture still needs these files for a restart. It has always been like this and IMHO it is a significant limitation of Streams.
    If you cannot recover the logfiles, then you will need to rebuild the capture process and ensure that any gap in captured data is resynced manually, using tags to fix the data.
    Rgds
    Mark Teehan
    Singapore

  • Create multiple capture processes for same table depending on column value

    Hi,
    is it possible to create multiple realtime downstream capture processes to capture changes for the same table depending on column value?
    Prakash

    I found it: by using subset rules.
    Prakash

  • Rman-08137 can't delete archivelog because the capture process need it

    When I use the RMAN utility to delete old archive logs on the server, it shows: RMAN-08137: can't delete archivelog because the capture process needs it. How do I resolve the problem?

    It is likely that the "extract" process still requires those archive logs, as it is monitoring transactions that have not yet been "captured" and written out to a GoldenGate trail.
    Consider the case of doing the following: ggsci> add extract foo, tranlog, begin now
    After pressing "return" on that "add extract" command, any new transactions will be monitored by GoldenGate. Even if you never start extract foo, the GoldenGate + rman integration will keep those logs around. Note that this GG+rman integration is a relatively new feature, as of GG 11.1.1.1 => if "add extract foo" prints out "extract is registered", then you have this functionality.
    Another common "problem" is deleting "extract foo", but forgetting to "unregister" it. For example, to properly "delete" a registered "extract", one has to run "dblogin" first:
    ggsci> dblogin userid <userid> password <password>
    ggsci> delete extract foo
    However, if you just do the following, the extract is deleted, but not unregistered. Only a warning is printed.
    ggsci> delete extract foo
    <warning: to unregister, run the command "unregister...">
    So then one just has to follow the instructions in the warning:
    ggsci> dblogin ...
    ggsci> unregister extract foo logretention
    But what if you didn't know the name of the old extracts, or were not even aware if there were any existing registered extracts? You can run the following to find out if any exist:
    sqlplus> select count(*) from dba_capture;
    The actual extract name is not exactly available, but it can be inferred:
    sqlplus> select capture_name, capture_user from dba_capture;
    <blockquote>
    CAPTURE_NAME CAPTURE_USER
    ================ ==================
    OGG$_EORADF4026B1 GGS
    </blockquote>
    In the above case, my actual "capture" process was called "eora". All OGG processes will be prefixed by OGG in the "capture_name" field.
    Btw, you can disable this "logretention" feature by adding in a tranlog option in the param file,
    TRANLOGOPTIONS LOGRETENTION DISABLED
    Or just manually "unregister" the extract. (Not doing a "dblogin" before "add extract" should also work in theory... but it doesn't. The extract is still registered after startup. Not sure if that's a bug or a feature.)
    Cheers,
    -Michael

  • To provide financial data with segment info for Statutory consolidation

    Hello.
    I'd like to know which is the better solution for providing company P/L and B/S data with segment info for statutory consolidation purposes, e.g. in SEM-BCS or BO Finance.
    (Segment info means something like a business division used for disclosure.)
    Sometimes PCA accounting info is used and rolled up into FI-SL.
    (FI posts the profit center directly or allocates it into segments for consolidation.)
    Do you have any other way to provide financial data with segments from FI?
    Is New G/L a good solution?
    Please kindly help.
    Thanks,
    SEM_BW

    Hi
    New G/L and document splitting were developed to provide a solution for segment reporting, both to meet local legal requirements and to meet company aspirations. You can create any number of segments for any number of business requirements.
    Cheers
    Srinivas

  • Can I configure capture process of streams on physical standby

    We have a high-volume OLTP system with a Data Guard setup including physical and logical standbys. We are planning to build a data warehouse and to set up Streams to feed data into it. Instead of taxing the primary, I was wondering if I could set up the physical standby as the source for my Streams (basically configure the capture process on the physical standby).
    Appreciate your help in advance!
    Thanks

    Thanks for the reply, Tekicora.
    This means that on the primary I will have another destination that I send the archives to (in addition to the physical standby), and that will be my source database for Streams, where I can configure the capture process. If this understanding is right, then I have the following questions:
    1) Can I use a cascaded standby to relieve my primary of having another log destination, and use that database as the source?
    2) Do you know if a physical standby can be used as the source in 11g? We are planning to move to 11g soon.
    Thanks
    Smitha
