Oracle Interfaces

Hi All,
What are Oracle interfaces, and how are these interfaces used in Oracle Apps? And where can I find the documentation for them?
Please Help,
SHD

Interfaces are used to push/pull data to/from an Oracle Apps instance.
For 11i, all interfaces are documented at http://irep.oracle.com
For R12.0.x, use the "Integration Repository" responsibility in your instance. For R12.1.x, use "Integrated SOA Gateway" responsibility in your instance.
Oracle Integration Repository Documentation Resources Release 12 (Doc ID 396116.1)
HTH
Srini
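The general inbound pattern is the same across modules: load the open interface table, then run the module's standard import concurrent program, which validates the rows and writes any errors back to the interface table. Below is a minimal PL/SQL sketch of that pattern; every XX_* name is hypothetical, only FND_GLOBAL.APPS_INITIALIZE and FND_REQUEST.SUBMIT_REQUEST are standard routines, and you must check the actual program short name, application, and arguments for the interface you are loading.

-- Minimal sketch of the inbound open-interface pattern; all XX_* names are illustrative.
DECLARE
  l_request_id NUMBER;
BEGIN
  -- Push validated rows from a custom staging table into the open interface table.
  INSERT INTO xx_target_interface (col1, col2, status)
  SELECT stg.col1, stg.col2, 'NEW'
    FROM xx_stg_lines stg
   WHERE stg.validated_flag = 'Y';
  COMMIT;

  -- Establish an Applications context, then submit the standard import program.
  fnd_global.apps_initialize(user_id => 1318, resp_id => 50720, resp_appl_id => 200);  -- illustrative ids
  l_request_id := fnd_request.submit_request(
                    application => 'XXCUST',         -- hypothetical application short name
                    program     => 'XX_IMPORT_PRG',  -- hypothetical concurrent program short name
                    description => NULL,
                    start_time  => NULL,
                    sub_request => FALSE);
  COMMIT;
  dbms_output.put_line('Submitted request ' || l_request_id);
END;
/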

Similar Messages

  • What are the common APIs we use in oracle interface

    What are the common APIs we use in Oracle interfaces, and are there any APIs for validating data during transfer to the interface tables?
    How do I find APIs in Oracle Applications for individual modules?

    For 11i, all public APIs are listed at http://irep.oracle.com
    For R12, use the "Integration Repository" responsibility in your R12 instance to list the APIs available in that instance
    HTH
    Srini
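    If you just want a quick inventory of candidate API packages in your own instance (as a complement to iRep / the Integration Repository), a data-dictionary query can help. This is a minimal sketch, assuming the APIs are exposed as PL/SQL packages owned by APPS and follow the usual _PUB / _PKG naming; change the module prefix (AP here) as needed:
    -- List candidate public API packages and their entry points for one module.
    SELECT object_name, procedure_name
      FROM all_procedures
     WHERE owner = 'APPS'
       AND (object_name LIKE 'AP\_%\_PUB' ESCAPE '\'
            OR object_name LIKE 'AP\_%\_PKG' ESCAPE '\')
     ORDER BY object_name, procedure_name;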

  • Oracle Interface developer  @ Sterling Heights, MI

    Title: ETP - Oracle Interface developer
    Comments : US Citizen Only
    My name is RK and I'm a Sr. Technical Resource Manager at Charter Global Inc., and we are looking for an ETP - Oracle Interface developer.
    Please respond to me if you are qualified, available, interested, or planning to make a change.
    Responsibilities include:
    * Develop Interfaces using PL/SQL, APIs, and Open Interfaces.
    * Provide on-going technical support for the Oracle 11i especially in the Interfaces area.
    * Troubleshoot user problems via a thorough understanding of Oracle Apps setups, desktop folder customization, System Administrator tasks, application functionality, and back-end tables.
    * Maintain effective communication with both diverse client and CSC teams; work closely with clients to resolve problems, answer questions, and provide solutions.
    * Identify and understand business requirements and use AIM template to document technical designs.
    * Participate in all phases of the Software Development Life Cycle
    * Facilitate collaboration, knowledge transfer and process improvement.
    * US Citizen required.
    * Sterling Heights, MI - Remote okay (If working remotely periodic on-site client visits required.)
    Basic Qualifications:
    Minimum two years' experience required in all of the following:
    Min 2 yrs experience developing PL/SQL code
    Min 2 yrs experience developing interfaces to Oracle packaged applications
    Min 2 yrs experience working on at least two (2) large Oracle Applications implementation projects
    Min 2 yrs experience using the Oracle AIM methodology & toolkit
    Min 2 yrs experience with an ETL tool helpful (e.g., SmartDB)
    Remote Comments : May work remotely; if working remotely periodic on-site client visits required.
    Work Location : Sterling Heights, MI - REMOTE okay
    Email: [email protected]
    RK Kandadai
    Sr Technical Resource Manager
    Charter Global, Inc.
    (866) 570-1818 x 331
    Fax: 770-206-2327
    www.charterglobal.com

    Hi,
    From your post, it seems that you need to upload the results of the query to Oracle Support.
    You can spool the output to a file and upload it.
    Something like...
    SQL> spool output.txt
    SQL> set lines 200
    SQL> set pages 200
    -- execute the queries
    SQL> spool off
    I don't know about SQL Developer but tools such as PL/SQL Developer and Toad will allow you to export the query results directly to MS Excel.
    Regards,
    Sujoy

  • "quoted string not properly terminated" error in File to Oracle interface

    We have an interface at our site that is a simple file to Oracle interface. We used the sqlldr LKM and the SQL Control Append IKM. The interface bombs out when one of the unmapped Oracle fields has this for a string literal in it: '--A'. It's a size 3 varchar field in Oracle. We can put in other literals fine in it, but using the '--A' one brings back this error in the Insert portion of the interface:
    1756 : 42000 : java.sql.SQLException: ORA-01756: quoted string not properly terminated
    java.sql.SQLException: ORA-01756: quoted string not properly terminated
    I ran the SQL query that ODI bombs out on, and the record inserts fine inside SQL Developer.
    Anybody else experience this and if so what was the solution to get past this?
    We're using ODI 10.1.3.5.5.

    Hi A,
    I tried this but it didn't work. I am puzzled as to why OBIEE prints any special character after a % twice. For example, %& becomes %&& and %' becomes %''. I guess it is the Evaluate function that is fiddling with the %.
    Thanks.
    Edited by: 900740 on Feb 9, 2012 9:22 AM

  • ACU COBOL and ORACLE INTERFACE

    Product: PRECOMPILERS
    Date written: 1997-06-25
    ACU COBOL and ORACLE INTERFACE
    ============================
    Machine: Sequent, HP, Pyramid, Ticom, etc.
    o Development
    To build the module that interfaces ACUCOBOL with ORACLE 7, first register
    the 31 functions in "direct.c".
    The Makefile is "proc.mk"; use it to build the runtime for the interface.
    The interface module has been renamed "rtsora" to distinguish it, but
    leaving it as "runcbl" is also fine.
    o Testing
    Testing was done with "procob/demo" in the Oracle directory.
    Eight test programs were compiled, and they are executed with the "rtsora" runtime.
    Pre-compile, compile, and execution sequence:
    1) Pre-compile:
    procob iname=samplecob.pco oname=samplecob.cob ireclen=132 oreclen=132
    select_error=no
    or: make -f procob.mk samplecob
    2) Compile: the "-da4" option is used to align the sizes of the data being
    interfaced.
    The other options can be used as they are.
    ccbl -da4 -o samplecob.obj samplecob.cob
    3) Execution
    rtsora samplecob.obj
    cf.
    * The Direct.C portion was modified to support the 31 Oracle functions used
    by applications that communicate with the Oracle RDBMS.
    A file named Makefile was created and used separately from the procob.mk makefile.
    * All of the archives provided by Oracle's PRO*COBOL or PRO*C are needed to
    build the new runtime.
    * When compiling source that was pre-compiled with Acucobol's ccbl, note that
    because the pre-compiler is currently set up for MF-COBOL, syntax errors may
    occur at compile time. However, these are resolved with simple source editing.
    Items to be corrected:
    1) In ACCEPT statements, a variable declared as COMP is accepted.
    2) In the pre-compiled COBOL source, 01-level data items are placed in
    area B rather than area A.
    3) The GIVING phrase is not supported in the Procedure Division header.
    Syntax of the Acucobol-85 Procedure Division header:
    Procedure Division
    [ {USING } {parameter1 . . . . } .
    {CHAINING }
    * Makefile used to build the interface module "rtsora"
    1) Contents of the Makefile ( Makefile )
    # Makefile to create new version of "rtsora" based on
    # changes to "sub.c"
    CFLAGS= -O
    LDFLAGS= -I. -O -s
    SUBS=sub.o filetbl.o
    LIBS=-L/ORACLE_HOME $ORACLE_HOME/lib/libsql.a \
    $ORACLE_HOME/lib/osntab.o $ORACLE_HOME/lib/libsqlnet.a \
    $ORACLE_HOME/lib/libora.a -lnls6 -lcv6 -lsqlnet \
    $ORACLE_HOME/lib/libora.a -lcore \
    `cat $ORACLE_HOME/rdbms/lib/sysliblist `
    rtsora: $(SUBS)
    cc $(LDFLAGS) -o rtsora $(SUBS) runcbl.a $(LIBS)
    sub.o: sub.c sub85.c config85.c direct.c
    cc $(CFLAGS) -c sub.c
    # The options and archive files added to LIBS may differ slightly by platform.
    For the ORACLE archives and options added to LIBS above, refer to the contents
    of the procob.mk or proc.mk file.
    /* DIRECT.C for the ORACLE 7 & ACUCOBOL interface */
    struct EXTRNTABLE WNEAR EXTDATA[] = {
    { NULL,              NULL } };
    /* Register the ORACLE functions */
    #define VOID void
    #include "$ORACLE_HOME/proc/lib/SQLDA.H"
    typedef unsigned int size_t;
    extern VOID sqlab1();
    extern VOID sqladr();
    extern VOID sqlad1();
    extern SQLDA *sqlald();
    extern VOID sqlbs1();
    extern VOID sqlbex();
    extern VOID sqlcls();
    extern VOID sqlcom();
    extern VOID sqlexe();
    extern VOID sqlfcc();
    extern VOID sqlfch();
    extern VOID sqlgb1();
    extern VOID sqlgd1();
    extern VOID sqllo1();
    extern VOID sqllda();
    extern size_t sqllen();
    extern VOID sqloca();
    extern VOID sqlopn();
    extern VOID sqlora();
    extern VOID sqlos1();
    extern VOID sqlosq();
    extern VOID sqlpcs();
    extern VOID sqlrol();
    extern VOID sqlsca();
    extern VOID sqlscc();
    extern VOID sqlsch();
    extern VOID sqlsqs();
    extern VOID sqltfl();
    extern VOID sqltoc();
    extern VOID sqlwnr();
    extern VOID sqlgri();
    struct DIRECTTABLE LIBDIRECT[] = {
    { "SQLAB1",FUNC sqlab1,C_void },
    { "SQLADR",FUNC sqladr,C_void },
    { "SQLAD1",FUNC sqlad1,C_void },
    { "SQLALD",FUNC sqlald,C_pointer },
    { "SQLBS1",FUNC sqlbs1,C_void },
    { "SQLBEX",FUNC sqlbex,C_void },
    { "SQLCLS",FUNC sqlcls,C_void },
    { "SQLCOM",FUNC sqlcom,C_void },
    { "SQLEXE",FUNC sqlexe,C_void },
    { "SQLFCC",FUNC sqlfcc,C_void },
    { "SQLFCH",FUNC sqlfch,C_void },
    { "SQLGB1",FUNC sqlgb1,C_void },
    { "SQLGD1",FUNC sqlgd1,C_void },
    { "SQLLO1",FUNC sqllo1,C_void },
    { "SQLLDA",FUNC sqllda,C_void },
    { "SQLLEN",FUNC sqllen,C_unsigned },
    { "SQLOCA",FUNC sqloca,C_void },
    { "SQLOPN",FUNC sqlopn,C_void },
    { "SQLORA",FUNC sqlora,C_void },
    { "SQLOS1",FUNC sqlos1,C_void },
    { "SQLOSQ",FUNC sqlosq,C_void },
    { "SQLPCS",FUNC sqlpcs,C_void },
    { "SQLROL",FUNC sqlrol,C_void },
    { "SQLSCA",FUNC sqlsca,C_void },
    { "SQLSCC",FUNC sqlscc,C_void },
    { "SQLSCH",FUNC sqlsch,C_void },
    { "SQLSQS",FUNC sqlsqs,C_void },
    { "SQLTFL",FUNC sqltfl,C_void },
    { "SQLTOC",FUNC sqltoc,C_void },
    { "SQLWNR",FUNC sqlwnr,C_void },
    { "SQLGRI",FUNC sqlgri,C_void },
    { NULL, NULL, 0 } };
    /* */

    There are standard interface programs between the standard modules - AP to GL, AR to GL, and the like. We write our own when we need the transferred data customized, or if data is being transferred from a non-Oracle sub-ledger into Oracle.

  • Error while executing File to Oracle interface.

    Hello All,
    I am just getting started with ODI. I have created an interface using File & Oracle Technology data servers. I am using LKM File to Oracle (SQLLDR) and IKM Oracle Incremental Update (MERGE) with CKM for Oracle.
    The interface is a simple one-to-one mapping where data is loaded into one table. I am getting an error at the step "Call sqlldr via jython". Pasting the description ----
    import os
    if os.system(r"sqlldr control=E:/Repository/Work/TestDataStore.ctl log=E:/Repository/Work/TestDataStore.log userid=system/<@=snpRef.getInfo("DEST_PASS") @>@ORCL > E:/Repository/Work/TestDataStore.out") <> 0 :
         raise "OS command has signalled errors"
    Pasting the error ----
    org.apache.bsf.BSFException: exception from Jython: Traceback (innermost last):
    File "<string>", line 3, in ?
    OS command has signalled errors
         at org.apache.bsf.engines.jython.JythonEngine.exec(Unknown Source)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    I have no clue as to what is wrong. Can anybody help me out here, please?
    Thanks,
    Dipal

    Cezar,
    I was able to solve the issue. Apparently, the bigger columns didn't matter since I was not inserting data from that column. I already had SQL*Loader installed. The issue was that the ORACLE_HOME for the database was not the first one in the list.
    These are the steps I went through for troubleshooting:
    - Ran Jython.bat from bin
    - Executed the code from "Call sqlldr via jython", replacing <@=snpRef.getInfo("DEST_PASS") @>
    - Got error - "The procedure entry point snlinAddrLocalhost could not be located in the dynamic link library oranl10.dll"
    Hope this helps anybody facing a similar issue.
    Thanks for your help.
    Dipal

  • R3 to Oracle interface

    Hi,
    To create a B2B scenario, R3 --> XI --> Oracle Database, what adapters are used at the sender end and at the receiver end?
    Is it right to use BAPI/IDoc at the sender end (for R3) and JDBC at the receiver end?
    But B2B scenarios go through firewalls. So, do the IDoc/RFC and JDBC adapters have security configurations?
    If not, what adapters can be used?
    Thanks in Advance, I will assign points.

    Hi San,
    The SAP Help page below will give you a good idea of the receiver JDBC adapter when you are designing an interface between SAP R/3 and Oracle:
    http://help.sap.com/saphelp_nw04/helpdata/en/2e/96fd3f2d14e869e10000000a155106/content.htm.
    Here are a few weblogs that will give you an idea:
    /people/laxman.molugu/blog/2006/08/13/integration-with-databases-made-easy-150-part-1
    /people/sriram.vasudevan3/blog/2005/02/14/calling-stored-procs-in-maxdb-using-sap-xi
    /people/alessandro.berta/blog/2005/10/04/save-time-with-generalized-jdbc-datatypes
    /people/thorsten.nordholmsbirk/blog/2006/07/25/structuring-integration-repository-content--part-1-software-component-versions
    Hope this info helps you.
    Regards,
    Hari

  • XML to Oracle interface has same insert count regardless of XML input file

    Not sure how to get around this one... I have an interface that simply transfers the contents of an XML file to an Oracle table. I keep getting the same insert count and the same records regardless of which XML file I'm using.
    I was using a properties file with this, but then just decided to use memory to process the file. So, in the physical architecture of the XML topology object, I just specify the file name, xsd and schema.
    Any thoughts on what's happening/what to check?

    Hi,
    I'm having a similar problem.
    I've set up an XML topology using the following JDBC driver and JDBC URL:
    com.sunopsis.jdbc.driver.xml.SnpsXmlDriver
    jdbc:snps:xml?f=#XML_Import_Test.CurrFileName&ro=true&dod=true&lf=c:/odilog.log&ll=255
    Then in the designer I've created a Package with the following steps:
    1) Declare Step for the CurrFileName variable
    2) A Set Variable Step for the variable TempFileName. Here I place the value of the file that I would like to process
    3) A refresh Step for the CurrFileName variable with the following refresh code: select '#TempFileName' from dual
    4) An Interface that maps the XML file that I would like to import to an Oracle DB table
    Now, when I run the package from the Designer, I have to hit the execute button twice to get the data into the DB. The first time I execute the interface, the process uses the previously processed file.
    Any ideas on what could be the problem? what I'm doing wrong?
    Thanks a lot for all the help
    Ben

  • Interfacing to Oracle GL

    What are the different options to import GL entries into Oracle GL from non-Oracle applications? During my research I came across the Journal Import function, which uses the GL_INTERFACE table. Is this the only available option?
    Thanks

    If you don't want to corrupt your database, you have to use the Oracle interface tables. Even ADI uses those tables. In the GL module you have GL_INTERFACE and GL_BUDGET_INTERFACE. In AP you have AP_INVOICES_INTERFACE and AP_INVOICE_LINES_INTERFACE. To learn more about the interfaces, read the Oracle Financials Open Interfaces manual.
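    To make the Journal Import route concrete, here is a minimal sketch of loading one balanced pair of journal lines into GL_INTERFACE before running Journal Import. It assumes a standard 11i GL_INTERFACE layout and a five-segment accounting flexfield; the set of books id, category, source, and segment values are illustrative only:
    -- Debit line (values are illustrative); Journal Import picks up rows with STATUS = 'NEW'.
    INSERT INTO gl_interface
           (status, set_of_books_id, accounting_date, currency_code,
            date_created, created_by, actual_flag,
            user_je_category_name, user_je_source_name,
            segment1, segment2, segment3, segment4, segment5,
            entered_dr, entered_cr, reference1, reference10)
    VALUES ('NEW', 1, SYSDATE, 'USD',
            SYSDATE, 1, 'A',
            'Adjustment', 'Spreadsheet',
            '01', '000', '7730', '0000', '000',
            100, NULL, 'LEGACY-BATCH-1', 'Legacy conversion debit');
    -- Matching credit line so the journal balances.
    INSERT INTO gl_interface
           (status, set_of_books_id, accounting_date, currency_code,
            date_created, created_by, actual_flag,
            user_je_category_name, user_je_source_name,
            segment1, segment2, segment3, segment4, segment5,
            entered_dr, entered_cr, reference1, reference10)
    VALUES ('NEW', 1, SYSDATE, 'USD',
            SYSDATE, 1, 'A',
            'Adjustment', 'Spreadsheet',
            '01', '000', '2990', '0000', '000',
            NULL, 100, 'LEGACY-BATCH-1', 'Legacy conversion credit');
    COMMIT;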

  • Linux program interface to oracle

    I have Perl scripts that I will be modifying to use an Oracle interface. I don't care about having a user interface client, just something that will allow my scripts to connect automatically. What is the absolute minimum I need, in terms of software downloads, to accomplish this?

    Thank you for your responses, and sorry for the delay getting back. It seems I wasn't quite as clear as I'd hoped...
    I have done this before (connecting a Perl script to an Oracle server), but it has been quite some time and I no longer have access to that server; also, at the time I had to download an entire Oracle server for Linux in order to get the Oracle client software that allowed me to connect to the server (some 2.2 GB of download/installation).
    Thanks for the reminder about DBI and DBD::Oracle, but what I was looking for (and didn't express clearly) is the absolute minimum Oracle software required to allow my Perl scripts to connect to an existing Oracle server, whether that is Oracle Instant Client (which seems to be massive overkill at some 33 MB) or something else I haven't run across yet... or does DBD::Oracle now include complete authentication/connectivity functionality, no longer requiring a download of any Oracle software?

  • MaxDB backup through Backint on Oracle for TSM

    Hello Gurus:
    We are using TSM backint interface for Oracle database. We are in the process of implementing SCM. We would like to schedule MaxDB backup through backint interface for Oracle.
    I checked some forums and configured it accordingly, but backint is not recognizing the MaxDB parameter file. Please see the parameter file and error logs below.
    *BSI File:*
    BACKINT D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe
    INPUT D:\sapdb\data\wrk\backint\sapdb.in
    OUTPUT D:\sapdb\data\wrk\backint\sapdb.out
    ERROROUTPUT D:\sapdb\data\wrk\backint\sapdb.err
    PARAMETERFILE D:\sapdb\data\wrk\LCD\maxdb_config.par
    ORIGINAL_RUNDIRECTORY L:\sapdb\LCD\sapdata
    *Parameter File - MaxDB_config*
    STAGING AREA:     D:\TEMP\STAGE1 1024000 KB
    FILES PER BACKINT CALL:     2
    BACKINT:     D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe
    PARAMETERFILE OF BACKINT:     D:\oracle\SCD\102\database\initSCD.utl
    HISTORY FILE:     D:\sapdb\data\wrk\BackintHistory
    INPUTFILE OF BACKINT:     D:\sapdb\data\wrk\backint\backint.in
    OUTPUTFILE OF BACKINT:     D:\sapdb\data\wrk\backint\backint.out
    ERRORFILE OF BACKINT:     D:\sapdb\data\wrk\backint\backint.err
    MAXIMAL DELAY OF BACKINT CALL:     30
    Error Logs:
    2010-03-30 17:38:35
    Using environment variable 'TEMP' with value 'C:\Windows\TEMP' as directory for temporary files and pipes.
    Using connection to Backint for MaxDB Interface.
    2010-03-30 17:38:35
    Checking existence and configuration of Backint for MaxDB.
        Using environment variable 'BSI_ENV' with value 'D:\sapdb\data\wrk\LCD\bsi.env' as path of the configuration file of Backint for MaxDB.
        Reading the Backint for MaxDB configuration file 'D:\sapdb\data\wrk\LCD\bsi.env'.
            Found keyword 'BACKINT' with value 'D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe'.
            Found keyword 'INPUT' with value 'D:\sapdb\data\wrk\backint\sapdb.in'.
            Found keyword 'OUTPUT' with value 'D:\sapdb\data\wrk\backint\sapdb.out'.
            Found keyword 'ERROROUTPUT' with value 'D:\sapdb\data\wrk\backint\sapdb.err'.
            Found keyword 'PARAMETERFILE' with value 'D:\sapdb\data\wrk\LCD\maxdb_config.par'.
            Found keyword 'ORIGINAL_RUNDIRECTORY' with value 'L:\sapdb\LCD\sapdata'.
        Finished reading of the Backint for MaxDB configuration file.
        Using 'D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe' as Backint for MaxDB program.
        Using 'D:\sapdb\data\wrk\backint\sapdb.in' as input file for Backint for MaxDB.
        Using 'D:\sapdb\data\wrk\backint\sapdb.out' as output file for Backint for MaxDB.
        Using 'D:\sapdb\data\wrk\backint\sapdb.err' as error output file for Backint for MaxDB.
        Using 'D:\sapdb\data\wrk\LCD\maxdb_config.par' as parameter file for Backint for MaxDB.
        Using '300' seconds as timeout for Backint for MaxDB in the case of success.
        Using '300' seconds as timeout for Backint for MaxDB in the case of failure.
        Using 'L:\sapdb\LCD\sapdata\dbm.knl' as backup history of a database to migrate.
        Using 'L:\sapdb\LCD\sapdata\dbm.ebf' as external backup history of a database to migrate.
        Checking availability of backups using backint's inquire function.
    Check passed successful.
    2010-03-30 17:38:35
    Checking medium.
    Check passed successfully.
    2010-03-30 17:38:35
    Preparing backup.
        Setting environment variable 'BI_CALLER' to value 'DBMSRV'.
        Setting environment variable 'BI_REQUEST' to value 'NEW'.
        Setting environment variable 'BI_BACKUP' to value 'FULL'.
        Constructed Backint for MaxDB call 'D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe -u LCD -f backup -t file -p D:\sapdb\data\wrk\LCD\maxdb_config.par -i D:\sapdb\data\wrk\backint\sapdb.in -c'.
        Created temporary file 'D:\sapdb\data\wrk\backint\sapdb.out' as output for Backint for MaxDB.
        Created temporary file 'D:\sapdb\data\wrk\backint\sapdb.err' as error output for Backint for MaxDB.
        Writing '
    .\pipe\BACKscd #PIPE' to the input file.
    Prepare passed successfully.
    2010-03-30 17:38:35
    Starting database action for the backup.
        Requesting 'SAVE DATA QUICK TO '
    .\pipe\BACKscd' PIPE BLOCKSIZE 8 NO CHECKPOINT MEDIANAME 'Back123'' from db-kernel.
    The database is working on the request.
    2010-03-30 17:38:35
    Waiting until database has prepared the backup.
        Asking for state of database.
        2010-03-30 17:38:35 Database is still preparing the backup.
        Waiting 1 second ... Done.
        Asking for state of database.
        2010-03-30 17:38:36 Database has finished preparation of the backup.
    The database has prepared the backup successfully.
    2010-03-30 17:38:36
    Starting Backint for MaxDB.
        Starting Backint for MaxDB process 'D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe -u LCD -f backup -t file -p D:\sapdb\data\wrk\LCD\maxdb_config.par -i D:\sapdb\data\wrk\backint\sapdb.in -c >>D:\sapdb\data\wrk\backint\sapdb.out 2>>D:\sapdb\data\wrk\backint\sapdb.err'.
        Process was started successfully.
    Backint for MaxDB has been started successfully.
    2010-03-30 17:38:36
    Waiting for end of the backup operation.
        2010-03-30 17:38:36 The backup tool process has finished work with return code 2.
        2010-03-30 17:38:36 The backup tool is not running.
        2010-03-30 17:38:36 The database is working on the request.
        2010-03-30 17:38:36 The database is working on the request.
        2010-03-30 17:38:41 The database is working on the request.
        2010-03-30 17:38:51 The database is working on the request.
        2010-03-30 17:39:06 The database is working on the request.
        2010-03-30 17:39:26 The database is working on the request.
        2010-03-30 17:39:37 Canceling Utility-task after a timeout of 60 seconds elapsed ... OK.
        2010-03-30 17:39:38 The database has finished work on the request.
        Receiving a reply from the database kernel.
        Got the following reply from db-kernel:
            SQL-Code              :-903
    The backup operation has ended.
    2010-03-30 17:39:38
    Filling reply buffer.
        Have encountered error -24920:
            The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903.
        Constructed the following reply:
            ERR
            -24920,ERR_BACKUPOP: backup operation was unsuccessful
            The backup tool failed with 2 as sum of exit codes. The database request was canceled and ended with error -903.
    Reply buffer filled.
    2010-03-30 17:39:38
    Cleaning up.
        Copying output of Backint for MaxDB to this file.
        **-- Begin of output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.out)--**
                                     **Data Protection for SAP(R)**
                         **Interface between BRTools and Tivoli Storage Manager***
                         **- Version 6, Release 1, Modification 0.0  for Win x64 -**
                               **Build: 358  compiled on Nov  4 2008**
                    **(c) Copyright IBM Corporation, 1996, 2008, All Rights Reserved.**
            **BKI8310E: The keyword MAXIMAL is not allowed.**
            **BKI1001E: syntax error in file 'D:\sapdb\data\wrk\LCD\maxdb_config.par'. Exiting program.**
            **BKI0020I: End of program at: 03/30/10 17:38:36 .**
            **BKI0021I: Elapsed time: 00 sec .**
            **BKI0024I: Return code is: 2.**       
    End of output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.out)----
        Removed Backint for MaxDB's temporary output file 'D:\sapdb\data\wrk\backint\sapdb.out'.
        Copying error output of Backint for MaxDB to this file.
    Begin of error output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.err)----
    End of error output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.err)----
        Removed Backint for MaxDB's temporary error output file 'D:\sapdb\data\wrk\backint\sapdb.err'.
        Removed the Backint for MaxDB input file 'D:\sapdb\data\wrk\backint\sapdb.in'.
    Have finished clean up successfully.
    Any help will be appreciated.
    Thanks,
    Miral.

    > We are using TSM backint interface for Oracle database. We are in the process of implementing SCM. We would like to schedule MaxDB backup through backint interface for Oracle.
    > *BSI File:*
    >
    > BACKINT D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe
    > *Parameter File - MaxDB_config*
    > STAGING AREA:     D:\TEMP\STAGE1 1024000 KB
    > FILES PER BACKINT CALL:     2
    > BACKINT:     D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe
    >
    >     Using 'D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe' as Backint for MaxDB program.
    > 2010-03-30 17:39:38
    > Cleaning up.
    >     Copying output of Backint for MaxDB to this file.
    >                                  **Data Protection for SAP(R)**
    >                      **Interface between BRTools and Tivoli Storage Manager***
    >                      **- Version 6, Release 1, Modification 0.0  for Win x64 -**
    >         **BKI8310E: The keyword MAXIMAL is not allowed.**
    >         **BKI1001E: syntax error in file 'D:\sapdb\data\wrk\LCD\maxdb_config.par'. Exiting program.**
    >         **BKI0020I: End of program at: 03/30/10 17:38:36 .**
    >         **BKI0021I: Elapsed time: 00 sec .**
    >         **BKI0024I: Return code is: 2.**       
    >
    >     -
    End of output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.out)----
    >     Removed Backint for MaxDB's temporary output file 'D:\sapdb\data\wrk\backint\sapdb.out'.
    >     Copying error output of Backint for MaxDB to this file.
    >     -
    Begin of error output of Backint for MaxDB (D:\sapdb\data\wrk\backint\sapdb.err)----
    @Markus: thanks for the hint with the quote-formatting option!
    Concerning the issue:
    Sorry, but you misunderstood how the general BACKINT for Oracle interface is used with MaxDB.
    See, MaxDB comes with its own BACKINT executable.
    This is an enhanced BACKINT tool that allows pipes as data input channels - which is not supported by the Backint for Oracle.
    The MaxDB Backint serves as an adapter program between the MaxDB kernel and the Backint for Oracle program.
    So instead of
    BACKINT:     D:\usr\sap\SCD\SYS\exe\uc\NTAMD64\backint.exe
    you should point it to the MaxDB-provided adapter program.
    I propose to revisit the documentation on that topic [Connecting to a Backint for Oracle Interface |http://maxdb.sap.com/doc/7_7/45/746a5712e14022e10000000a1553f6/content.htm].
    best regards,
    Lars

  • ORACLE SERVER AND UNIX TP MONITOR-2

    Product: ORACLE SERVER
    Date written: 1995-01-24
    Subject: Oracle Server and UNIX Transaction Processing Monitors-2
    Page(3/4)
    This file contains commonly asked questions about Oracle7 Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o Oracle Parallel Server and TP Monitors
         o Oracle and DCE-based TP Monitors
         o Other commonly asked questions
    The questions answered in part 3 provide additional detail to the information
    provided in part 1.
    Oracle Parallel Server and TP Monitors
    ======================================
    How does Oracle Parallel Server (OPS) work with TP Monitors?
    If you are using Oracle-managed transactions, there are no special
    considerations. But if you are using TPM-managed transactions, and
    thus need to use the XA interface, then Oracle requires release 7.1.3
    or later and a special version of the Distributed Lock Manager, called
    the session-based lock manager. This version of the DLM is not yet
    available for all platforms. To understand this restriction, let's take
    a look at one of the technical details of XA.
    The XA specification requires that the Resource Manager be able to
    move a transaction from one process to another, and even to be
    able to commit in a separate process. In Oracle, transactions are
    attached to sessions, so that means that we also have to be able to
    move sessions. Therefore, the session/transaction can't have any state
    which is tied to a particular process. The first generation distributed
    lock managers were all built to use the process id as the lock owner,
    which doesn't work for locks which need to move with the transaction.
    Oracle and DCE-based TP Monitors
    ================================
    How does Oracle interface to the Encina TP monitor? To CICS/6000? I've
    heard that they require OSF DCE facilities in order to run?
    Oracle interfaces to Encina and CICS/6000 just as it does to any other
    TP Monitor. The TP Monitor issues XA commands to control transactions, and
    Oracle executes the commands. Encina and CICS/6000 do use DCE features for
    their own operation. However, this use is transparent to the Oracle Server.
    What DCE facilities can Oracle products take advantage of when working with
    a DCE-based TP Monitor?
    The two most commonly mentioned DCE features which might be useful
    to Oracle users are multi-threading and security. We look at these in
    the subsequent questions in this section.
    Encina documentation suggests that a Resource Manager such as Oracle can
    be either single-threaded or multi-threaded? Which way is Oracle XA
    implemented?
    The Oracle XA implementation is single-threaded, as is any Oracle client.
    Within a single process, at most one thread can access Oracle at a time.
    Does that mean that only a single Encina application can access an instance
    of Oracle transactionally at any given moment?
    No. Oracle XA is only single-threaded within a single application server
    process. Multiple applications can access Oracle simultaneously using XA
    by using different application processes. Encina allows
    (1) serial reuse of a single server by different clients. There are
    two options for this. The server can use long term reservation
    but be defined to be in shared or concurrent access mode, which
    allows the server to be used by another client as soon as an RPC
    completes. Alternatively, the server can use default reservation
    and exclusive mode, which allows the server to be used by another
    client as soon as the current transaction ends.
    (2) concurrent execution by multiple servers, even if they are accessing
    the same Oracle database. These may be executing the same or different
    procedures.
    These two features should let you get as much concurrency as you need.
    Why isn't the Oracle XA library multi-threaded?
    The XA specification specifically states that its use of the phrase
    "thread of control" means a process. If an RM were to multi-thread its
    XA, it would be in violation of the specification. This restriction
    was put in place because at the time the specification was written,
    there were numerous thread packages: if the TM used one, the application
    another, and perhaps the RM yet a third, there was no way it could work.
    As threads standards settle down, the later versions of XA will probably
    relax this restriction.
    Will Oracle change if the XA specification changes?
    Very likely. The exact time frame will of course depend on the priority of
    all work items at that time.
    Does Oracle use DCE security via the TP Monitors?
    The integrity of the connection between a DCE TP Monitor client and DCE
    TP Monitor server is protected by the DCE security functionality.
    Theoretically, the TP Monitor could make the DCE-protected client security
    information available to Oracle. Unfortunately, there's no standard way
    for a TP Monitor to pass security information to a Resource
    Manager such as Oracle. Oracle is leading an effort to extend the X/Open
    model to allow use of the security information provided by the Monitor.
    In the meantime, the basic DCE security features such as encryption are
    useful within TP Monitors.
    Effective use of DCE security would normally also mean that the security of
    the TP Monitor client be passed through the TP Monitor, through the Oracle
    client (application server), to the Oracle Server, and possibly on
    to other Oracle Servers through database links. The ability to transfer
    security information to other processes, called delegation, is missing
    in DCE version 1.0. DCE version 1.1, expected to emerge in late 1994,
    has some delegation features. Oracle is examining these features to see
    how they might be used.
    Are there any special considerations for CICS/6000?
    There are two:
    (1) It is inefficient to run without XA. CICS/6000 is designed to
    use XA. It uses XA so that the CICS server can log on to Oracle
    when it starts, after which it makes that Oracle connection available
    to any transaction it executes. If you don't use XA, the CICS server
    does not itself log on to Oracle so each transaction has to log on
    and log off - a very expensive mode of operation. Also, it is very
    un-cics-like in that the application does the log{on,off} and also
    commits - in a mainframe CICS database program CICS would implicitly
    do these operations. Oracle does not recommend this mode because of the
    performance penalty.
    (2) CICS servers are generic and dynamically load application modules.
    In order for these modules to access the Oracle connection made by
    CICS, the applications must be built with a shared object version of
    the Oracle libraries. This is an installation option on platforms which
    support CICS/6000 and other products using its architecture such as
    CICS 9000.
    Other commonly asked questions
    ==============================
    What other Resource Managers can be included in an Oracle XA transaction?
    Several other relational database vendors have an XA implementation
    available or in progress. There is an XA C-ISAM product from
    Gresham Telecomputing. There are also Resource Managers contained
    within some of the TP Monitors which can be coordinated in the same
    transaction. For example, CICS/6000 has VSAM files and other data
    stores, Encina has its RQS queuing system, and Tuxedo has its /Q queuing
    system.
    What is Recoverable Queuing Service (RQS) and how does it interoperate with
    Oracle7 and Encina? What about /Q?
    Recoverable Queuing Service is a feature provided by Encina which allows
    transactional, distributed queuing (enqueue/dequeue). Tuxedo has a similar
    product called /Q. Because these products are themselves coordinated by the
    TM component of the TP Monitor, their queue operations are atomically
    coordinated with operations on XA Resource Managers such as Oracle7
    Server. That is, they can atomically put something on one of their queues
    and commit an Oracle transaction, then at some later time dequeue an
    entry atomically with doing some other Oracle transaction. The queue
    system guarantees that the message will not be lost or transmitted twice.
    Can I mix TP Monitor applications with standard Oracle7 Server applications?
    Yes, you can have existing Oracle applications connected to the database
    alongside TPM applications running against the same database. The TPM does
    not manage the whole database, just those transactions which are started
    by the TPM. The Oracle Server will properly handle concurrency control
    between the transactions managed by itself and those managed by the TPM.
    Is Oracle planning to change its tools to be more suitable for TP Monitors?
    With Oracle Procedure Builder 1.5, to be available with CDE2,
    Oracle will provide a foreign function interface that allows you to
    dynamically set up PL/SQL calls that access C functions. In other
    words, you can access C routines in Windows DLLs from within your
    PL/SQL procedures. This will allow PL/SQL under Windows easy access to
    TP Monitor APIs.
    Does Oracle7 Server itself use XA-compliant TPMs as the interface to
    foreign RMs?
    No, for this purpose Oracle Server uses the SQL*Connect products or the new
    Transparent and Procedural Gateway products.
    Does Oracle7 Server use XA to coordinate Oracle7-only distributed
    transactions?
    No, it uses an internal mechanism.
    Can database links be used with XA?
    If an Oracle7 database is running under XA, it can access other Oracle7
    databases through database links, with some restrictions. First, the
    access to the other database must use SQL*Net V2 and be running MTS.
    Second, it must currently be to another Oracle7 database. Assuming those
    restrictions, the Oracle 7 database can do distributed update to another
    Oracle 7 database by using a database link, whether it is started by an
    Oracle application or a TP Monitor application. The TPM will see Oracle
    as only a single RM, but Oracle7 will propagate all the transaction
    commands to the other database, including the two-phase commit. If
    the transaction is started by a TP Monitor application and is using XA,
    it can also update non-Oracle resources managed by the TPM. If it
    is started from an Oracle application, it can only include resources
    managed by Oracle.
    Here's a sample configuration:
    [Diagram: TPM clients connect through the TPM to a non-XA TPM server
    (note 1) and an XA TPM server. Oracle clients (Forms, Plus, Pro*, Financials,
    etc.) and the non-XA TPM server issue SQL commits to their Oracle server
    processes, while the XA TPM server issues XA commits. The Oracle server
    processes front Database 1 and Database 2, which are connected to each
    other by database links (a dblink from Database 2 to Database 1 and a
    dblink from Database 1 to Database 2).]
    Note 1: Oracle will work having both XA and non-XA servers but some TPMs
    may have restrictions on this.
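    As a rough SQL illustration of the distributed update described above (the link name, account, and connect alias are hypothetical), an application connected to database 1 can update database 2 in the same transaction, and Oracle7 coordinates the commit over the link:
    -- 'DB2' must be a SQL*Net V2 alias for database 2, which must be running MTS.
    CREATE DATABASE LINK db2_link
      CONNECT TO scott IDENTIFIED BY tiger
      USING 'DB2';

    UPDATE emp                    -- local table in database 1
       SET sal = sal * 1.05
     WHERE deptno = 10;

    UPDATE emp@db2_link           -- remote table in database 2, same transaction
       SET sal = sal * 1.05
     WHERE deptno = 10;

    COMMIT;                       -- two-phase commit propagated across the link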
    Are multiple direct connections possible from a Pro* program?
    Using XA, you can not only specify multiple direct connections to Oracle7
    databases, you can also update them both in the SAME transaction. The
    way to do this is to use a precompiler feature called a named database.
    When you use a named database, you qualify the SQL statement with the
    database name. For example, you write EXEC SQL AT dbname UPDATE emp ....
    We have a complementary feature in the xa open string to let the user
    associate the name with a particular RM instance, called the DB clause.
    You will also want to use the SqlNet clause in the open string so you
    can give the two different SIDs. This clause does not require the use of
    the SQL*Net product, it is just a naming convention. For more information,
    see Oracle7 Server for UNIX Administrator's Reference Guide.
    Some TP Monitors may not support having multiple Resource Managers in the
    same server; check with the TPM vendor.
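    A rough illustration of the named-database arrangement described above (the user, passwords, database names, and SIDs are illustrative; see the Administrator's Reference Guide for the exact open string clause syntax):
    -- Two xa_open strings, one per Resource Manager instance, tying the
    -- precompiler database name to an Oracle SID via the DB and SqlNet clauses:
    --   Oracle_XA+Acc=P/scott/tiger+SesTm=30+DB=db1+SqlNet=SID1
    --   Oracle_XA+Acc=P/scott/tiger+SesTm=30+DB=db2+SqlNet=SID2
    -- In the Pro* program each statement is then qualified with the database name:
    EXEC SQL AT db1 UPDATE emp SET sal = sal * 1.10 WHERE deptno = 10;
    EXEC SQL AT db2 UPDATE emp SET sal = sal * 1.10 WHERE deptno = 20;
    -- Both updates join the same TPM-managed transaction; the TPM, not the
    -- program, issues the commit through XA.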
    Is there any collateral available for XA or TP Monitors?
    Oracle At Work 52684.0692
    Oracle7 Server for UNIX Administrator's #A10324-1
    Reference Guide
    Guide to Oracle's Products and Services #A10560
    Oracle7 Server and CICS/6000               #A14200
    Where can I get more information on the DTP model?
    X/Open's address is
    X/Open company Ltd (Publications)
    P O Box 109
    Penn
    High Wycombe
    Bucks HP10 8NP
    Tel: +44 (0)494 813844
    Fax: +44 (0)494 814989
    Request
    G307 Distributed Transaction Processing: Reference Model Version 2
    X/Open Guide G307 ISBN 1-859120-19-9 28cm.44p.pbk.220g.11/93
    Page(4/4)
    This file contains commonly asked questions about Oracle Server and UNIX
    Transaction Processing Monitors (TPMs). The topics covered in this article are
         o Performance with Oracle Server and TP monitors
         o Performance using Oracle's XA Library
    The questions answered in part 4 provide additional detail to the information
    provided in part 1.
    Performance with Oracle Server and TP Monitors
    ==============================================
    I have heard that Transaction Processing Monitors (TPMs) will increase
    Oracle Server performance. Is this true?
    Several hardware and TPM vendors have made the claim that TPMs
    will increase RDBMS performance. This claim is based on TPC-A
    benchmarks. The key point to understand about TPC-A is that it
    requires, for every transaction-per-second, ten times that many
    users to be connected. For example, to get 600 TPS, you need 6000
    users. The next question answers in more detail how the
    three-tier architecture addresses this requirement, but first let's
    look more generally at what TP Monitors can and can't do to improve
    performance.
    TP Monitors can provide better performance:
    (1) When there are more than several hundred users connected.
         This is because of the TP Monitor's role in the three-tier
         architecture, described in the next question. In this
         architecture, terminal handling is offloaded to one or more
         separate machines, freeing up those cycles to do database work.
         Note that this does NOT mean that Oracle itself runs faster,
         just that we've given it more CPU cycles to use.
    (2) When, because of the high potential concurrency of requests,
         significant resource contention exists. Use of a TP Monitor can
         limit the degree of concurrency and thus reduce contention.
    TP Monitors can not provide better performance:
    (1) For existing applications. The applications must be designed
         to fit the TP Monitor architecture.
    (2) For applications which are highly interactive in their use of
         the database. These applications put many messages
         through the transport system, and the TP Monitor is not as
         efficient as SQL*Net for point-to-point communication.
    (3) For CPU intensive single-query decision support. When executing
         a single large command, Oracle query facilities work efficiently,
         especially with the use of Oracle Parallel Query, available in 7.1.
    How does the three-tier solution help TPC-A, or other situations with
    thousands of on-line users?
    The TPC-A test calls for a large number of users to produce a given
    result. In the high-end results we produced in June, 1992, for example,
    6150 terminals were simulated to produce 618 TPC-A transactions.
    Thus, terminal concentration accounts for a large portion of the total
    processing time used.
    First, let's look at how the Multi Threaded Server would work for
    this benchmark. In this case, there are many client processes,
    but only a few server processes, which handle client requests on a
    first-come first serve basis. When they are done with a request,
    they take another client's request.
    ORACLE7 CLIENT/SERVER ARCHITECTURE WITH MULTI THREADED SERVER
    [Diagram: N client processes connect over SQL*Net to one or more dispatcher
    processes on the server machine; the dispatchers pass requests to one or
    more shared server processes inside the Oracle7 Server. Client processes = N;
    dispatcher processes >= 1; shared server processes >= 1.]
    If there are 500 clients in this environment, there will be one or more
    dispatcher processes, dynamically tunable, and one or more shared
    server processes, dynamically tunable, on the server. The reduction
    in the total number of processes handled by the server system
    results in more processing time available for RDBMS activity. Thus
    higher RDBMS transaction throughput can be obtained on the
    server system.
    But the problem for the TPC-A, and for certain large customer
    configurations, is not only the ability of the Oracle Server to
    process transactions, but also the ability of the operating
    system to handle huge numbers of incoming connections.
    There is one incoming connection for each client. Most UNIX
    operating systems have a limit on how many such connections they can
    handle. Even if a particular operating system allows a large number of
    connections, each takes some amount of overhead to manage.
    In order to service all 6150 terminals, we selected a 3-tier hardware
    environment where the middle tier, using a TPM, acted as a terminal
    concentrator. The high-end TPC-A architecture looked like the following.
    The Application Servers, which contain the Pro*C statements used to
    perform the transaction, also run on the terminal concentrator machines
    in order to offload as much work from the database server as possible.
    They send the compiled SQL over SQL*Net to the Oracle7 Server processes.
    ORACLE7 TPC-A CLIENT/SERVER ARCHITECTURE
    [Diagram: client processes connect through TPM communication links to TP
    Monitor instances running on terminal concentrator machines; application
    server processes (containing the Pro*C transaction code) on the
    concentrators send SQL over SQL*Net to Oracle Server processes on the
    database machine, which front the Oracle7 Server. Clients = 6150; terminal
    concentrators = 17; TP Monitor instances = 17; application server
    processes = 17*8; Oracle Server processes = 17*8.]
    The TPM is the software component of the terminal concentrator. In this role
    it offloads terminal handling from the machine running Oracle Server.
    Since more than one terminal concentrator can be configured, whereas the
    database in this case had to run on a single machine, concentrator machines
    can be added until the performance of the back-end machine is optimized.
    This three-tier solution resulted in the outstanding transaction throughput
    announced with Oracle7 Server. Even with Oracle Parallel Server, it may pay
    to offload the terminal handling so that the cluster can be exclusively used
    for database operations.
    Can you summarize the performance discussion for me?
    Depending on the number of users required, different architectures may be
    used in a client/server environment to maximize performance:
    1) For a small number of users, the traditional Oracle two-task
    architecture can be used. In this case, there is a one-to-one
    correspondence between client processes and server processes. It's
    simple, straightforward, and efficient.
    2) For a large number of users, Multi Threaded Server might be a better
    approach. Although some tuning may be required, Multi Threaded Server
    can handle a relatively large number of users for each machine size
    compared to the traditional Oracle approach. Using this approach,
    customers will be able to handle many hundreds of users on many
    platforms. Furthermore, current Oracle applications can move to this
    environment without change.
    3) For a very large number of users, where transactions are simple and
    terminal input concentration is the overriding performance issue, a
    3-tier architecture incorporating a TPM may be useful. In this case,
    terminal concentration is handled by the TPM in the middle tier. As
         you might expect, it is a more complex environment requiring more
         system management. For existing Oracle customers, significant Oracle
    application modifications will be required.
    Oracle provides all of these choices.
    Performance using Oracle's XA Library
    =====================================
    Are there any performance implications to using the XA library (in other
    words, to using TPM-managed transactions)?
    (1) The XA library imposes some performance penalty. You should use
    TPM-managed transactions only if you actually need them. Even if you
    are getting the one-phase commit optimization, the code path is
    longer because we need to map back and forth between external
    formats and internal ones. Also, prior to 7.1, XA requires you
    to release all cursors at the end of a transaction, which results
    in extra parsing. Even with shared cursors, there is time spent
    looking up the one you need and re-validating it. This has been
    improved for 7.1.
    (2) If you need to use two-phase commit, this will incur additional cost
    since extra I/Os are required. If you do need 2PC, you need to account
    for that when sizing the application.
    (3) Although some TPMs allow parallel execution of services (such as Tuxedo's
    "tpacall"), this will not normally enhance performance unless different
    resource managers are being used. In fact, Oracle Server must serialize
    accesses to the same transaction by the same Oracle instance, and the
    block/resume code will in fact degrade performance in that case compared
    to running the services sequentially.

    Hello,
    The role is the same on all platforms. The Reports Server takes requests for running reports and spawns an engine that executes each request. In addition to that, the server also provides scheduling services and security features for the Reports environment.
    Regards,
    The Oracle Reports team

  • Item import to oracle inventory from legacy system

    Hi,
    We have a requirement where we need to import a very large number of items (close to 500,000) from a legacy system into Oracle Inventory.
    We are using the Oracle open interface for this. We have designed a temp table for this purpose in order to validate data before moving it to the Oracle interface tables. The data to be migrated is present in a separate database, and we need to move this data to the Oracle temp table.
    Can we use a DB link to migrate this kind of data (in terms of volume)? Also, what are the best practices that need to be followed while importing data into Oracle Inventory? Is there any other way this can be done?
    Please share your thoughts and suggestions on this
    Thanks
    AM

    Hi,
    Sometimes a DB link will fail. Please create a custom package and interface with the legacy system to periodically fetch the data into your tables. If the data volume is large, please use APIs instead of interface tables.
    Thanks.
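    To make the interface-table route concrete, here is a minimal sketch of the load step, assuming a standard 11i Item Open Interface layout; the staging table, organization, and set_process_id are illustrative. After the load, run the Import Items concurrent program for the same set_process_id:
    -- Load validated staging rows into the item interface table.
    INSERT INTO mtl_system_items_interface
           (organization_id, segment1, description,
            process_flag, transaction_type, set_process_id)
    SELECT  stg.organization_id,
            stg.item_number,
            stg.item_description,
            1,                    -- 1 = pending, picked up by Import Items
            'CREATE',
            1001                  -- batch id quoted when submitting Import Items
      FROM  xx_item_stg stg       -- hypothetical staging table
     WHERE  stg.validation_status = 'VALIDATED';
    COMMIT;
    -- If volumes allow, xx_item_stg itself can be filled across a database link,
    -- e.g. INSERT INTO xx_item_stg SELECT ... FROM legacy_items@legacy_db;
    -- otherwise extract to flat files and load with SQL*Loader.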

  • How to load the data from a staging table to interface table

    Hi..
    I have a staging table with the following columns:
    invoice_number,invoice_date,vendor_name,vendor_site_code,description,line-amount,line-description,segment1,segment2,segment3,segment4,segment5
    I want to insert data into the Oracle interface tables.
    The 1st table is ap_invoices_interface, which is the primary (header) table,
    and the 2nd is ap_invoice_lines_interface.
    For each invoice, I have to insert the sum of the line amounts into the amount column of the primary table.
    Can anyone please provide the code?
    Any help is appreciated.

    Hi,
    You need to write a PL/SQL procedure or package to validate the data and insert it.
    First you need to know what the mandatory columns are, and then write the code. I am giving a simple example here:
    CREATE OR REPLACE PROCEDURE xxstg_po_vendors_int (errbuf OUT VARCHAR2, retcode OUT NUMBER) IS
      CURSOR po_cur IS
        SELECT sno, vendor_name, summary_flag, enabled_flag
          FROM xxstg_po_vendor;
      l_summary_flag VARCHAR2(1);
      l_enabled_flag VARCHAR2(1);
      l_vendor_name  VARCHAR2(240);
      l_err_msg      VARCHAR2(240);
      l_flag         VARCHAR2(2);
      l_err_flag     VARCHAR2(2);
    BEGIN
      DELETE FROM ap_suppliers_int;
      COMMIT;
      FOR rec_cur IN po_cur LOOP
        l_flag     := 'A';
        l_err_flag := 'A';
        BEGIN
          SELECT summary_flag
            INTO l_summary_flag
            FROM po_vendors
           WHERE summary_flag = rec_cur.summary_flag;
        EXCEPTION
          WHEN OTHERS THEN
            l_summary_flag := NULL;
            l_flag         := 'E';
            l_err_msg      := 'Summary_flag Does not Exist';
        END;
        fnd_file.put_line(fnd_file.log, 'Inserting data into interface table ' || l_flag);
        BEGIN
          SELECT enabled_flag
            INTO l_enabled_flag
            FROM po_vendors
           WHERE enabled_flag = rec_cur.enabled_flag;
        EXCEPTION
          WHEN OTHERS THEN
            l_enabled_flag := NULL;
            l_flag         := 'E';
            l_err_msg      := 'Enabled_flag Does not Exist';
        END;
        fnd_file.put_line(fnd_file.log, 'Inserting data into interface table ' || l_flag);
        INSERT INTO ap_suppliers_int
               (vendor_interface_id, vendor_name, summary_flag, enabled_flag)
        VALUES (rec_cur.sno, rec_cur.vendor_name, rec_cur.summary_flag, rec_cur.enabled_flag);
        l_flag    := NULL;
        l_err_msg := NULL;
      END LOOP;
      COMMIT;
    END;
    Regards
    Goutham
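    For the original question (staging table into the Payables Open Interface), here is a minimal sketch of the header and line inserts, assuming the standard 11i AP_INVOICES_INTERFACE / AP_INVOICE_LINES_INTERFACE columns and their seeded sequences; the staging table name and the 'LEGACY' source are illustrative, and the header amount is the sum of the line amounts grouped by invoice number:
    -- Header rows: one per invoice, invoice_amount = sum of the line amounts.
    INSERT INTO ap_invoices_interface
           (invoice_id, invoice_num, invoice_date, vendor_name,
            vendor_site_code, invoice_amount, description, source)
    SELECT  ap_invoices_interface_s.NEXTVAL,
            h.invoice_number, h.invoice_date, h.vendor_name,
            h.vendor_site_code, h.invoice_amount, h.description, 'LEGACY'
      FROM (SELECT stg.invoice_number,
                   MIN(stg.invoice_date)     invoice_date,
                   MIN(stg.vendor_name)      vendor_name,
                   MIN(stg.vendor_site_code) vendor_site_code,
                   SUM(stg.line_amount)      invoice_amount,
                   MIN(stg.description)      description
              FROM xx_ap_inv_stg stg         -- hypothetical staging table
             GROUP BY stg.invoice_number) h;
    -- Line rows: join back to the headers just created and number the lines per invoice.
    INSERT INTO ap_invoice_lines_interface
           (invoice_id, invoice_line_id, line_number, line_type_lookup_code,
            amount, description, dist_code_concatenated)
    SELECT  l.invoice_id,
            ap_invoice_lines_interface_s.NEXTVAL,
            l.line_number, 'ITEM', l.line_amount, l.line_description, l.dist_code
      FROM (SELECT aii.invoice_id,
                   ROW_NUMBER() OVER (PARTITION BY stg.invoice_number
                                      ORDER BY stg.line_description) line_number,
                   stg.line_amount,
                   stg.line_description,
                   stg.segment1 || '.' || stg.segment2 || '.' || stg.segment3 || '.' ||
                   stg.segment4 || '.' || stg.segment5  dist_code
              FROM xx_ap_inv_stg stg
              JOIN ap_invoices_interface aii
                ON aii.invoice_num = stg.invoice_number
               AND aii.source      = 'LEGACY') l;
    COMMIT;
    -- After the load, run the Payables Open Interface Import concurrent program for source 'LEGACY'.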

  • Oracle forms 6i (Pentium 4, Win98) "Can not install"

    I have a "customized" Oracle interface that was created in Oracle forms 6. I have successfully used this form on Windows 95 Clients but now I must switch to a Windows 98 O/S on a Pentium 4 machine. I actually had to download a special Oracle Installer to install the 8i Client, but I now need to load a newer version of the Forms "run-time" and can not install it from the 6i forms download. Any suggestions would be more than appreciated.

    Forms 6i installation on Windows 98 should not give any trouble. I hope you
    have checked the memory. You should have 128 MB of memory to install
    Forms 6i. At runtime you may not actually require that much memory, but
    to install it you do (please see the hardware requirements for Forms 6i).
