IMP 9.0.6 Oracle into 10.2.0.1 Oracle

Hi Experts,
I have a 9.0.6.0 Oracle DB on 32-bit Windows. It is running in NOARCHIVELOG mode. I need to duplicate this database. I made a full database export (exp) successfully, without any error or warning. The dump size is 34 GB. However, I could not imp it into a 10gR1 database.
Must I install 9.0.6 on the testing server? At present I can only download 9.0.1 from OTN. Which patch number do I need to apply to upgrade it to 9.0.6.0?
Can we imp such a large (32 GB) dump file into Oracle 10gR1 directly?
Thanks
Jim

You should be able to import into 10gR1
http://download.oracle.com/docs/cd/B14117_01/server.101/b10763/expimp.htm#i262220
HTH
Srini
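For reference, a typical full export/import between those releases looks like the following (a sketch only; the credentials, connect strings, and file names are placeholders). The dump should be written by the source-side exp and read by the target-side imp:
<code>
REM On the 9.0.6 source, with the 9.0.6 exp utility:
exp system/manager@src906 FULL=Y FILE=full.dmp LOG=exp_full.log

REM Against the 10gR1 target, with the 10.1 imp utility:
imp system/manager@tgt101 FULL=Y FILE=full.dmp LOG=imp_full.log IGNORE=Y
</code>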

Similar Messages

  • How to extract Exadata .dmp files into older versions of Oracle

    Hi, Our customer has provided Exadata .dmp files (HCC compressed) and I don't have access to Oracle Exadata. Are there ways or utilities to extract these dumps into older versions of Oracle, or even Oracle Express?
    Thanks/Prasad

    Hi,
    To export from Exadata (DB 11gR2) and import into an older version of Oracle (e.g. 10g), you have to run the export with the 10g exp utility and the import with the 10g imp utility.
    For example, use an Oracle 10g client to connect to Exadata, run exp, and then run imp against the target.
    BR
    Sami
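    A hedged illustration of that approach (schema name, connect strings, and file names are placeholders; the point is only that both utilities come from the 10g client):
    <code>
    REM From a machine with an Oracle 10g client (10g exp/imp on the PATH):
    exp system/password@exadata_db OWNER=scott FILE=scott.dmp LOG=exp_scott.log
    imp system/password@target10g FROMUSER=scott TOUSER=scott FILE=scott.dmp LOG=imp_scott.log
    </code>
    Since exp reads rows through conventional SQL, HCC-compressed data is decompressed transparently during the export.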

  • Error while inserting .doc file into CLOB object in oracle

    hello everybody,
    I am trying to insert a .doc file into a CLOB column in an Oracle database. I am using Oracle 8i, but I am getting an error saying
    ORA-01461: can bind a LONG value only for insert into a LONG column
    I have no clue. I am pasting the code here; please help me out.
    regards
    darshan
    import java.io.File;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class InsertingClob {
        public static void main(String[] args) {
            // NB: the path below was line-wrapped in the original post
            File f = new File("E:\\darsowres.doc");
            int len = (int) f.length();
            System.out.println(len);
            Connection conn = null;
            PreparedStatement ps = null;
            try {
                FileReader fr = new FileReader(f);
                String FILE_INSERT_QUERY = "INSERT INTO RESUMED VALUES(?,?)";
                conn = JDBCUtility.getConnection(); // poster's own helper class
                ps = conn.prepareStatement(FILE_INSERT_QUERY);
                ps.setString(1, "1");
                // The 8i driver binds a character stream as a LONG, which
                // raises ORA-01461 when the target column is not a LONG
                ps.setCharacterStream(2, fr, len);
                int result = ps.executeUpdate();
                if (result == 1) {
                    System.out.println("file has been successfully inserted into the db");
                } else {
                    System.out.println("not inserted");
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    and the error is
    java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
    at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
    at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:582)
    at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1986)
    at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1144)
    at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2152)
    at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:2035)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2876)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:609)
    at InsertBlob.main(InsertBlob.java:21)
    java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column

    You may have one of a few errors going for you there:
    The error says your column in Oracle is a LONG, not a CLOB; if it is, then make your column a CLOB.
    CLOBs are not supported in all environments and/or all interfaces (specifically Windows ODBC has a problem).
    I believe a DOC (Windows MS-Word) file is a BLOB, due to the formatting characters in the file.
    I had a very similar error using Access when trying to do this, but changing to SAS fixed the problem. It could very well be that your version of the ODBC/JDBC drivers does not support it properly.
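    If the column really is a LONG today, one hedged way to migrate it to a CLOB on 8i is TO_LOB in a CREATE TABLE ... AS SELECT (the id and resume column names below are hypothetical; only the RESUMED table name comes from the post):
    <code>
    CREATE TABLE resumed_clob AS
      SELECT id, TO_LOB(resume) AS resume
      FROM   resumed;
    </code>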

  • Importing table dump from oracle into sybase

    I was wondering if anyone has ever imported a table dump from Oracle into Sybase. I just tried it and it didn't work. I used TOAD to create my table dump file, but when I imported it into Sybase using bcp in, I got the following error:
    cs_convert: cslib user api layer: common library error: The result is truncated because the conversion/operation resulted in overflow.
    CSLIB Message: - L0/O0/S0/N36/1/0:

    If you are looking for a basic/standard way to exchange data between Oracle and Sybase, use the CSV format.
    It is trivial to write a generic SQL*Plus script for Oracle to extract data from a table (or a SELECT) and spool this into a properly formatted CSV file.
    It is also just as easy to use Sybase Bulk Copy utility to load that CSV file into Sybase.
    There are also alternative methods. One would be to use Oracle's Heterogeneous Services. You can define a database link in Oracle that connects, via ODBC, to Sybase. And using this database link you can push (insert) data into Sybase.
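    As a minimal sketch of the SQL*Plus side (the emp table and its columns are placeholders; quoting and NULL handling are left out):
    <code>
    SET PAGESIZE 0 LINESIZE 1000 TRIMSPOOL ON FEEDBACK OFF
    SPOOL emp.csv
    SELECT empno || ',' || ename || ',' || sal FROM emp;
    SPOOL OFF
    </code>
    The resulting emp.csv can then be loaded on the Sybase side with bcp in character mode (-c) with a comma field terminator (-t,).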

  • How to populate query data into an Excel sheet in Oracle

    Dear guys,
    How can I populate query results into an Excel sheet from Oracle?
    Please provide a solution.
    Thanks & Regards,
    Senthil K Kumar

    Hi
    To make Excel sheets from SQL*Plus, you can use the SET MARKUP HTML option. Here's an example.
    <code>
    SET LINESIZE 4000
    SET VERIFY OFF
    SET FEEDBACK OFF
    SET PAGESIZE 999
    SET MARKUP HTML ON ENTMAP ON SPOOL ON PREFORMAT OFF
    SPOOL c:\test_xls.xls
    SELECT object_type
    , SUBSTR( object_name, 1, 30 ) object
    , created
    , last_ddl_time
    , status
    FROM user_objects
    ORDER BY 1, 2;
    SPOOL OFF
    SET MARKUP HTML OFF ENTMAP OFF SPOOL OFF PREFORMAT ON
    SET LINESIZE 2000 VERIFY ON FEEDBACK ON
    </code>

  • How could I Write data into a field in Oracle whose data type is VARCHAR2

    The target data I want to write into Oracle is at http://tw.yahoo.com/info/utos.html.
    Right now the data is stored in a MySQL database, in a field whose data type is "Text".
    I want to read the data from the MySQL database and store it in a field in an Oracle database.
    In Oracle, I created a field of data type VARCHAR2(4000) to hold the data.
    I use JSP to read the data from MySQL and insert it into Oracle through JDBC.
    But the page shows me "javax.servlet.ServletException: Data size bigger than max size for this type: 25623".
    Could anyone please help me resolve this problem?
    Thank you very much.

    My hunch is that the problem is that VARCHAR2(4000), by default, allocates 4000 bytes of storage. Depending on your database character set, a single character may require up to 4 bytes of storage.
    If you are on 9i, you can declare the column VARCHAR2(4000 CHAR) to allocate 4000 characters of storage. You can also set NLS_LENGTH_SEMANTICS to CHAR, which causes Oracle to assume that your declarations are in characters rather than bytes.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
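    A minimal sketch of the difference (table and column names are hypothetical):
    <code>
    -- Default byte semantics: up to 4000 bytes
    CREATE TABLE t_bytes (txt VARCHAR2(4000));

    -- Character semantics (9i and later): up to 4000 characters
    CREATE TABLE t_chars (txt VARCHAR2(4000 CHAR));

    -- Or make character semantics the session default for new declarations
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
    </code>
    Note that the failing value here is 25623 bytes, which exceeds the 4000-byte on-disk limit of VARCHAR2 either way; data that large ultimately needs a CLOB column.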

  • IMP-00008: unrecognized statement in the export file, oracle 11gr2 on redhat 5

    I am using Oracle 11gR2 on Red Hat 5 Linux to import (imp) a dmp file containing a table with a BLOB data type. I got the following error, with binary non-ASCII characters on the screen, and the import failed at that table:
    IMP-00008: unrecognized statement in the export file: (followed by a lot of non-ASCII characters)
    How did that happen, and how do we handle it?
    -Henry

    Hello,
    IMP-00008 may be due to several reasons: the dump may be corrupted, you may have hit a bug, and so on.
    Is this the only error you got, or do you have other error messages (for instance IMP-00032)?
    Also, I don't know why you use exp/imp in 11.2; the original Export/Import is not recommended there. You should use Data Pump, which is much more powerful.
    Please find enclosed a link about Data Pump:
    http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php
    Hope this helps.
    Best Regards,
    Jean-Valentin Lubiez
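    For comparison, a minimal Data Pump round trip looks like this (a sketch; dpump_dir is a hypothetical directory object, and scott.blob_tab stands in for your table):
    <code>
    expdp system/password DIRECTORY=dpump_dir DUMPFILE=tab.dmp LOGFILE=expdp_tab.log TABLES=scott.blob_tab
    impdp system/password DIRECTORY=dpump_dir DUMPFILE=tab.dmp LOGFILE=impdp_tab.log
    </code>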

  • How can I load an Excel sheet into a table in Oracle through a PL/SQL procedure

    Hi,
    How can I load an Excel sheet into a table in Oracle through a PL/SQL procedure or a PL/SQL block? The Excel sheet is saved on the C or D drive of my machine, in .xls format.

    Depending on how big your spreadsheet is and how frequently you want to do this, you might want to construct the insert statements in Excel, then run those. I have done this to load a few hundred rows for a one-off test on dev.
    E.g. if you have the values 1 and 'a' in your spreadsheet and want to insert them into table xxx, columns col1 and col2, put this formula in column C:
    | /|   A   |   B   |   C
    |1 | col1  | col2  |
    |2 |     1 | a     | ="insert into xxx ("&$A$1&","&B1&") values ("&A2&",'"&B2&"');"
    Column C then contains
    insert into xxx (col1,col2) values (1,'a');
    which you can paste into SQL*Plus or a script.

  • Pulling data from oracle into sql server 2005

    hi,
    these days I am working with SQL Server 2005 on Windows Server 2008 64-bit, and Oracle 10g on 32-bit Unix.
    My problem is that when I pull data from Oracle into SQL Server, it shows me about *500-700 fewer rows*.
    Why is this happening? Is it because of 32-bit vs. 64-bit, or because of the different OS?

    Akki,
    are you using snapshot or replication from MSSQL? I am doing the same thing and hope to share your experience.
    I am using import/export from Management Studio to pull some data from an Oracle database, and I am working on how to pick up changes on the tables pulled from Oracle.
    Thanks,
    -hank

  • Data extracted from production tables in Oracle into pipe-delimited flat files

    Hi
    Please, can anyone tell me how to extract data from Oracle into pipe-delimited flat files using a PL/SQL stored procedure, and also how to do incremental updates? Please tell me, it's urgent.

    "how to extract data in oracle into pipe delimited flat files using plsql stored procedure": You can try UTL_FILE.
    "and also do incremental updates": This part I don't get. Updates to what? The above file?
    "plz tel me its urgent": We don't care.
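    A minimal UTL_FILE sketch of the extract part (DATA_DIR, the emp table, and its columns are hypothetical placeholders; UTL_FILE itself is the standard package):
    <code>
    CREATE OR REPLACE PROCEDURE dump_emp_pipe IS
      f UTL_FILE.FILE_TYPE;
    BEGIN
      -- DATA_DIR must be a directory object the session can write to
      f := UTL_FILE.FOPEN('DATA_DIR', 'emp.dat', 'w');
      FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
        UTL_FILE.PUT_LINE(f, r.empno || '|' || r.ename || '|' || r.sal);
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /
    </code>
    For the incremental part, a common approach is to filter the cursor on a last-modified timestamp column, but that depends on the table design.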

  • Unable to import oracle 9.2 data to oracle 10.2

    Dear all,
    As part of an upgrade project I am facing this problem.
    I have taken a DBEXPORT from 4.6C (EXT kernel) on Oracle 9.2.7, and I want to import it into 4.6C on Oracle 10.2.2 (4.6C on Oracle 10.2.2 already exists on another system).
    When I try to import through DBRELOAD, it shows the list of supported database versions as Oracle 8.1.7 and Oracle 9.2 only; it does not show Oracle 10.2 (in DBRELOAD as well as DBMIG).
    I want to import the Oracle 9.2 database into the existing 10.2 database.
    How do I proceed on this? Kindly help me.
    Thanks & regards
    Subbu

    Hi Markus,
    I have raised the same question: if I want to do the import with the same version of Oracle, how can I carry out the procedure? I have exported the database through the GUI: Newinst --> Export Database... (4th option of the main installation).
    The export is done. It created some 9 folders and 1 LABEL.ASC file.
    Now, as suggested by you in my "Export/import" thread, I am trying to import with the option you suggested (standard install, and then the 2nd option, System Copy/Migration). But after following all that, when the installation asks for the Migration CD LABEL.ASC, I give it what was exported, and it gives an error saying DBSIZE.XML could not be found in the ORA folder. When I checked the export folders, there is only one DBSIZE.XML file, and that is in the ADA folder.
    Please suggest?
    Regards
    Prash

  • Pre-loading Oracle Text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle Text index into memory: http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application (Oracle 12c), we are indexing a big XML field (which is stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks from the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. Not sure if it is a bug or not.
    What I found as a work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the TOKEN_INFO column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure that reads the LOB so that it gets loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB and loads it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case the DR$S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the TOKEN_INFO column of the DR$I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when doing ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
      (ID NUMBER(9,0) NOT NULL ENABLE,
       XML_DATA XMLTYPE)
      XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML
      (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
    exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name   => 'TEST_SGP',
                                section_name => 'SData_02',
                                tag          => 'SData_02',
                                datatype     => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec ctx_ddl.set_attribute('TEST_STO','BIG_IO','NO');
    exec ctx_ddl.set_attribute('TEST_STO','SEPARATE_OFFSETS','NO');
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size; it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative difference is not as large, but it still went from 14 GB to 19 GB.
    6. Workaround: use the BIG_IO option, so that the TOKEN_INFO column of the DR$I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the one below:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it gets loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2   c_type;
      s    varchar2(2000);
      b    blob;
      buff raw(100);   -- TOKEN_INFO is a BLOB, so the read buffer must be RAW
      siz  number;
      off  number;
      cntr number;
    begin
      s := 'select token_info from DR$i_test$I';
      open c2 for s;
      loop
        fetch c2 into b;
        exit when c2%notfound;
        siz  := 10;
        off  := 1;
        cntr := 0;
        if dbms_lob.getlength(b) > 0 then
          begin
            -- touch a few bytes in every 4K chunk to pull it into the cache
            loop
              dbms_lob.read(b, siz, off, buff);
              cntr := cntr + 1;
              off  := off + 4096;
            end loop;
          exception when no_data_found then
            -- dbms_lob.read raises NO_DATA_FOUND once the offset passes the end
            if cntr > 0 then
              dbms_output.put_line('4K chunks fetched: ' || cntr);
            end if;
          end;
        end if;
      end loop;
      close c2;
    end;
    /
    Rgds, Pierre

    I have been working a lot on this issue recently; I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors this is what they do: MongoDB explicitly says that the index must be in memory, and Elasticsearch uses JVMs that are also in memory. And effectively, if you look at an AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is run continuously.
    I think the algorithm used by Oracle to keep blocks in the cache is too complex. I just realized that 12.1.0.2 (released last week) finally has a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory with compression, etc. This looks ideal for the text index; I hope R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
    And the track I have been following for that is to put the index in a 16K tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick was not working anymore.
    What worked:
    First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
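    A sketch of that setup (the sizes, tablespace name, and preference name are hypothetical, and OMF is assumed so the datafile clause can stay minimal):
    <code>
    ALTER SYSTEM SET db_keep_cache_size = 0 SCOPE=BOTH;
    ALTER SYSTEM SET db_16k_cache_size = 4G SCOPE=BOTH;
    CREATE TABLESPACE text_16k DATAFILE SIZE 20G BLOCKSIZE 16K;
    exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace text_16k storage (initial 64K)');
    </code>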
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 is what avoids the direct path reads issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is pretty much exactly what is described in MOS note 1645634.1, but for a non-partitioned index. The work-around given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a work-around, but I did not find one on Metalink.
    Other points of attention with text index creation (stuff that surprised me at first!):
    - if you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_job;
    - in combination with the fact that, on a RAC, you may see no activity at all on your box, this can be very frightening: Oracle can choose to start the workers on the other node.
    I understand much better now how text indexing works; I think it is a great technology which can scale via partitioning. But as always the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also the whole problem of mixed structured/unstructured searches is completely tackled if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...).
    Regards, Pierre

  • I want to move the data from oracle 8.0.5 to oracle 10g

    Dear gurus, I want to move my data from Oracle 8.0.5 to Oracle 10g. What is the simplest way to do that without loss of data and time?
    thanks and regards

    Since you are on 8.0.5, there is no direct upgrade path for you.
    You need to first upgrade to at least 8.1.7.4.1 before you can migrate directly to 10gR2.
    You can refer to Metalink note 316889.1 for this purpose.
    Another option would be to simply export the data from the 8.0.5 database and import it into the 10g database.
    Simply stated, there is no method which will not take time, but there would be no data loss (assuming you are not going to change the character set of the database).

  • How to install Oracle BPEL Process Manager for OracleAS Middle Tier

    hi,
    I need to install BPEL Process Manager, so I downloaded the following files from OTN:
    1. soa_windows_x86_101310_disk1
    2. soa_windows_x86_bpel_101310
    I read the document named b28980.pdf from bpel\doc\pc.1012 to install BPEL PM, and started with the pre-installation tasks:
    1. Installed Oracle Database 10g.
    2. Ran the Integration Repository Creation Assistant on the database.
    3. Installed Oracle Application Server 10g Release 3 (10.1.3.1.0), selecting the J2EE and Web Server installation type, according to the Oracle Application Server installation guide.
    Installed OracleAS in the path D:\product\10.1.3.1\OracleAS_1.
    4. Install the current release of Oracle BPEL Process Manager for OracleAS Middle Tier.
    Here they mention to select the J2EE and Web Server installation type because that type was selected when Oracle AS was installed.
    So I started to install BPEL PM by running setup.exe, which shows the source and destination locations.
    The default destination is D:\product\10.1.3.1\OraBPEL_1; I clicked next on that screen.
    The next screen is the installation type selection, with two types:
    1. BPEL Process Manager for Developer (371 MB)
    2. BPEL Process Manager for Oracle AS Middle Tier (107 MB)
    I selected 2. BPEL Process Manager for Oracle AS Middle Tier (107 MB) and clicked next.
    A pop-up window opens with the title "dependencies" and the error:
    BPEL Process Manager for Oracle AS Middle Tier will run on top of a supported Oracle Application Server 10.1.3.1.0 J2EE server and Web Server or J2EE server instance. This location does not contain this instance. Please select a new Oracle home that contains a supported instance.
    So I changed the destination path to D:\product\10.1.3.1\OracleAS_1\BIN, and got the same error.
    Please, can anyone tell me the path of the J2EE and Web Server instance for installing BPEL PM for Oracle AS Middle Tier?
    Thanks in Advance
    Aswath Thaniga

    If you choose the developer version you will be fine.
    If you have installed the J2EE and Web Server installation into D:\product\10.1.3.1\OracleAS_1, then that is the location to install your BPEL PM into, not D:\product\10.1.3.1\OraBPEL_1 or D:\product\10.1.3.1\OracleAS_1\BIN.
    D:\product\10.1.3.1\OracleAS_1 is what we call the ORACLE_HOME. Generally we create a new home for each install, but in this case there is a dependency on the 10.1.3.1 OC4J container, so it needs to be installed into the 10.1.3.1 Oracle home.
    The BIN directory just holds the executables for that home; it's not the actual home.
    cheers
    James

  • Can i  use Oracle Database Audit Vault and Oracle Database Firewall on Solaris?

    Can i  use Oracle Database Audit Vault and Oracle Database Firewall on Solaris?

    4195bee8-4db0-4799-a674-18f89aa500cb wrote:
    I don't have access to My Oracle Support. Can you send the text or HTML of the document, please?
    Moderator Action:
    No, they cannot send you a document that is available only to those with access to MOS.
    That would violate the conditions of having such service contract credentials.
    Asking someone to violate those privileges is a serious offense and could get the other person's organization banned from all support and all their support contracts cancelled.
    Your post is locked.
    Your duplicate post that you placed into the Audit Vault forum space has been removed (it had no responses).
    This thread which you had placed in the Solaris 10 forum space is moved to the Audit Vault forum space.
    That's the proper location for Audit Vault questions.
