Oracle Linux 6.3 with Oracle DB 11gR2 11.2.0.1.0 Installation issue

Trying to install Oracle Database 11gR2 version 11.2.0.1.0 on Oracle Linux 6.3
(11.2.0.1.0 is available for download from the Oracle website.)
I have also run the following:
su - root
yum install oracle-rdbms-server-11gr2-preinstall
But the installer still reports the following packages as missing and cannot proceed.
All other checks have passed; only the following packages are still flagged:
libaio 0.3.105
libaio-devel 0.3.105 0.3.107
compat-libstdc++-33 3.2.3
libgcc 3.4.6 4.4.6
libstdc++ 3.4.6 4.4.6
unixODBC 2.2.11 2.2.14
unixODBC-devel 2.2.11 2.2.14
pdksh
I have tried yum install for each of the packages above, but yum reports them as not available. Why can't yum find these individual packages?
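A rough way to check what the enabled OL6 repositories can actually supply (a sketch; assumes the standard public-yum channels in /etc/yum.repos.d):
su - root
# list which of the required packages any enabled repository can provide
for p in libaio libaio-devel compat-libstdc++-33 libgcc libstdc++ unixODBC unixODBC-devel pdksh; do
    yum list available "$p" || echo "$p: not in any enabled repo"
done
# pdksh was dropped after EL5; on OL6 the ksh package is its replacement
yum install ksh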

You cannot use OL 6.x since currently only 11.2.0.3 is certified to be installed on this OS - http://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm#CHDFHIEA
If you are going to use 11.2.0.1, you will need OL 5.x or another certified OS version - http://docs.oracle.com/cd/E11882_01/install.112/e24321/pre_install.htm#CIHFICFD
HTH
Srini

Similar Messages

  • SAP R/3 4.7 EXt200 installation on RedHat Linux 5.2 with Oracle 10G

    Hi,
    I got this error during Database Instance Installation:
    All file system node operations of table tORA_SapdataNodes processed successfully.
    ERROR      2009-08-31 11:04:39 [iaxxinscbk.cpp:289]
               abortInstallation
    MDB-06020  File not found: [no oracle rdbms file found].
    I am done with CI Installation.
    Any help would be appreciated.

    Hi,
    First of all, please check whether your OS + DB combination is supported by SAP. You can check this in the Product Availability Matrix at:
    http://service.sap.com/pam
    If your environment (RedHat Linux 5.2 with Oracle 10G) is supported, be informed that after starting sapinst, in the "create database" step it will prompt you to install your Oracle DB. So open a new window, install your database, apply the DB patches, and then continue with sapinst after the Oracle installation completes. sapinst will prompt you for the remaining details during the installation.
    Hope this guides you with your query.
    Thanks..
    Mohit

  • Oracle Linux 6.2 with RH compatible kernel versus Red Hat Linux 6.2 diff?

    Hello everyone
    * assume that I booted Oracle Linux 6.2 with a Red Hat Compatible Kernel - as opposed to booting Oracle Linux 6.2 with the UEK R1 or UEK R2 kernels *
    I am interested in any feature differences between the two OSes relating to: functionality, reliability, performance, stability and tools.
    The reason I ask is that my employer is trying to figure out whether OL 6.2 with a RH compatible kernel is identical to the RHEL 6.2 OS, excluding the Oracle UEK and Oracle clustering extra functionality.
    This OS will not be used to run the Oracle 11gR2 database; it is to be used for other Linux-based non-database applications (such as Java, JDBC, Apache, C/C++, etc.).
    many thanks
    Yuri

    yurib wrote:
    So now I can tell them to use Oracle Linux 6.x with a RH compatible kernel for all those non-DB systems and use OL 6.x with UEK for Oracle 11g database.
    And I remember there was (still is?) a third kernel, which does not contain new features or performance enhancements, but only bug fixes for the RH compatible kernel.
    I am not sure you are going to need it, but it can be very useful in some cases. For example, if there is a bug in the RH kernel and you don't want to run the UEK line because you want to stick to 100% strict RHEL compatibility, Oracle can put the fix into this third, bug-fix-only RH compatible kernel series.
    (For me, I've set up dual-boot UEK & standard RHEL kernels on my machines and have never encountered issues or bugs.)
    Many thanks, as always
    Yuri

  • Integrating Oracle Fusion Sales Cloud with Oracle Business Intelligence Cloud Service (BICS)

    Ever wondered how to integrate Oracle Fusion Sales Cloud with Business Intelligence Cloud Service (BICS)?
    This blog outlines how to programmatically load Sales Cloud data into BICS, making it readily available to model and display on BICS dashboards.
    http://www.ateam-oracle.com/integrating-oracle-fusion-sales-cloud-with-oracle-business-intelligence-cloud-service-bics/

    I wouldn't try installing Oracle VM itself on an EC2 instance, as EC2 is essentially Xen itself. Rather, you should just be able to transport existing Oracle VM images to the EC2 cloud. I think this is what you mean, but your opening paragraph is slightly ambiguous. :)
    From a VPN perspective, I'd use OpenVPN as it has clients for all major operating systems (Windows, MacOS X, Linux) that are fairly easy to package and install. Packages for OpenVPN exist in EPEL so it's easy to install on OEL5. You could also consider using a firewall instead of a VPN and only allowing connectivity from specific IP addresses/ranges. This has the benefit of not requiring client software, but it does require a fixed IP address/range on the client-side.
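    As a sketch of the firewall alternative mentioned above (the 203.0.113.0/24 range and ports are placeholders for your office network and services):
    # keep replies to outbound connections working
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # allow SSH and the Oracle listener only from a known client range
    iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 1521 -j ACCEPT
    # drop all other inbound traffic and persist the rules
    iptables -A INPUT -j DROP
    service iptables save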

  • I am trying to connect Oracle Developer Suite Forms with an Oracle 10g database

    I am trying to connect Oracle Developer Suite Forms with an Oracle 10g database, but when I enter the username and password this message appears:
    ORA-12560: TNS: protocol adapter error
    It happens every time; even if I try to connect from Reports or Designer I get the same problem, and no connection is made.
    Can anybody help me resolve this problem?
    Arshad Khan

    Duplicate thread:
    Re: connection problem
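    For reference, ORA-12560 from client tools often means the client attempted a local (bequeath) connection because no usable connect identifier was supplied; a quick sanity check from the Developer Suite machine (the alias orcl is a placeholder from tnsnames.ora):
    # verify the alias resolves and the listener answers
    tnsping orcl
    # connect with an explicit alias rather than relying on a local default
    sqlplus scott/tiger@orcl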

  • Is practising on Oracle OLAP features free with Oracle 11g EE?

    Hi,
    Is practising on Oracle OLAP features free with Oracle 11g EE?
    Information found in Internet:
    - Oracle Corporation markets the Oracle Database OLAP Option as an extra-cost option to supplement the "Enterprise Edition" of its database.
    - Oracle Database OLAP is there pre-installed along with the rest of Oracle Database EE. You just need to license it for use on the DB Machine when you choose to use it.
    - Oracle OLAP is a world class multidimensional analytic engine embedded in Oracle Database 11g.
    Please confirm whether I can try some OLAP features in my office, where I have Oracle 11g EE installed, as I have been asked to do an initial POC.
    Regards,
    Sudhir

    Hi there,
    I believe that the OLAP license falls under the developer license on OTN - http://www.oracle.com/technetwork/testcontent/standard-license-088383.html
    You should contact your local Oracle sales rep if you are still unsure
    Thanks,
    Stuart Bunby
    OLAP Blog: http://oracleOLAP.blogspot.com
    OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
    OLAP on OTN: http://www.oracle.com/technology/products/bi/olap/index.html
    DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html
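    To see whether the OLAP option is actually linked into an existing 11g EE installation before starting a POC, a quick check (a sketch; run from an account with SYSDBA or SELECT on v$option):
    echo "select parameter, value from v\$option where parameter = 'OLAP';" | sqlplus -s / as sysdba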

  • Oracle Linux 6.2 with RAC 11g cluster install fails on root.sh ioctl

    I have 2 HP Servers I'm trying to install a cluster to.
    I've tried Oracle Linux 6.2 and 6.3 with the same error on both nodes when running the root.sh script.
    I have tried the permissions, run-levels, etc. that I have found in the forum, and nothing has worked; the only thing that works great is the deconfigure: /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
    root.sh Errors:
    Adding daemon to inittab
    CRS-4124: Oracle High Availability Services startup failed.
    CRS-4000: Command Start failed, or completed with errors.
    ohasd failed to start: Inappropriate ioctl for device
    ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
    I can't see what's wrong. I did use ASM to configure iSCSI disks from a storage device, which appeared to work correctly:
    # oracleasm listdisks
    VOL1
    VOL2
    VOL3
    VOL5
    # ls /dev/oracleasm/disks/
    VOL1 VOL2 VOL3 VOL5
    When I created the disks with ASM I used multipath successfully:
    # oracleasm createdisk VOL1 /dev/mapper/23535333762373932
    The other node can see the disks just fine with oracleasm listdisks
    I chose 3 disks for the OCR - VOL1-3
    There was one forum workaround, editing $GRID_HOME/crs/install/s_crsconfig_lib.pm, which I did on Oracle Linux 6.3 to no avail. I also tried using NFS-mounted volumes for the OCR drives but got the same error.
    Using the install: linux.x64_11gR2_grid

    oracleasm (aka ASMLib) may not be supported on RHEL/OEL 6.x. You do NOT need ASMLib, and its days are numbered.
    On review, this document, even though it was written for 10g, still appears to be relevant:
    "10g: Using Openfiler iSCSI with an Oracle RAC database on Linux [ID 371434.1]"
    The udev utility can be used for disk naming consistency between the nodes; it is the preferred tool. Note 371814.1 explains how to use the udev option, and a sketch follows below.
    See also: Can't install GI 11gr2 (11.2.0.3) root.sh fail
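    A minimal sketch of the udev approach on OL6 (the WWID is the multipath name from the post above; the owner/group names and scsi_id flags are assumptions to verify against Note 371814.1):
    # confirm the stable identifier udev will see for the device
    /sbin/scsi_id --whitelisted --device=/dev/mapper/23535333762373932
    # /etc/udev/rules.d/99-oracle-asmdevices.rules -- one rule per disk:
    #   KERNEL=="dm-*", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/%k", RESULT=="23535333762373932", OWNER="grid", GROUP="asmadmin", MODE="0660"
    # reload the rules, re-check ownership of the devices, then retry root.sh
    udevadm control --reload-rules
    udevadm trigger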
    Edited by: onedbguru on Dec 13, 2012 3:40 PM

  • Oracle Workflow 2.6 with Oracle 8.1.7 for linux

    Is Oracle Workflow Server 2.6 available for Linux as a
    standalone product against an Oracle 8.1.7 database?
    Oracle Workflow does not seem to be included in the Integration
    Server option with the 8.1.7 installation.
    I've only found the Oracle Workflow Server included with the 9i
    database. Will this work with 8.1.7 as well or does it require
    9i db?
    Thanks in advance for your help,
    Josi Antonio

  • Install a data warehouse on SUSE Linux 9.3 with Oracle 9i Release 2 (9.2.0.4)

    Dear All
    One of my customers needs a data warehouse on SUSE Linux, and I am a core DBA who has worked on production and development servers up to now. I want to know the essentials to keep in mind before configuring a data warehouse for the customer. Can you please suggest the things I need to take care of?
    Any URLs or PDFs?
    Thanks in Advance
    Ravi

    But when I don't give the oracle user all rights, it isn't possible to proceed with the installation
    But if you give those rights, then it's a security hole. From your words I guess you have environment settings similar to:
    ORACLE_BASE=/
    ORACLE_HOME=/<directory_name>
    Why not install into a deeper directory such as /opt, or some directory of your own? For example:
    ORACLE_BASE=/myoracledir
    ORACLE_HOME=$ORACLE_BASE/<directory_name>
    Then chown -R oracle:dba /myoracledir.
    Then oracle will be the owner of just the /myoracledir directory and all its subdirectories.
    I just could look at the error details, but they didn't describe the error anyway.
    That's not quite true. You can find the error log in /tmp/OraInstallYYYY-MM-DD_HH_MI_SS..
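    As a concrete sketch of the layout described above (the directory names are just examples):
    # create a dedicated base directory instead of installing under /
    mkdir -p /myoracledir/product/9.2.0
    chown -R oracle:dba /myoracledir
    # in the oracle user's shell profile
    export ORACLE_BASE=/myoracledir
    export ORACLE_HOME=$ORACLE_BASE/product/9.2.0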

  • Crash in 32 bit C++ application on Linux 5.4 with Oracle Client 11.2.0.2.

    Hi,
    I am getting the following crash in a 32-bit C++ application built on Linux 5.4 with Oracle client version 11.2.0.2:
    Program terminated with signal 6, Aborted.
    [New process 22157]
    #0 0xffffe410 in __kernel_vsyscall ()
    (gdb) bt
    #0 0xffffe410 in __kernel_vsyscall ()
    #1 0xf7dc5c81 in raise () from /lib/libpthread.so.0
    #2 0xf73d4d43 in skgesigOSCrash () from /opt/oracle/11.2.0.2/lib/libclntsh.so.11.1
    #3 0xf7643d61 in kpeDbgSignalHandler () from /opt/oracle/11.2.0.2/lib/libclntsh.so.11.1
    #4 0xf73d5003 in skgesig_sigactionHandler () from /opt/oracle/11.2.0.2/lib/libclntsh.so.11.1
    #5 <signal handler called>
    #6 0xffffe410 in __kernel_vsyscall ()
    #7 0x00be7df0 in raise () from /lib/libc.so.6
    #8 0x00be9701 in abort () from /lib/libc.so.6
    #9 0x0804b716 in main (argc=2, argv=0xffabe244) at crewxa.cc:371
    From the stack trace, it seems that the crash is occurring in the Oracle client library libclntsh.so.11.1. Looking in Metalink, there is a note, "Pro*C Application Cores On 11g Client [ID 1410089.1]", which says that a Pro*C application consistently produces core dumps with the above stack trace on Oracle client 11.2.0.2.
    Such traces result from the function "sqlnst()", but in this case the stack is called from the main function of our application.
    Please advise.

    Thanks again, Phil.
    It seems to be some specific situation, since I was able to install the same version (Client 11.2.0.1, 32-bit) on Windows Vista 64-bit.
    I still hope somebody has an answer.
    Best regards!

  • Oracle Linux 6.3 and Oracle VM 3.0.3 : high "os thread startup" waits

    Hi all,
    we just installed Oracle Linux 6.3 as a PVM guest with Oracle VM 3.0.3.
    The VM is acting as a DB server.
    We see high "os thread startup" wait times in the statspack report. A 10-hour report shows:
    Top 5 Timed Events                                          Avg    %Total
    ~~~~~~~~~~~~~~~~~~                                         wait      Call
    Event                               Waits     Time (s)     (ms)      Time
    ----------------------------- ----------- ------------ -------- ---------
    CPU time                                        13,819               57.5
    db file sequential read         1,839,279        5,791        3      24.1
    enq: TX - row lock contention           1          664   ######       2.8
    os thread startup                   1,350          451      334       1.9
    control file sequential read      166,312          386        2       1.6
    This seems to be an OS or virtualization issue: if I run some very simple commands like "ls" or "top", sometimes I see them hang for several seconds.
    What should I check?
    Thanks,
    Andrea

    This will sound silly, but: make sure you aren't a victim of the "some Linux machines have high CPU utilization after leap second insertion on July 1st" problem. If your server was running NTP, this might have happened. You can google it if you haven't heard of it. A reboot makes it go away.
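    If a reboot is not convenient, the widely circulated remedy (a sketch; run as root, and the ntpd init script name is an assumption for EL-style systems) was to stop ntpd and set the clock once, which clears the stuck timer state:
    /etc/init.d/ntpd stop
    # setting the date to itself steps the clock and resets the leap-second state
    date -s "$(LC_ALL=C date)"
    /etc/init.d/ntpd start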

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
    In our application, on Oracle 12c, we are indexing a big XML field (stored as XMLType with SECUREFILE storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like if the blocks of the TOKEN_INFO column are not in memory, performance can fall sharply).
    But after migrating to Oracle 12c I hit a different problem, which I can reproduce: when I create the index it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table into memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory. Not sure whether it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    so that the token_info column will be stored in a secure file. Then I can change the storage of that column to put it in the keep buffer cache and write a procedure that reads the LOB so it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure that reads the LOB and loads it into the cache is very similar to the loaddollarR procedure from the white paper.
    Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (which was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the token_info column of the DR$ I table. A final note: setting SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example of how to reproduce the problem of the size increasing when running ctx_ddl.optimize_index:
    1. create the table
    drop table test;
    CREATE TABLE test
    (ID NUMBER(9,0) NOT NULL ENABLE,
    XML_DATA XMLTYPE)
    XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
    end;
    /
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
    5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size; it says:
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576 KB to 8.44 MB. With a big index the relative growth is not as large, but it still went from 14 GB to 19 GB.
    6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a secure file and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to the one below:
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
    rem: now we must read the LOB so that it is loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
      buff raw(100);   -- dbms_lob.read on a BLOB requires a RAW buffer
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               -- touch 10 bytes of every 4K chunk so each block of the
               -- LOB is pulled into the keep buffer pool
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               -- raised once the offset runs past the end of the LOB
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
        close c2;
    end;
    /
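    A minimal way to run the procedure and confirm the table segment was moved to the keep pool (a sketch; assumes access to DBA views):
    printf "set serveroutput on\nexec loadTokenInfo\nselect segment_name, buffer_pool from dba_segments where segment_name = 'DR\$I_TEST\$I';\n" | sqlplus -s / as sysdba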
    Rgds, Pierre

    I have been working a lot on that issue recently, I can give some more info.
    First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queuing and dbms_scheduler jobs where response time does not matter, and all those processes are probably filling the buffer cache. We also have a customer-facing application that uses the text index to search the database: performance is critical for them.
    What kind of performance do you get with your application?
    In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors, this is what they do. MongoDB explicitly says that the index must be in memory. Elasticsearch uses JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL statement similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    that is run continuously.
    I think the algorithm used by Oracle to decide which blocks to keep in the cache is too complex. I just realized that in 12.1.0.2 (released last week) there is finally a "killer" feature, the In-Memory option, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the text index; I hope R. Ford will finally update his white paper :-)
    But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size. It seems crazy that this was closed as "not a bug", but it was, and I can't do anything about it. It is a bug in my opinion, because the create index command and the "alter index rebuild" command both result in a much smaller index, so why would the people who developed the optimize function (is it another team, using another algorithm?) make the index twice as big?
    For that, the track I have been following is to put the index in a 16K tablespace: in this case the space used by the index stays more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because R. Ford's trick no longer worked.
    What worked:
    First, set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard 16K block size.
    Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
    I ended up doing the following; event 10949 is set to avoid the direct path read issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
    It worked. With great relief I expected to take some time off, but there was one last surprise. The command
    exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
    gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
    This is very much exactly what is described in Metalink note 1645634.1, but for a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find it on Metalink.
    Other points of attention with text index creation (things that surprised me at first!):
    - if you use the dbms_pclxutil package, then ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
    - this, in combination with the fact that on a RAC you may not see any activity on the box, can be very frightening: it is because Oracle can choose to start the workers on the other node.
    I now understand much better how text indexing works, and I think it is a great technology which can scale via partitioning. But, as always, the design of the application is crucial: most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP, while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the developers to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely addressed if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...).
    Regards, Pierre

  • Oracle ADF security integration with Oracle E-Business Suite SDK JAAS

    I have an Oracle ADF 11.1.2.2 application that is using ADF security for authentication and authorization.
    When we deploy this application to our JDeveloper integrated weblogic server, we utilize the security setting of "Custom" and use weblogic users and roles to map to the ADF application roles. In that environment our security is working properly.
    I have a Weblogic 10.3.5 standalone server that has the ADF runtime installed as well as the Oracle E-Business Suite SDK JAAS implementation installed.
    When I deploy the Oracle ADF application to the standalone WebLogic server, I am directed to the JAAS login page when I attempt to access any JSF page (including those to which I have granted View access through the anonymous-role). Does the Oracle ADF anonymous-role work (i.e., allow anonymous page access) when JAAS security is handled by the Oracle E-Business Suite SDK JAAS implementation?
    Per the SDK instructions, when we install the Oracle ADF deployment on WebLogic we selected "DD only" for our security setting. We have defined enterprise roles in the Oracle ADF security setup (jazn-data.xml) that are assigned the appropriate application roles. Those enterprise roles have the same names (i.e. UMX|YOURROLE) as the E-Business Suite roles that are assigned to our test users. When we log in with an E-Business Suite user/password we receive an error:
    Error 401--Unauthorized
    From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
    10.4.2 401 Unauthorized
    Any thoughts on why that would be?
    Thanks
    Dan

    Thanks Juan.
    With the debugging options enabled, it appears the issue is not with the user/role credentials; it seems the resource grants from jazn-data.xml are not being applied in my standalone WebLogic instance EAR deployment:
    [JpsAuth] Check Permission
    PolicyContext: [TestApp]
    Resource/Target: [untitled1PageDef]
    Action: [view]
    Permission Class: [oracle.adf.share.security.authorization.RegionPermission]
    Result: [FAILED]
    Evaluator: [ACC]
    Failed ProtectionDomain:ClassLoader=sun.misc.Launcher$AppClassLoader@13f5d07
    CodeSource=file:/app/oracle/product/Middleware/oracle_common/modules/oracle.adf.share_11.1.1/adf-share-support.jar
    Principals=total 2 of principals(
    1. JpsPrincipal: oracle.security.jps.internal.core.principals.JpsAnonymousUserImpl "anonymous" GUID=null DN=null
    2. JpsPrincipal: oracle.security.jps.internal.core.principals.JpsAnonymousRoleImpl "anonymous-role" GUID=null DN=null)
    When I access the same page from my integrated weblogic server I see:
    [JpsAuth] Check Permission
    PolicyContext: [TestApp]
    Resource/Target: [untitled1PageDef]
    Action: [view]
    Permission Class: [oracle.adf.share.security.authorization.RegionPermission]
    Result: [FAILED]
    Evaluator: [ACC]
    Failed ProtectionDomain:ClassLoader=sun.misc.Launcher$AppClassLoader@13f5d07
    CodeSource=file:/app/oracle/product/Middleware/oracle_common/modules/oracle.adf.share_11.1.1/adf-share-support.jar
    Principals=total 2 of principals(
    1. JpsPrincipal: oracle.security.jps.internal.core.principals.JpsAnonymousUserImpl "anonymous" GUID=null DN=null
    2. JpsPrincipal: oracle.security.jps.internal.core.principals.JpsAnonymousRoleImpl "anonymous-role" GUID=null DN=null)
    When I review my EAR - I do see jazn-data.xml at:
    /META-INF/jazn-data.xml
    I will review the system-jazn-data.xml to see if the policy information has been migrated properly as part of the EAR deployment.
    Thanks.
    -Dan

  • Oracle Applications Release 11i with Oracle Database 10g Release 2 (10.2.0)

    Hi,
    I am upgrading our Oracle EBS 11i database to Oracle 10g R2 as per Note 362203.1.
    In the section "After the Database Upgrade" -> "5. Implement and run AutoConfig", we need to refer to Note 165195.1 (Section 8: Migrating to AutoConfig on the Database Tier).
    Since I don't have the DB environment file, I created one by copying the environment file from the Oracle 9i Home and replacing 9.2.0 with 10.2.0. I also created an Apache directory under the Oracle 10g Home and copied the perl directory from Oracle 9i Home/Apache to Oracle 10g Home/Apache. This way I got the DB environment file for the Oracle 10g Home.
    Then I used the following command to create the DB Context File:
    perl adbldxml.pl tier=db appsuser=apps
    Then executed autoconfig using:
    adconfig.cmd contextfile=<full path to the context file>
    Is this the way I should do it? It works OK, but I am not sure if my method is correct.
    Please suggest if there is some other method.
    Thanks.
    Thiru

    This is the way I used to do it too. I am not sure whether it is the official way, but it works nevertheless.
    Do make sure you run AutoConfig afterwards, which will create a new environment file for you. Then stop the listener and database, source the new environment file, and restart the listener and database.
    I found another procedure for recreating the environment file:
    1. On the application tier, Execute $AD_TOP/bin/admkappsutil.pl to generate appsutil.zip for the database tier.
    2. Copy this appsutil.zip to the database tier and unzip it into the ORACLE_HOME
    3. Set the following environment variables:
    ORACLE_HOME=<10g ORACLE_HOME>
    LD_LIBRARY_PATH=<10g ORACLE_HOME>/lib:<10g ORACLE_HOME>/ctx/lib
    ORACLE_SID=<instance name running on this database node>
    PATH=$PATH:$ORACLE_HOME/bin
    TNS_ADMIN=$ORACLE_HOME/network/admin/<context_name>
    4. Edit the $ORACLE_HOME/network/admin/tnsnames.ora file. Change the aliases for SID=<new RAC instance name>.
    5. Modify the listener.ora. Change the instance name and Oracle Home to match the environment.
    6. Start the listener.
    7. From the 10g ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:
    adbldxml.pl tier=db appsuser=<APPSuser> appspasswd=<APPSpwd>
    8. De-register the current configuration using the command:
    perl $ORACLE_HOME/appsutil/bin/adgentns.pl appspass=apps contextfile=$CONTEXT_FILE -removeserver
    9. Rename $ORACLE_HOME/dbs/init<rac instance>.ora to a new name (e.g. init<rac instance>.ora.old) to allow AutoConfig to regenerate the file using the RAC-specific parameters.
    10. From the 10g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.
    Now you should have the officially created environment file.
    HTH.
    Arnoud Roth

  • Oracle Tuxedo 8.1 With Oracle 11g Database

    Hi,
    I have an old application that must be compiled with Tuxedo 8.1, and it was built against an Oracle9i database.
    We are migrating to an Oracle 11g database, and now we have some issues compiling against Oracle 11g: for example, some libraries that the Tuxedo XA resource manager needs from the Oracle database can only be found in older versions like Oracle9i (the kpudfo.o file).
    What can be the solution to that problem?
    An Oracle 11g database configuration or a Tuxedo configuration?

    Hi,
    When you say "Tuxedo XA Resource Manager" I'm not sure what you are referring to. Do you mean the Tuxedo transaction management servers (TMS) that are built with the buildtms command? If so, and there are linking errors, it has to do with either your library paths or the RM definition for the Oracle database in the $TUXDIR/udataobj/RM file. Tuxedo itself only uses the XA switch from the resource manager, such as the Oracle database. The resource manager may, however, require a bunch of libraries, which is why I suggested you check your library path. If the above doesn't help, can you post the error you are getting and what is generating/creating it?
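    For illustration, the usual pattern looks like this (the RM entry and library string are typical examples; check the RM file shipped with your Tuxedo version and the client libraries actually present under $ORACLE_HOME/lib):
    # $TUXDIR/udataobj/RM needs a line mapping a name to Oracle's XA switch, e.g.
    #   Oracle_XA:xaosw:-L${ORACLE_HOME}/lib -lclntsh
    # then rebuild the transaction manager server against the 11g client
    buildtms -o TMS_ORA -r Oracle_XA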
    Regards,
    Todd Little
    Oracle Tuxedo Chief Architect
