Physical relocation of an Oracle 9i DB to either Oracle 9i or 10g

Hi all,
I want to change the physical location of an Oracle 9i DB to either Oracle 9i or Oracle 10g. Please suggest a way to do it without data loss. Thanks in advance.

OK, assume the destination server has the same Oracle version installed (the OS should be at least the same platform).
1. Create the four Windows services (one service per database) using the ORADIM utility (see the sketch at the end of this answer).
2. Create a directory structure similar to the source server's, e.g.
Source:
E:\oradata\PROD\ORCL\*.*
Destination:
E:\oradata\PROD\ORCL\ -- create something similar to the current environment
3. Shut down the database.
4. Copy all the files to the destination server, into their respective locations (these locations should mirror the source server).
5. Copy INITORCL.ORA to the destination (production) server and put it in the ORACLE_HOME\database folder (on Windows).
If INITORCL.ORA is in a different folder, edit the Windows service using ORADIM to point at that location:
ORADIM -edit -sid ORCL -pfile E:\oradata\PROD\ORCL\initorcl.ora -startmode auto
That's it.
If there are password files, copy them to the ORACLE_HOME\database folder (Windows) or ORACLE_HOME/dbs (Unix) as well.
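For step 1, a minimal sketch of the ORADIM call that creates one such service (the SID, password, and pfile path below are illustrative; adjust them to your environment):
oradim -new -sid ORCL -intpwd mypassword -startmode auto -pfile E:\oradata\PROD\ORCL\initorcl.ora
Repeat once per database, changing the SID and pfile each time.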

Similar Messages

  • NQSError:73006 Cannot obtain Oracle BI Servers from either primary Cluster

    Hi,
OBIEE 11.1.1.6.2 BP1. Everything was working properly until this noon; after that I got the nQSError: 73006 error. Cross-checking service status, all BI services are up and running (I verified with opmnctl status). The WebLogic console and WebLogic EM are running, and the default WebLogic user ID works fine to log in to the console and EM. However, with the same user ID, when I log in to Analytics, I get the error below.
Error on the BI Analytics login page:
"Unable to Sign In
An error occurred during authentication. Try again later or contact your system administrator"
Checking the BI Presentation Server log (sawlog0.txt), I got the error message below:
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    [2012-09-12T20:19:57.000+08:00] [OBIPS] [ERROR:31] [] [saw.security.odbcuserpopulationimpl.searchidentities] [ecid: 00iA1OpxyBnFw0zlrL0Bjz3awoi2zG2zn0001Cw000000,0:280] [tid: 6628] Error retrieving user/group data from Oracle BI Server's User Population API.
    Unable to create a system user connection to BI Server while running user population queries
    Odbc driver returned an error (SQLDriverConnectW).
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
    [nQSError: 73006] Cannot obtain Oracle BI Servers from either the primary Cluster Controller (10.10.1.7) or the secondary Cluster Controller () specified for the clustered DSN. (HY000)[[
    File:odbcuserpoploaderimpl.cpp
    Line:719
    Location:
         saw.security.odbcuserpopulationimpl.searchidentities
         saw.security.userpopulationmanagerimpl.getaccountdetailsbyid
         saw.CatalogAttributes.cache.cleanup
         saw.taskScheduler.processJob
         saw.threadpool.taskscheduler
         saw.threads
    ecid: 00iA1OpxyBnFw0zlrL0Bjz3awoi2zG2zn0001Cw000000,0:280
    ThreadID: 6628
    task: Cache/CatalogAttributes
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
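For reference, the opmnctl status check mentioned above is run from the BI instance's bin directory (the path below assumes a default install location and is only illustrative):
D:\Oracle\Middleware\instances\instance1\bin\opmnctl status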
Note: I have tried the links below, but no luck.
    http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e14496/bi_trouble.htm
    BI 11g sign in problem
    http://debaatobiee.wordpress.com/2011/03/17/obiee-11g-error-during-authentication/
    Re: OBIEE 11g Cluster Controller Failed to start
    My System Configuration
    OBIEE11.1.1.6.2BP1
    windows 2008 64bit
    IE8
    jrockit-jdk1.6.0_31-R28.2.3-4.1.0
Static IP configured (loopback adapter, and the hosts file has been updated with the correct IP)
SSL configured on the current system
    Thanks
    Deva

    Hi,
Yes, it changed; I am just working on the BI Composer part. I applied the JDEV patch 13952743, then extended the domain by script, got some errors, then rolled back the JDev patch and reapplied it.
    FYI:
Applying the JDev 13952743 patch file:
    E:\>cd E:\opatch_top\13952743
    E:\opatch_top\13952743>set PATH=%PATH%;D:\Oracle\Middleware\Oracle_BI1\OPatch
    set ORACLE_HOME=D:\Oracle\Middleware\oracle_common
    set PATH=%ORACLE_HOME%\bin;%PATH%
    set JAVA_HOME=%ORACLE_HOME%\jdk
    set PATH=%JAVA_HOME%\bin;%PATH%
    set PATH=%ORACLE_HOME%\OPatch;%PATH%
    E:\opatch_top\13952743>opatch version
    E:\opatch_top\13952743>opatch apply
I used the command below for the rollback:
    opatch rollback -id 13952743
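To confirm that the rollback (and the subsequent reapply) took effect, a quick inventory check using the same environment setup as above (a sketch, not from the original post):
opatch lsinventory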
    Thanks
    Deva

  • Changing the data source from DB2 to ORACLE in SBOP 4.0

    Hi Gurus,
We have installed SBOP 4.0 SP2 successfully on Linux, choosing DB2 as the default database as suggested by SAP, since there is an issue with the RH Linux 5.5 version. Now we need to change the CMS data source back to Oracle 11g. For that, we have to execute cmsdbsetup.sh and go with the "copy" option (copy data from another data source). We need to provide the target/destination CMS database details, in my case Oracle (TNS and CMS user). We also need to provide the source (DB2) CMS user details. As we went with the bundled/default DB2 installation, we are not able to find the CMS user name and password (nowhere during the installation were we prompted to provide a CMS username and password).
What will be the default CMS username/password in DB2?
    Thanks,
    Sandeep

    Hi,
The workaround is to create/add an extra node (SIA node) with the default-servers option for the existing CMS, and provide my Oracle CMS username/password along with the TNS name using cmsdbsetup.sh. Make sure the new node is visible in the Servers section of the CMC console (i.e. http://<webappserver>:8080/BOE/CMC --> Servers) and that all the servers are running. Then you can delete the old SIA, which was connected to DB2, from CMC --> Servers.
    Thanks,
    Sandeep

Not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimensions under the Physical tab in ODI 12c

I am not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimensions under the Physical tab in ODI 12c, but I am able to see other IKMs. Please help me: how can I see them?

Nope, it has not been altered.
    COMPONENT NAME: LKM Oracle to Oracle (datapump)
    COMPONENT VERSION: 11.1.2.3
    AUTHOR: Oracle
    COMPATIBILITY: ODI 11.1.2 and above
    Description:
    - Loading Knowledge Module
    - Loads data from an Oracle Server to an Oracle Server using external tables in the datapump format.
    - This module is recommended when developing interfaces between two Oracle servers when DBLINK is not an option.
    - An External table definition is created on the source and target servers.
- When using this module on a journalized source table, the journaling table is first updated to flag the records consumed, and those records are then cleaned out at the end of the interface.

  • How to change the password of a schema using Oracle SQL Developer

Hi, I need to change the password of a schema using Oracle SQL Developer. How do I do it?

    Hi
ALTER USER username IDENTIFIED BY new_password;
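For example, to reset the password of a hypothetical user SCOTT (run from a privileged account such as SYSTEM, e.g. in a SQL Developer worksheet):
ALTER USER scott IDENTIFIED BY tiger2;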

  • How to change the db_name of the database in oracle 9i

Please tell me
how to change the db_name of a database in Oracle 9i.
Regards

ALTER DATABASE RENAME oldname TO newname
This is not a valid command.
@OP, you may want to look into the NID utility.
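A minimal sketch of the NID (DBNEWID) flow with a hypothetical new name NEWDB (take a backup first; with SETNAME=YES only the name changes, so no OPEN RESETLOGS is needed):
-- in SQL*Plus:
shutdown immediate
startup mount
-- from the OS shell:
nid TARGET=SYS/password DBNAME=NEWDB SETNAME=YES
-- back in SQL*Plus: update db_name in the pfile/spfile to NEWDB, then:
startup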

  • Display Error: The display template had an error. You can correct it by fixing the template or by changing the display template used in either the Web Part properties or Result Types. $(...).slick is not a function (OnPostRender: )

Hi Team,
I implemented a news carousel using display templates.
It works, but sometimes it shows a "something went wrong" message.
When I click "show details",
it shows an error like:
Display Error: The display template had an error. You can correct it by fixing the template or by changing the display template used in either the Web Part properties or Result Types.
$(...).slick is not a function (OnPostRender: )
Sometimes it shows the result, and on refreshing the page I get the error above.
How do I fix the issue?
Regards,
Dhayanand

Hi Wendy Li,
We finally fixed it.
The problem was that we were referencing two different versions of the jQuery file, one in the master page and one in the page layout.
We corrected it by referencing the same jQuery version in both pages.
Regards,
Dhayanand

HT6203 Did this update make a physical change? Branching from that question, will this update confuse, let's say, a parent that isn't savvy?

Did this update make a physical change? Branching from that question, will this update confuse, let's say, a parent that isn't savvy?

    Did this update make a physical change?
    No
will this update confuse, let's say, a parent that isn't savvy?
    This will depend on whether they are savvy enough to open Macintosh HD > Applications > Utilities > AirPort Utility on their Mac , click on the picture of the AirPort, and then click the Update button.
If that sounds too complicated, it's best to let someone else handle the update.

  • How can I change the directory in which I install oracle-xe-universal?

Because I need to install it onto my SSD, which is mounted at a different directory.
I know I can install the RPM package with a relocated root, but that doesn't work.
How can I simply change the directory where oracle-xe is installed?
Thanks a lot.
Best regards,
Robin

    Kay,
    If you are looking at the entire music library ("Music") you can sort by any of the columns by clicking the column header.  If there is a column you can't see, enable it by going to View > View Options.
    Within a playlist, you can do the same, and in addition you can click above the column of sequence numbers, which will then allow you to drag tracks up or down to get any order you wish.

  • How to install Oracle BPEL Process Manager for OracleAS Middle Tier

hi,
I need to install BPEL Process Manager, so I downloaded the following files from OTN:
1. soa_windows_x86_101310_disk1
2. soa_windows_x86_bpel_101310
I read the document named b28980.pdf from bpel\doc\pc.1012 to install BPEL PM,
and started on the pre-installation tasks:
1. Installed Oracle Database 10g.
2. Ran the Integration Repository Creation Assistant on the database.
3. Installed Oracle Application Server 10g Release 3 (10.1.3.1.0), selecting either the J2EE Server installation type or the J2EE and Web Server installation type. I selected the J2EE and Web Server installation type
and installed according to the Oracle Application Server installation guide.
Installed OracleAS in the path: D:\product\10.1.3.1\OracleAS_1
4. Install the current release of Oracle BPEL Process Manager for OracleAS Middle Tier.
Here they mention selecting the J2EE and Web Server installation type because that type was selected when Oracle AS was installed.
So I started installing BPEL PM by launching setup.exe, which shows the source and destination locations.
The default destination is D:\product\10.1.3.1\OraBPEL_1; I selected Next on that screen.
The next screen is the installation type selection; there are two types:
1. BPEL Process Manager for Developer (371MB)
2. BPEL Process Manager for Oracle AS Middle Tier (107MB)
I selected 2, BPEL Process Manager for Oracle AS Middle Tier (107MB), and clicked Next.
A pop-up window opens with the title "dependencies" and the error:
BPEL Process Manager for Oracle AS Middle Tier will run on top of a supported Oracle Application Server 10.1.3.1.0 J2EE Server and Web Server or J2EE Server instance. This location does not contain this instance. Please select a new Oracle home that contains a supported instance.
So I changed the destination path to D:\product\10.1.3.1\OracleAS_1\BIN, and I still got the same error.
Can anyone please tell me the path of the J2EE and Web Server instance for installing BPEL PM for Oracle AS Middle Tier?
Thanks in advance
Aswath Thaniga

    If you choose the developer version you will be fine.
If you have installed the J2EE and Web Server installation into D:\product\10.1.3.1\OracleAS_1, then that is the location you install your BPEL PM into, not D:\product\10.1.3.1\OraBPEL_1 or D:\product\10.1.3.1\OracleAS_1\BIN.
D:\product\10.1.3.1\OracleAS_1 is what we call the ORACLE_HOME. Generally we create a new home for each install, but in this case there is a dependency on the 10.1.3.1 OC4J container, so it needs to be installed into the 10.1.3.1 Oracle home.
The bin directory just holds the executables for that home; it is not the actual home.
    cheers
    James

  • Oracle VM 3.1.1, Oracle VM Server, PeopleSoft Templates and networking

    I have installed Oracle VM Manager on an Oracle Linux x86_64 system, all freshly installed, and two Oracle VM Server 6 systems also freshly installed. These three servers are each connected to two networks. One is a 192.168.15.0/24 ("net-A"), and the other is 10.8.15.0/24 ("net-B"). net-B also has the fileserver for the repositories et al directly attached. "net-A" is connected to the outside world. This is all working great; all servers can intercommunicate, can be reached from other devices on each network, et cetera. I can ssh from any machine on the network to these machines, and vice versa. All servers correctly use the internal and the external DNS, and can communicate with Google, et cetera. Excellent!
    Now, I have downloaded the templates for PeopleSoft HCM9.1, and PeopleSoft PeopleTools 8.52, and have successfully created Virtual Machines from these. The VMs start up and run successfully, and I have gone through the startup configuration prompts using the Oracle VM "Launch Console" feature.
My problem is that I have not yet figured out how Oracle VM networking is supposed to work, so I cannot get these machines to talk to each other or to the outside world, and I cannot ping them from other devices on the network either. Obviously, there's no advantage to having a PeopleSoft server running when one cannot attach to it. I've read through the documentation numerous times, and I've pored over the http://itnewscast.com/chapter-7-oracle-vm-networking-8021q document over and over, but I get lost in the virtual-upon-virtual-upon-virtual world. Maybe (probably) it's me, but I am not getting how this fits together, or where/how the virtual-ness of the network ends. Plus, all of the configurations in that itnewscast.com Chapter 7 article involve at least one switch (virtual, maybe? not clear!) between the VMM and the VMS, and I don't have a switch involved in this network... it's flat, with everything on the same wire.
    My Oracle VM network is super simple at present: There is exactly one network ("ps-net"), and it runs all five network channels (server management, live migrate, storage, etc.). Both servers are on this network, and the NIC used is the "net-B" NIC. There is no VLAN, and the IP addresses are set by DHCP. Bonding, the configuration display says, is Not applicable. Since these devices are on the same NIC as "net-B," I provided the 10.8.15.x network information when prompted, and assigned them fixed IP addresses on that network. For "gateway," I specified the address of the VMM, not knowing what else to use. And, as I said, these VM don't talk to anything, not even to each other.
    My needs are very simple. The shame is I've built all this up for the express purpose of running those two templates, and it's been a battle, to say the least, to get this far. Who can point me to the error of my ways, or a better way to accomplish this end?
    Thanks for your time, and for reading this far!

OK. Out of a desire to resolve this, I have completely removed the 192.* network from this configuration, by disconnecting the eth0 networks and changing ifcfg-eth0 to ONBOOT=no (yes, I know either action should suffice).
So there is exactly one network involved now. (Greg King said that's OK if scalability is not an issue, and if he said it, I believe it. I'll complicate it later, after I get the simple case working.) And one VMS is out of the configuration for now. So I have ora-vmm at 10.8.15.49, ora-vms1 at 10.8.15.47, and the fileserver at 10.8.15.50. ora-vms2 is at 10.8.15.48, but is down for now. The server pool address is set to 10.8.15.1. The network looks like this:
    ID: 10.8.15.0
    Name: ps-net1
    Channels: all
    Servers: ora-vms1, ora-vms2
    Selected paths: ora-vms1 Port (2) (eth1), ora-vms2 Port (2) (eth1)
    VLAN Group: None
    VLAN Segment: None
    Configure IP Address: ora-vms1 Port (2) (eth1) Use DHCP 10.8.15.47 255.255.255.0 Bonding: N/A
    Configure IP Address: ora-vms2 Port (2) (eth1) Use DHCP 10.8.15.48 255.255.255.0 Bonding: N/A
    ifconfig from ora-vmm
    eth1 Link encap:Ethernet HWaddr 00:0C:29:38:92:7E
    inet addr:10.8.15.49 Bcast:10.8.15.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe38:927e/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:3516 errors:0 dropped:0 overruns:0 frame:0
    TX packets:3186 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:1520847 (1.4 MiB) TX bytes:383384 (374.3 KiB)
    eth2 Link encap:Ethernet HWaddr 00:0C:29:38:92:88
    inet addr:10.8.16.1 Bcast:10.8.16.255 Mask:255.255.255.0
    inet6 addr: fe80::20c:29ff:fe38:9288/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:0 errors:0 dropped:0 overruns:0 frame:0
    TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:0 (0.0 b) TX bytes:830 (830.0 b)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:136683 errors:0 dropped:0 overruns:0 frame:0
    TX packets:136683 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:30853824 (29.4 MiB) TX bytes:30853824 (29.4 MiB)
    ifconfig from ora-vms1
    10.8.15.0 Link encap:Ethernet HWaddr 00:0C:29:D5:97:F1
    inet addr:10.8.15.47 Bcast:10.8.15.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:21463 errors:0 dropped:1 overruns:0 frame:0
    TX packets:23017 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:10033833 (9.5 MiB) TX bytes:12175262 (11.6 MiB)
    10.8.15.0:0 Link encap:Ethernet HWaddr 00:0C:29:D5:97:F1
    inet addr:10.8.15.1 Bcast:10.8.15.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    eth1 Link encap:Ethernet HWaddr 00:0C:29:D5:97:F1
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:47343 errors:0 dropped:0 overruns:0 frame:0
    TX packets:48885 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:23261224 (22.1 MiB) TX bytes:22212168 (21.1 MiB)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:5858 errors:0 dropped:0 overruns:0 frame:0
    TX packets:5858 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:2749072 (2.6 MiB) TX bytes:2749072 (2.6 MiB)
    I don't understand why, but the VMM has placed this entry into each server's /etc/sysconfig/network-scripts directory:
    ifcfg-10.8.15.0
    Contents are:
    #This file was dynamically created by OVM manager. Please Do not edit
    DEVICE=10.8.15.0
    TYPE=Bridge
    BOOTPROTO=dhcp
    ONBOOT=yes
    DELAY=0
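That ifcfg file defines a network bridge named after the subnet. To see which physical interfaces are enslaved to the bridge on a VM Server, a quick check can help (the output below is illustrative, not captured from this system):
# brctl show
bridge name     bridge id               STP enabled     interfaces
10.8.15.0       8000.000c29d597f1       no              eth1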
    I am able to start the guest with no issue. It has been configured with IP address 10.8.15.101, netmask 255.255.255.0. Its gateway is 10.8.15.50, the same network configuration as all the other servers.
    The important parts of ifconfig output from the guest (which I must manually type since Launch Console provides no copy/paste functionality) are:
    eth0 Ethernet, HW Addr: 00:21:f6:00:00:11
    inet addr: 10.8.15.101 Bcast: 10.8.15.255 Mask: 255.255.255.0
    inet6 ...
    UP BROADCAST RUNNING MULTICAST ...
    RX Packets: 11 errors:0 dropped:0 overruns:0 frame:0
    TX Packets: 101 errors:0 dropped:0 overruns:0 carrier:0
    RX bytes:620 (620.0 b) TX bytes:10592 (10.3 KiB)
    Interrupt:14
    Ping to 10.8.15.47 (the server on which this guest is running) is successful
    All other ping attempts fail.
    This is where I am, and why I'm confused. Can anyone help me understand why this guest can only talk to its "host?"
    Thank you.

  • Multi-Org impact on Oracle CRM modules especially on Oracle Service

    Multi-Org impact on Oracle CRM modules especially on Oracle Service
    ====================================================
I have been searching for any information (notes, whitepapers, presentations) on the impact of a multi-org implementation on the Oracle Service module, and so far I have not been able to find any, either on Metalink or on the internet.
Do any of you have any input on this? Please share it if you do.
Basically,
I am looking for the kind of security applied to the SR creation form, the Debrief form, and the Charges form when multi-org is enabled.
I also tried to test this in our instance and found that it seems to have no impact.
    Gana

Hi,
Yes, indeed there is an impact of multi-org on the Service module in 11i.
Everything is integrated now.
Everything depends on the MO: Operating Unit profile option and the setup you have done.
1)
Security on the SR creation form:
You can implement the security, but for that you have to set things up accordingly and follow the process.
If you create two responsibilities with different MO profile option values, then neither will be able to see the other's data.
Note:
If you are using the instance to generate the SR, then you have to make sure that the ITEM you are using is assigned to the operating unit set in the MO profile option of that responsibility.
2)
Debrief form:
As you must know, for debrief to work you have to set up Service Activities.
This is where you can define the security:
1) Create a service activity,
2) Map it to billing types,
3) Map the billing types to an Order Management header and line type.
This is the place where you specify the operating unit.
When users log in and open the debrief form, they will only be able to see the service activities mapped to the operating unit set in their MO profile option.
3)
Charges:
The same rule as for debrief applies to the Charges tab.
Here you will only be able to see the service activities mapped to the operating unit set in the MO profile option.
If you want ITEM-level security, you will only be able to see the items assigned to the operating unit set in the MO profile option.
Hope this clears your doubt.
If you want more clarification, you can ask me.
    Regds,
    Vikram

  • Pre-loading Oracle text in memory with Oracle 12c

    There is a white paper from Roger Ford that explains how to load the Oracle index in memory : http://www.oracle.com/technetwork/database/enterprise-edition/mem-load-082296.html
In our application, on Oracle 12c, we are indexing a big XML field (which is stored as XMLType with SecureFile storage) with the PATH_SECTION_GROUP. If I don't load the $I table (DR$..$I) into memory using the technique explained in the white paper, then I cannot get decent performance (and especially not predictable performance; it looks like performance can fall sharply if the blocks from the TOKEN_INFO column are not in memory).
But after migrating to Oracle 12c, I got a different problem, which I can reproduce: when I create the index, it is relatively small (as seen with ctx_report.index_size), and by applying the technique from the white paper I can pin the DR$ I table in memory. But as soon as I do a ctx_ddl.optimize_index('Index','REBUILD'), the size becomes much bigger and I can't pin the index in memory anymore. Not sure if it is a bug or not.
    What I found as work-around is to build the index with the following storage options:
    ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'YES' );
    ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
so that the token_info column will be stored in a SecureFile. Then I can change the storage of that column to put it in the keep buffer cache, and write a procedure to read the LOB so that it is loaded into the keep cache. The size of the LOB column is more or less the same as when creating the index without the BIG_IO option, but it remains constant even after a ctx_ddl.optimize_index. The procedure to read the LOB and load it into the cache is very similar to the loaddollarR procedure from the white paper.
Because of the SDATA section, there is a new DR table (the $S table) and an IOT on top of it. This is not documented in the white paper (the white paper was written for Oracle 10g). In my case this DR$ S table is heavily used, and the IOT as well, but putting them in the keep cache is not as important as the token_info column of the DR$ I table. A final note: SEPARATE_OFFSETS = 'YES' was very bad in my case; the combined size of the two columns is much bigger than having only the TOKEN_INFO column, and both columns are read.
    Here is an example on how to reproduce the problem with the size increasing when doing ctx_optimize
    1. create the table
    drop table test;
    CREATE TABLE test
(ID NUMBER(9,0) NOT NULL ENABLE,
XML_DATA XMLTYPE)
XMLTYPE COLUMN XML_DATA STORE AS SECUREFILE BINARY XML (tablespace users disable storage in row);
    2. insert a few records
    insert into test values(1,'<Book><TITLE>Tale of Two Cities</TITLE>It was the best of times.<Author NAME="Charles Dickens"> Born in England in the town, Stratford_Upon_Avon </Author></Book>');
    insert into test values(2,'<BOOK><TITLE>The House of Mirth</TITLE>Written in 1905<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    insert into test values(3,'<BOOK><TITLE>Age of innocence</TITLE>She got a prize for it.<Author NAME="Edith Wharton"> Wharton was born to George Frederic Jones and Lucretia Stevens Rhinelander in New York City.</Author></BOOK>');
    3. create the text index
    drop index i_test;
      exec ctx_ddl.create_section_group('TEST_SGP','PATH_SECTION_GROUP');
    begin
      CTX_DDL.ADD_SDATA_SECTION(group_name => 'TEST_SGP', 
                                section_name => 'SData_02',
                                tag => 'SData_02',
                                datatype => 'varchar2');
end;
/
    exec ctx_ddl.create_preference('TEST_STO','BASIC_STORAGE');
    exec  ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace USERS storage (initial 64K)');
    exec  ctx_ddl.set_attribute('TEST_STO','I_INDEX_CLAUSE','tablespace USERS storage (initial 64K) compress 2');
    exec  ctx_ddl.set_attribute ('TEST_STO', 'BIG_IO', 'NO' );
    exec  ctx_ddl.set_attribute ('TEST_STO', 'SEPARATE_OFFSETS', 'NO' );
    create index I_TEST
      on TEST (XML_DATA)
      indextype is ctxsys.context
      parameters('
        section group   "TEST_SGP"
        storage         "TEST_STO"
      ') parallel 2;
    4. check the index size
    select ctx_report.index_size('I_TEST') from dual;
    it says :
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                                104
    TOTAL BLOCKS USED:                                                      72
    TOTAL BYTES ALLOCATED:                                 851,968 (832.00 KB)
    TOTAL BYTES USED:                                      589,824 (576.00 KB)
5. optimize the index
    exec ctx_ddl.optimize_index('I_TEST','REBUILD');
    and now recompute the size, it says
    TOTALS FOR INDEX TEST.I_TEST
    TOTAL BLOCKS ALLOCATED:                                               1112
    TOTAL BLOCKS USED:                                                    1080
    TOTAL BYTES ALLOCATED:                                 9,109,504 (8.69 MB)
    TOTAL BYTES USED:                                      8,847,360 (8.44 MB)
    which shows that it went from 576KB to 8.44MB. With a big index the difference is not so big, but still from 14G to 19G.
6. Workaround: use the BIG_IO option, so that the token_info column of the DR$ I table will be stored in a SecureFile and the size will stay relatively small. Then you can load this column into the cache using a procedure similar to
    alter table DR$I_TEST$I storage (buffer_pool keep);
    alter table dr$i_test$i modify lob(token_info) (cache storage (buffer_pool keep));
rem: now we must read the LOB so that it will be loaded into the keep buffer pool; use the procedure below
    create or replace procedure loadTokenInfo is
      type c_type is ref cursor;
      c2 c_type;
      s varchar2(2000);
      b blob;
buff raw(100); -- dbms_lob.read on a BLOB requires a RAW buffer, not varchar2
      siz number;
      off number;
      cntr number;
    begin
        s := 'select token_info from  DR$i_test$I';
        open c2 for s;
        loop
           fetch c2 into b;
           exit when c2%notfound;
           siz := 10;
           off := 1;
           cntr := 0;
           if dbms_lob.getlength(b) > 0 then
             begin
               loop
                 dbms_lob.read(b, siz, off, buff);
                 cntr := cntr + 1;
                 off := off + 4096;
               end loop;
             exception when no_data_found then
               if cntr > 0 then
                 dbms_output.put_line('4K chunks fetched: '||cntr);
               end if;
             end;
           end if;
        end loop;
end;
/
    Rgds, Pierre

I have been working a lot on this issue recently, and I can give some more info.
First, I totally agree with you: I don't like to use the keep pool and I would love to avoid it. On the other hand, we have a specific use case: 90% of the activity in the DB is done by queueing and dbms_scheduler jobs where response time does not matter. All those processes are probably filling the buffer cache. We have a customer-facing application that uses the text index to search the database: performance is critical for them.
What kind of performance do you have with your application?
In my case, I have learned the hard way that having the index in memory (the DR$I table, in fact) is the key: if it is not, then performance is poor. I find it reasonable to pin the DR$I table in memory, and if you look at competitors, this is what they do: MongoDB explicitly says that the index must be in memory, and with Elasticsearch they use JVMs that are also in memory. And effectively, if you look at the AWR report, you will see that Oracle is continuously accessing the DR$I table; there is a SQL similar to
    SELECT /*+ DYNAMIC_SAMPLING(0) INDEX(i) */    
    TOKEN_FIRST, TOKEN_LAST, TOKEN_COUNT, ROWID    
    FROM DR$idxname$I
    WHERE TOKEN_TEXT = :word AND TOKEN_TYPE = :wtype    
    ORDER BY TOKEN_TEXT,  TOKEN_TYPE,  TOKEN_FIRST
    which is continuously done.
I think the algorithm used by Oracle to keep blocks in the cache is too complex. I just realized that in 12.1.0.2 (which was released last week) there is finally a "killer" feature, the In-Memory parameters, with which you can pin tables or columns in memory, with compression, etc. This looks ideal for the text index; I hope that R. Ford will finally update his white paper :-)
But my other problem was that optimize_index in REBUILD mode caused the DR$I table to double in size: it seems crazy that this was closed as not a bug, but it was, and I can't do anything about it. It is a bug in my opinion, because the CREATE INDEX command and the ALTER INDEX REBUILD command both result in a much smaller index, so why would the guys who developed the optimize function (is it another team, using another algorithm?) make the index two times bigger?
The track I have been following for that is to put the index in a 16K-block tablespace: in this case the space used by the index remains more or less flat (it increases, but much more reasonably). The difficulty here is pinning the index in memory, because the trick from R. Ford's paper no longer works.
What worked:
First set the keep pool to zero and set db_16k_cache_size instead. Then change the storage preference to make sure that everything you want to cache (mostly the DR$I table) goes into the tablespace with the non-standard block size of 16K.
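A minimal sketch of that setup (the sizes, datafile path, and the reuse of the TEST_STO preference from above are illustrative):
alter system set db_keep_cache_size = 0 scope=both;
alter system set db_16k_cache_size = 4G scope=both;
create tablespace ts16k datafile '/u01/oradata/ts16k01.dbf' size 20G blocksize 16k;
exec ctx_ddl.set_attribute('TEST_STO','I_TABLE_CLAUSE','tablespace TS16K')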
Then comes the tricky part: pre-loading the data into the buffer cache. The problem is that with Oracle 12c, Oracle will use direct path reads for full table scans, which basically means that it bypasses the cache and reads directly from file into the PGA! There is an event to avoid that; I was lucky to find it on a blog (I can't remember which one, sorry for the missing credit).
I ended up doing the following; the event 10949 is what avoids the direct path reads issue.
    alter session set events '10949 trace name context forever, level 1';
    alter table DR#idxname0001$I cache;
    alter table DR#idxname0002$I cache;
    alter table DR#idxname0003$I cache;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0001$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0002$I;
    SELECT /*+ FULL(ITAB) CACHE(ITAB) */ SUM(TOKEN_COUNT),  SUM(LENGTH(TOKEN_INFO)) FROM DR#idxname0003$I;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0001$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0002$I ITAB;
    SELECT /*+ INDEX(ITAB) CACHE(ITAB) */  SUM(LENGTH(TOKEN_TEXT)) FROM DR#idxname0003$I ITAB;
It worked. With great relief I expected to take some time off, but there was one last surprise. The command
exec ctx_ddl.optimize_index(idx_name=>'idxname',part_name=>'partname',optlevel=>'REBUILD');
gave the following:
    ERROR at line 1:
    ORA-20000: Oracle Text error:
    DRG-50857: oracle error in drftoptrebxch
    ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
    ORA-06512: at "CTXSYS.DRUE", line 160
    ORA-06512: at "CTXSYS.CTX_DDL", line 1141
    ORA-06512: at line 1
This is very much exactly what is described in Metalink note 1645634.1, but there in the case of a non-partitioned index. The workaround given seemed very logical, but it did not work in the case of a partitioned index. After experimenting, I found out that the bug occurs when the partitioned index is created with the dbms_pclxutil.build_part_index procedure (which enables intra-partition parallelism in the index creation process). This is a very annoying and stupid bug; maybe there is a workaround, but I did not find it on Metalink.
Other points of attention with text index creation (things that surprised me at first!):
- If you use the dbms_pclxutil package, then the ctx_output logging does not work, because the index is created immediately and then populated in the background via dbms_jobs.
- This, in combination with the fact that on a RAC you may not see any activity on the box, can be very frightening: Oracle can choose to start the workers on the other node.
I now understand much better how text indexing works, and I think it is a great technology which can scale via partitioning. But as always, the design of the application is crucial; most of our problems come from the fact that we did not choose the right sectioning (we chose PATH_SECTION_GROUP while XML_SECTION_GROUP is so much better IMO). Maybe later I can convince the devs to change the sectioning, especially because SDATA and MDATA sections are not supported with PATH_SECTION_GROUP (although it seems to work, even though we had one occurrence of a bad result linked to the existence of SDATA in the index definition). Also, the whole problem of mixed structured/unstructured searches is completely solved if one uses XML_SECTION_GROUP with MDATA/SDATA (but of course the app was written for Oracle 10...).
    Regards, Pierre

  • Running Tuxedo8.0 on oracle 8.1.6 and oracle 10.2.0.2.0 on the same machine

    Hello,
    I have oracle 8.1.6 64 bit and oracle 10.2.0.2.0 64 bit
    running on the same hp UX 11i 64-bit machine
    I have installed Tuxedo 8.0 32bit on the same machine.
    I have a working tuxedo 8.0 environment running against the 8.1.6 oracle
    database
    in the Tuxedo .../udataobj/RM file I have for the 8.1.6 environment the string
    Oracle_XA:xaosw:-L${ORACLE_HOME}/lib -lclntsh
    In oracle 8.1.6 home there are directories lib and lib64
    but in oracle 10.2.0.2.0 home there are directories
    lib and lib32
    what string should I use for the oracle 10.2.0.2.0?
    Oracle_XA:xaosw:-L${ORACLE_HOME}/lib32 -lclntsh maybe???
Must the string in the RM file for Oracle always begin with the string
Oracle_XA?
If so, is it possible at all to run the binaries of one Tuxedo installation against two different database versions, given that the RM strings are similar at the beginning but refer to the lib directory in 8.1.6 and to the lib32 directory in 10.2.0.2.0?
Or do I need two installations of the Tuxedo 8.0 binaries on the machine?
For 10.2.0.2.0 I have tried the line (the Ora10_XA name is only an experiment...)
    Ora10_XA:xaosw:-L${ORACLE_HOME}/lib32 -lclntsh
    in the RM file but when starting tuxedo I get:
    $ cat /mnt04/edu/ressu/bin/xa_NULL06092006.trc
    ORACLE XA: Version 10.2.0.1.0. RM name = 'Oracle_XA'.
    111035.16584.0:
    xaogetmod: XAER_INVAL; Invalid xa_info string.
    Any comments on the matter appreciated.
    rgds,
    Jyri

    Jyri,
    The information before the first colon in the $TUXDIR/udataobj/RM file is
    only used by the buildtms, buildserver, and buildclient programs to find the
    line corresponding to the value of the -r option, so it should be OK to
    specify different values for Oracle 8.1.6 and Oracle 10.2.0.2.0.
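For example, the TMS for the 10.2.0.2.0 entry could be built against your Ora10_XA line like this (the output file name is arbitrary):
buildtms -o TMS_ORA10 -r Ora10_XA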
The information specified in the OPENINFO string in the *GROUPS section of
the TUXCONFIG file must be of the form
OPENINFO="ORACLE_XA:Oracle_XA+...."
The strings ORACLE_XA and Oracle_XA cannot be changed, except for case. The
"...." can be replaced with parameters such as
SqlNet=NAME+SesTm=100+LogDir=.+MaxCur=5 or whatever is used in your
application. The "xaogetmod: XAER_INVAL: Invalid xa_info string" error you
are getting is due to an incorrect OPENINFO parameter.
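A minimal sketch of a *GROUPS entry with a well-formed open string (the group name, credentials, and TNS alias below are hypothetical):
*GROUPS
GRP10G
    LMID=SITE1 GRPNO=2 TMSNAME=TMS_ORA10 TMSCOUNT=2
    OPENINFO="ORACLE_XA:Oracle_XA+Acc=P/scott/tiger+SesTm=100+LogDir=.+MaxCur=5+SqlNet=ORCL10"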
If you are running a 32-bit version of Tuxedo, you must link with the
${ORACLE_HOME}/lib32 library on 10gR2; if you are using a 64-bit version of
Tuxedo, you must link with the ${ORACLE_HOME}/lib library. The procedure is
similar for Oracle 8, except that it seems the lib directory may hold the
32-bit libraries in that version of Oracle.
    32-bit and 64-bit binaries cannot be mixed under a single TUXDIR, but it is
    possible to use multiple RMs or multiple versions of the same RM on the same
    machine.
    The syntax of the open string and the list of libraries to link with is
    specified in the "Oracle Database Application Developer's Guide -
    Fundamentals" in the "Developing Applications with Oracle XA" chapter.

  • I want to move the data from oracle 8.0.5 to oracle 10g

Dear gurus, I want to move my data from Oracle 8.0.5 to Oracle 10g. What is the simplest way to do that without loss of data or time?
Thanks and regards

    Since you are on 8.0.5, there is no direct path upgrade for you.
You first need to upgrade to at least 8.1.7.4.1 before you can directly migrate to 10gR2.
You can refer to Metalink note 316889.1 for this purpose.
Another option would be to simply export the data from the 8.0.5 database and import it into the 10g database.
Simply stated, there is no method that will not take time, but there should be no data loss (assuming you are not going to change the character set of the database).
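A minimal sketch of the export/import route (SYSTEM credentials and file names below are illustrative; run exp with the 8.0.5 binaries against the 8.0.5 database, then imp with the 10g binaries against the new database):
exp system/manager FULL=Y FILE=full805.dmp LOG=exp805.log
imp system/manager FULL=Y FILE=full805.dmp LOG=imp10g.log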
