Oracle VM 3.1.1 storage requirements

Hi,
I was told that OVM 3.1.1 absolutely requires external storage to operate. After consulting the hardware requirements for OVM Server and Manager, I didn't find anything about this issue.
Can anyone clarify this point for me?
Thank you.

Yes, local storage is OK as long as A) it's on a supported adapter that presents a /dev/sdX device node (versus something like a cciss device) and B) the local disk stands alone (you will need at least TWO separate disks to do this). Oracle VM cannot share the OS disk with a storage repository; the storage repository MUST be the only partition on the disk.
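A quick sanity check for both conditions from the OVM Server shell (the device names here are examples, not from the original post):

    # Confirm the controller presents plain /dev/sdX device nodes
    cat /proc/partitions
    # Inspect the second, dedicated disk (assumed here to be /dev/sdb);
    # it must hold nothing but the storage repository partition
    fdisk -l /dev/sdb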

Similar Messages

  • OIM Data storage requirements

I was wondering if there are any rules of thumb to determine how much data OIM will generate. Is this dependent on the number of user profiles for any given number of transactions? Also, can this data be broken into different categories such as identity data, provisioning data, audit-trail data, etc.?
We are in the middle of defining our data archival strategy and need to understand the data storage requirements.
    Thanks in advance for your input on this.

user7749985 wrote:
"I was wondering if there are any rules of thumb to determine how much data OIM will generate."
See http://www.oracle.com/technology/products/id_mgmt/oxp/pdf/oracle%20identity%20manager%20sizing%20version%201.3.pdf
"Is this dependent on the number of user profiles for any given number of transactions?"
Yes, it is.

• Oracle ADF EAR 11.1.1 requires EAR 5.0??

    Hi,
I am trying to create a new Oracle ADF application in Eclipse; as soon as I type the application name, an error shows at the top of the dialog saying:
Oracle ADF EAR 11.1.1 requires EAR 5.0
and the Next and Finish buttons are disabled.
Your help is appreciated.

    It seems you are trying to develop an ADF 11.1.1.x application on an unsupported version of WebLogic Server.
    The certification matrix for ADF and WebLogic Server can be found here - http://www.oracle.com/technetwork/developer-tools/jdev/index-091111.html
    For example, if you are developing an ADF 11.1.1.6 application in OEPE, you will want to target a WebLogic Server 10.3.6 installation.
    Thanks,
    Greg

• Oracle 10g on Linux server: hardware requirements

    Hi,
I need to know the hardware requirements for a dedicated Linux server on which Oracle 10g is to be installed. I need the best configuration, including RAM memory. Very urgent. Can you please help me?
    Regards,
    Rajesh

800643 wrote:
"I need to know the hardware requirements for a dedicated Linux server on which Oracle 10g is to be installed. I need the best configuration, including RAM memory."
The answer is: a banana.
The reason is: because a motorcycle has no doors.
And this is as valid an answer as any other to your question. You can run Oracle 10g XE on a notebook. You can run Oracle 10g Enterprise on a 64-CPU HP Superdome. You can run Oracle 10g RAC on 60 blade servers, each with 8 dual-core CPUs and 64 GB RAM.
So what is the best configuration? That depends entirely on WHAT you are planning to do with the Oracle RDBMS product. And it should be a concern that your approach treats Oracle RDBMS like Microsoft Excel, assuming you can quickly and easily determine an optimal h/w and RAM config for it. Oracle RDBMS is not a desktop product like Excel. You do not fit h/w to it. You fit h/w to your requirements - as you would fit your choice of Oracle products to your requirements.
"Very urgent." We do not do "very urgent" here. This is a volunteer forum, where paid professionals provide free opinions, advice and recommendations, without any warranty or guarantee, in a forum that provides no SLAs or escalation procedures.
You want "urgent" support? Then pay for it and use http://support.oracle.com

  • What is the video file size I can use for Telepresence Content Server to calculate storage required?

    Hi,
I would like to calculate the storage required for video files on the Telepresence Content Server, MPEG format. What is the per-minute storage required for a video file with and without compression? What video compression rates are supported on the TelePresence Content Server and MXE 3500?
    Thanks
    Sue

    Hi,
The following threads might be helpful to you...
    http://scn.sap.com/thread/1637092
    http://scn.sap.com/thread/1284704
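As a rough sanity check in the meantime (a plain bitrate calculation, not a TCS-specific figure): storage per minute ≈ recording bitrate in Mbps × 60 seconds ÷ 8 bits per byte. For example, a 2 Mbps recording consumes about 2 × 60 ÷ 8 = 15 MB per minute, and a 4 Mbps recording about 30 MB per minute; the exact supported rates for the Content Server and MXE 3500 are in their product documentation.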
    BR,
    Anirban

• Is there a minimum free storage requirement for SQL Server 2K8 R2 to run?

Greetings,
we have a SQL Server 2K8 running on a server that has less than 10 GB of free space.
Do we have a problem with that? What are the storage requirements for SQL Server to keep running normally?
    Thank you in advance

    Hi LmmLopes,
If you are talking about 10 GB of free space on the primary drive, I think you should be able to run your SQL Server without any issues.
    Please find the chart below for the Hard disk requirements.
    Hard Disk Space Requirements (32-Bit and 64-Bit)
    During installation of SQL Server 2008, Windows Installer creates temporary files on the system drive. Before you run Setup to install or upgrade SQL Server, verify that you have at least 2.0 GB of available disk space on the system drive for these files.
    This requirement applies even if you install SQL Server components to a non-default drive.
    Actual hard disk space requirements depend on your system configuration and the features that you decide to install. The following table provides disk space requirements for SQL Server 2008 components:
Feature                                                        Disk space requirement
Database Engine and data files, Replication, Full-Text Search  280 MB
Analysis Services and data files                                90 MB
Reporting Services and Report Manager                          120 MB
Integration Services                                           120 MB
Client Components                                              850 MB
SQL Server Books Online and SQL Server Compact Books Online    240 MB
But if you are talking about 10 GB on the entire server, then you need to consider the database size and its expected growth.
You can consider adding an additional disk drive and using it to store the data files.
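If you want to check the current free space from T-SQL, here is a minimal sketch (it assumes SQL Server 2008 R2 SP1 or later, where sys.dm_os_volume_stats is available):

    -- Free space on every volume that hosts a database file
    SELECT DISTINCT
           vs.volume_mount_point,
           vs.available_bytes / 1048576 AS free_mb
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;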
    Hope this answers your question.
    Regards
    Sathish  S N

  • Smart Storage requires hard drives before configuration

    (From Davin Oishi)
Issue/Question: Unit powers up, but I can't log in to the web administration GUI.
Resolution: The Cisco Smart Storage products require at least one hard drive to be installed in the system before configuration is allowed.
Issue/Question: Why does Smart Storage require a hard drive to be installed before configuration can be done, when my previous NSS3000 did not?
Resolution: The architecture of the two product families is different. While configuring a device without drives does provide the ability to configure basic settings before deployment, the actual usefulness is still limited until the device has drives. Smart Storage takes advantage of the drives by placing configuration and applications on the hard drives. If additional drives are installed after initial setup, configuration information is striped across the additional drives as you add them. This allows for better utilization of Smart Storage memory, improves system performance, and allows for higher scalability.

With eSATA not an option for all 3 drives, I'll use eSATA for the 2 TB drive and daisy-chain the other two with FW800.
I'll use the 500 GB drive for Time Machine, but which would be the better plan?
A) Put stills on the 1 TB drive, use the 2 TB drive as the FCE scratch drive, and back up the compressed camera data on DVDs.
B) Use the 1 TB drive for camera data and then use either folders or partitions on the 2 TB drive for camera imports and FCE scratch space.
In either plan the AVCHD files get converted at log and transfer, and the H.264 data gets converted via third-party conversion software. In both cases the converted files end up on the scratch drive.
I should also mention that I plan to use QT non-self-contained exports and keep them on the internal drive. I am currently keeping disk images of finished projects on an external hard drive, unsure what to do with them; maybe I need another drive, or should I not bother with saving disk images?

  • Does Manage Storage require wifi?

Shock of shocks--I discovered today that Manage Storage requires WiFi to be on! Is that right?
I'm talking about getting the list of apps on the device and seeing which are the memory hogs--not doing anything with iCloud. In the past, the list would always come up regardless of whether WiFi was on or not.

    It's weird because on my iPad Mini 2, after I installed iOS 8.1.3, I was unable to get a list of apps on the device and their space consumption.  I had to turn wifi on to see it!
    Is there a bug?  I've had instances on different iPads, this one, too, in which the phrases about iCloud are not in their proper place.  I'm wondering if the Manage Storage button that was displayed was the one for iCloud and not the device itself.  It was showing the device's capacity and remaining space--not the 5 gb for iCloud.  I also didn't have iCloud set up (still don't).
    Today, fortunately, it worked even with wifi off. 

  • IOP application storage requirements in Oracle

Are there any recommendations regarding database space (size in gigabytes) for an IOP dedicated schema? I did not find any recommendations in the Install Guides (pdf) for 4.03 or 11.1.2.0.
    I have seen Oracle backups that range from 2GB to 80GB, but don't know how large that is when stored in an Oracle database.
    Is there a recommended starting setting for the dedicated Oracle Schema for an IOP application?
    What is a good starting "grow by" setting for the IOP dedicated Oracle Schema?
    If there are any documents available which contain recommended specifications for the IOP dedicated Schema, please direct me to them.
    Thank you.

We do not have a sizing document for now; it is a work in progress. The main focus should be on the number of member combinations (or the number of blocks) and the size of each block. This is also multiplied by the number of scenarios in the system. Sometimes the application rowsources contain a lot of data and replicate data from external sources. To avoid that, consider using a table-driven or custom rowsource, in particular if the rowsource is read-only. Also, frequent purging of old scenarios should be considered; there are commands to achieve this.

  • Oracle 10g RAC using ASM - Storage Issue

I have an issue related to Oracle 10g RAC.
I have a 2-node cluster, each node being a Dell 2850 server with RHEL 4.0.
I have EMC CX300 SAN storage with the following partitions:
/orasoft      10 GB   OCFS2 file system
/oracrs        2 GB   OCFS2 file system
/orabackup   100 GB   OCFS2 file system
    The datafiles are on ASM which is not directly visible in OS.
I have a common Oracle Home installed in /orasoft/db_1, which is shared by both nodes in the cluster.
I recently faced an issue related to the EMC storage.
The /orasoft partition displays 1.4 GB of space available using the df command.
With both nodes sharing the common Oracle Home (/orasoft/db_1), whenever I try to touch a file I get the error "No space left on device". I am unable to start any service for the same reason.
Is this setup correct?
Can anyone help me with this storage issue?

I need a clarification here... what do you mean by "storage system"? Do you mean a server/node or the SAN storage system? If you are referring to a server/node's local storage, then it would NOT be possible to use it for RAC, since the disk space has to be shared among the nodes.
    Here is what you can do:
- Create two partitions/devices (for example Disk_1 and Disk_2) in the SAN storage
- Create an ASM disk group which mirrors Disk_1 to Disk_2.
Again, please note that the partitions have to be visible and accessible read/write from both nodes/servers.
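For illustration, a minimal sketch of such a disk group (the device paths are assumptions; use whatever shared devices your SAN presents to both nodes):

    -- NORMAL redundancy mirrors each extent between the two failure groups
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/raw/raw1' NAME disk_1
      FAILGROUP fg2 DISK '/dev/raw/raw2' NAME disk_2;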
    HTH
    Thanks
    Chandra Pabba

• CRS installation problem in Oracle 10g RAC with NAS storage

    Hi,
for my practice I am trying to install Oracle 10gR2 on RHEL5 64-bit on my laptop.
During the CRS installation I got stuck with the below error while executing root.sh on node1.
    Error:
    +++++
    Setting the permissions on OCR backup directory
    Setting up NS directories
    PROT-1: Failed to initialize ocrconfig
    Failed to upgrade Oracle Cluster Registry configuration
    ocrconfig.log ;
    ++++++++++++
    NFS file system /u01 mounted with incorrect options
    [  OCROSD][4265610768]WARNING:Expected NFS mount options: wsize>=32768,rsize>=32768,hard,(noac | actimeo=0 | acregmin=0,acregmax=0,acdirmin=0,acdirmax=0
    [  OCROSD][4265610768]utopen:6m'': OCR location [share/storage/ocr] configured is not a valid storage type. Rturn code [37].
As per Metalink, I found that this problem is fixed by Patch 4679769.
    # Patch Installation Instructions:
    # To apply the patch, unzip the PSE container file:
    # p4679769_10201_LINUX.zip
    # Set your current directory to the directory where the patch
    # is located:
    # % cd 4679769
    # Copy the clsfmt.bin binary to the $ORACLE_HOME/bin directory where
    # clsfmt is being run:
    # % cp $ORACLE_HOME/bin/clsfmt.bin $ORACLE_HOME/bin/clsfmt.bin.bak
    # % cp clsfmt.bin $ORACLE_HOME/bin/clsfmt.bin
    # Ensure permissions on the clsfmt.bin binary are correct:
    # % chmod 755 $ORACLE_HOME/bin/clsfmt.bin
    3. Run the root.sh script and proceed with the installation.
My question is: I have not installed the Database yet; I am only trying to install CRS, but this readme.txt says we need to replace the clsfmt.bin file in $ORACLE_HOME/bin.
However, I do not have a bin directory under my ORACLE_HOME. Please clear up my doubt about applying this patch...
    Regards,
    Mugunth

Also, your clusterware installation installs to an ORACLE_HOME.
Oracle only makes a distinction when it has to be clear whether you mean a clusterware home or a database home.
Normally, if a patch refers to $ORACLE_HOME (and the patch can be used for clusterware & database), it just means the installation directory of the Oracle software installed.
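Separately, the ocrconfig.log excerpt above also complains about the NFS mount options before anything else. A hypothetical /etc/fstab entry matching the options the log expects (the server name and export path are placeholders) would look like:

    # NFS mount for CRS files: rsize/wsize >= 32768, hard, actimeo=0
    nas-server:/export/crs  /u01  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0  0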
    Sebastian

• APEX app using Oracle Text to index pages that require authorization

    Hi Gurus and APEX Dev team
My team needs to develop an APEX app that will index all our documents spread across various servers. Some of the documents require single sign-on access (e.g. KIX.oraclecorp.com) and some require other authorization methods (e.g. Metalink). The question is: is it possible to index the pages that require authorization using Oracle Text? If yes, how? I have implemented the demo app, which can index pages that do not require authorization.
    Thanks a million
    regards
    Bala

    Hello,
Unless I misunderstand you, the fact that the pages require authentication doesn't really matter; it is the underlying data you want to index, correct? If so, then you would index them in exactly the same way that you would index any table data using Oracle Text/interMedia.
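For example, a minimal sketch of indexing a document column with Oracle Text (the table and column names here are assumptions):

    -- CONTEXT index over the stored document content
    CREATE INDEX doc_idx ON documents (doc_content)
      INDEXTYPE IS CTXSYS.CONTEXT;

    -- Query the indexed content
    SELECT id
      FROM documents
     WHERE CONTAINS(doc_content, 'oracle', 1) > 0;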
    John.
    Blog: http://jes.blogs.shellprompt.net
    Work: http://www.apex-evangelists.com
    Author of Pro Application Express: http://tinyurl.com/3gu7cd
    REWARDS: Please remember to mark helpful or correct posts on the forum, not just for my answers but for everyone!

• Oracle 10g installation error: Network Configuration requirements ... failed

    Checking Network Configuration requirements ...
    Check complete. The overall result of this check is: Failed <<<<
    Problem: The install has detected that the primary IP address of the system is DHCP-assigned.
    Recommendation: Oracle supports installations on systems with DHCP-assigned IP addresses; However, before you can do this, you must configure the Microsoft LoopBack Adapter to be the primary network adapter on the system. See the Installation Guide for more details on installing the software on systems configured with DHCP.

Are you installing Oracle 10g R2? If this is the case then, for DHCP network environments, you need to create a Microsoft Loopback Adapter.
    Microsoft Loopback Adapter creation:
    ===========================
Step 1: Programs -> Control Panel -> Add Hardware.
Step 2: In the Add Hardware wizard, click 'Next', select the option 'Yes, I have already connected the hardware', and click 'Next'.
Step 3: Select the last option, 'Add a new hardware device', from the 'Installed hardware' window and click 'Next'.
Step 4: Select the last option, 'Install the hardware that I manually select from a list (Advanced)', and click 'Next'.
Step 5: Select 'Network adapters' in the 'Common hardware types' list and click 'Next'.
Step 6: Select 'Microsoft Loopback Adapter', which you can see on the right-hand side of the frame.
Step 7: Proceed with all defaults to finish, and then try installing Oracle.
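As a command-line alternative to the wizard (an assumption: this needs the Microsoft devcon.exe utility, which is not installed by default), the same adapter can be added with:

    rem Installs the Microsoft Loopback Adapter non-interactively
    devcon install %windir%\inf\netloop.inf *MSLOOP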
    I hope this will suffice.
    Thanks
    Venugopal

  • Oracle RAC with QFS shared storage going down when one disk fails

    Hello,
I have an Oracle RAC in my testing environment. The configuration follows:
Nodes: V210
Shared storage: A5200
#clrg status
Group Name           Node Name   Suspended   Status
rac-framework-rg     host1       No          Online
                     host2       No          Online
scal-racdg-rg        host1       No          Online
                     host2       No          Online
scal-racfs-rg        host1       No          Online
                     host2       No          Online
qfs-meta-rg          host1       No          Online
                     host2       No          Offline
rac_server_proxy-rg  host1       No          Online
                     host2       No          Online
#metastat -s racdg
racdg/d200: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
        Device   Start Block   Dbase   Reloc
        d3s0     0             No      No
racdg/d100: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
        Device   Start Block   Dbase   Reloc
        d2s0     0             No      No
#more /etc/opt/SUNWsamfs/mcf
racfs                   10  ma  racfs  -  shared
/dev/md/racdg/dsk/d100  11  mm  racfs  -
/dev/md/racdg/dsk/d200  12  mr  racfs  -
When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now #clrg status shows the output below.
Group Name           Node Name   Suspended   Status
rac-framework-rg     host1       No          Pending online blocked
                     host2       No          Pending online blocked
scal-racdg-rg        host1       No          Online
                     host2       No          Online
scal-racfs-rg        host1       No          Online
                     host2       No          Pending online blocked
qfs-meta-rg          host1       No          Offline
                     host2       No          Offline
rac_server_proxy-rg  host1       No          Pending online blocked
                     host2       No          Pending online blocked
CRS is not started on either node. I would like to know if anybody has faced this kind of problem when using QFS on a diskgroup. When one disk fails, Oracle is not supposed to go offline, as the other disk is still working; also, my QFS configuration is supposed to mirror these two disks!
    Many thanks in advance
    Ushas Symon

I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability; it relies on the underlying volume manager (VM) or array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice.
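For illustration, a hypothetical Solaris Volume Manager sketch for the racdg diskset above (the slice names are assumptions) that builds such a mirrored metadevice:

    # Two single-slice submirrors, then mirror d100 on top of them
    metainit -s racdg d101 1 1 /dev/did/rdsk/d2s0
    metainit -s racdg d102 1 1 /dev/did/rdsk/d3s0
    metainit -s racdg d100 -m d101
    metattach -s racdg d100 d102
    # Then point the /etc/opt/SUNWsamfs/mcf data entry at the mirror d100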
    Tim

  • Improve XML readability in Oracle 11g for binary XMLType storage for huge files

I have one requirement in which I have to process huge XML files. That means there might be around 1000 xml files, and the whole size of these files would be around 2 GB.
What I need is to store all the data in these files in my Oracle DB. For this I have used SQL*Loader for bulk uploading of all my XML files to my DB, where they are stored as binary XMLType. Now I need to query these files and store the data in relational tables. For this I have used XMLTable XPath queries. Everything is fine when I query a single xml file within my DB, but querying all those files takes too much time, which is not acceptable.
    Here's my one sample xml content:
    <ABCD>
      <EMPLOYEE id="11" date="25-Apr-1983">
        <NameDetails>
          <Name NameType="a">
            <NameValue>
              <FirstName>ABCD</FirstName>
              <Surname>PQR</Surname>
              <OriginalName>TEST1</OriginalName>
              <OriginalName>TEST2</OriginalName>
            </NameValue>
          </Name>
          <Name NameType="b">
            <NameValue>
              <FirstName>TEST3</FirstName>
              <Surname>TEST3</Surname>
            </NameValue>
            <NameValue>
              <FirstName>TEST5</FirstName>
              <MiddleName>TEST6</MiddleName>
              <Surname>TEST7</Surname>
              <OriginalName>JAB1</OriginalName>
            </NameValue>
            <NameValue>
              <FirstName>HER</FirstName>
              <MiddleName>HIS</MiddleName>
              <Surname>LOO</Surname>
            </NameValue>
          </Name>
          <Name NameType="c">
            <NameValue>
              <FirstName>CDS</FirstName>
              <MiddleName>DRE</MiddleName>
              <Surname>QWE</Surname>
            </NameValue>
            <NameValue>
              <FirstName>CCD</FirstName>
              <MiddleName>YTD</MiddleName>
              <Surname>QQA</Surname>
            </NameValue>
            <NameValue>
              <FirstName>DS</FirstName>
              <Surname>AzDFz</Surname>
            </NameValue>
          </Name>
        </NameDetails>
      </EMPLOYEE >
    </ABCD>
Please note that this is just one small record inside one big xml. Each xml contains around 5000 similar records. Similarly, there are more than 400 files, each about 4 MB in size.
    My xmltable query :
    SELECT t.personid,n.nametypeid,t.titlehonorofic,t.firstname,
            t.middlename,
            t.surname,
            replace(replace(t.maidenname, '<MaidenName>'),'</MaidenName>', '#@#') maidenname,
            replace(replace(t.suffix, '<Suffix>'),'</Suffix>', '#@#') suffix,
            replace(replace(t.singleStringName, '<SingleStringName>'),'</SingleStringName>', '#@#') singleStringName,
            replace(replace(t.entityname, '<EntityName>'),'</EntityName>', '#@#') entityname,
            replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
    FROM xmlperson p,master_nametypes n,
             XMLTABLE (
              --'ABCD/EMPLOYEE/NameDetails/Name/NameValue'
              'for $i in ABCD/EMPLOYEE/NameDetails/Name/NameValue        
               return <row>
                        {$i/../../../@id}
                         {$i/../@NameType}
                         {$i/TitleHonorific}{$i/Suffix}{$i/SingleStringName}
                        {$i/FirstName}{$i/MiddleName}{$i/OriginalName}
                        {$i/Surname}{$i/MaidenName}{$i/EntityName}
                    </row>'
            PASSING p.filecontent
            COLUMNS
                    personid     NUMBER         PATH '@id',
                    nametypeid   VARCHAR2(255)  PATH '@NameType',
                    titlehonorofic VARCHAR2(4000) PATH 'TitleHonorific',
                     firstname    VARCHAR2(4000) PATH 'FirstName',
                     middlename  VARCHAR2(4000) PATH 'MiddleName',
                    surname     VARCHAR2(4000) PATH 'Surname',
                     maidenname   XMLTYPE PATH 'MaidenName',
                     suffix XMLTYPE PATH 'Suffix',
                     singleStringName XMLTYPE PATH 'SingleStringName',
                     entityname XMLTYPE PATH 'EntityName',
                    originalName XMLTYPE        PATH 'OriginalName'
                    ) t where t.nametypeid = n.nametype and n.recordtype = 'Person'
But this is taking too much time when querying all that data. The result set of this query would return millions of rows. I tried to index the table using this query:
    CREATE INDEX myindex_xmlperson on xml_files(filecontent) indextype is xdb.xmlindex parameters ('paths(include(ABCD/EMPLOYEE//*))');
    My Database version :
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production"
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
The index is created, but still there is no improvement in performance. It takes more than 20 minutes to query even a set of 10 similar xml files. Now you can imagine how long it will take to query all 1000 xml files.
Could someone please suggest how to improve the performance of my database? Since I am new to this, I am not sure whether I am doing it the proper way. If there is a better solution, please suggest it. Your help will be greatly appreciated.

Hi Odie,
I tried to run your code through all the xml files, but it is taking too much time. It had not finished even after 3 hours.
When I tried a single insert-select statement for one single xml, it worked, but still in the range of ~10 sec.
Please find below the execution plan for one single xml file with your code.
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 2771779566"
    "| Id  | Operation                                     | Name                                     | Rows   | Bytes | Cost (%CPU)| Time     |"
    "|   0 | INSERT STATEMENT                   |                                              |   499G |   121T |   434M  (2) |999:59:59  |"
    "|   1 |  LOAD TABLE CONVENTIONAL    | WATCHLIST_NAMEDETAILS  |            |           |                 |                 |"
    "|   2 |   SORT AGGREGATE                   |                                             |     1      |     2    |                 |          |"
    "|   3 |    XPATH EVALUATION                 |                                             |             |          |                 |          |"
    "|   4 |   SORT AGGREGATE                   |                                             |     1      |     2    |                 |          |"
    "|   5 |    XPATH EVALUATION                 |                                             |             |          |                 |          |"
    "|   6 |   SORT AGGREGATE                   |                                             |     1       |     2   |                 |          |"
    "|   7 |    XPATH EVALUATION                 |                                             |              |         |                 |          |"
    "|   8 |   SORT AGGREGATE                   |                                             |     1        |     2  |                 |          |"
    "|   9 |    XPATH EVALUATION                 |                                             |              |         |                 |          |"
    "|  10 |   NESTED LOOPS                       |                                             |   499G    | 121T |   434M (2) | 999:59:59 |"
    "|  11 |    NESTED LOOPS                      |                                             |    61M     |  14G |  1222K (1) | 04:04:28 |"
    "|  12 |     NESTED LOOPS                     |                                             | 44924      |  10M |    61   (2) | 00:00:01 |"
    "|  13 |      MERGE JOIN CARTESIAN      |                                             |     5         | 1235 |     6   (0) | 00:00:01 |"
    "|* 14 |       TABLE ACCESS FULL          | XMLPERSON                        |     1          |  221 |     2   (0) | 00:00:01 |"
    "|  15 |       BUFFER SORT                     |                                             |     6          |  156 |     4   (0) | 00:00:01 |"
    "|* 16 |        TABLE ACCESS FULL         | MASTER_NAMETYPES        |     6          |  156 |     3   (0) | 00:00:01 |"
    "|  17 |      XPATH EVALUATION             |                                             |                |         |               |          |"
    "|* 18 |     XPATH EVALUATION              |                                             |               |          |               |          |"
    "|  19 |    XPATH EVALUATION               |                                              |               |         |              |          |"
    "Predicate Information (identified by operation id):"
    "  14 - filter(""P"".""FILENAME""='PFA2_95001_100000_F.xml')"
    "  16 - filter(""N"".""RECORDTYPE""='Person')"
    "  18 - filter(""N"".""NAMETYPE""=CAST(""P1"".""C_01$"" AS VARCHAR2(255) ))"
    "Note"
    "   - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)"
Please note that this is for a single xml file. I have more than 400 similar files in the same table.
And for yours as well as Jason's question:
"What are you trying to accomplish with
replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
originalName XMLTYPE PATH 'OriginalName'"
"Like Jason, I also wonder what's the purpose of all those XMLType projections and strange replaces in the SELECT clause."
What I was trying to achieve was a table containing separate rows for all the multi-item child nodes of this particular xml.
But since there was an error because of multiple child nodes like 'OriginalName' under the 'NameValue' node, I tried this script to insert those values by providing a delimiter and replacing the tag names.
Please see the link for more details - http://stackoverflow.com/questions/16835323/construct-xmltype-query-to-store-data-in-oracle11g
This was the execution plan for one single xml file with my code:

Plan hash value: 2851325155

| Id  | Operation                               | Name                   | Rows | Bytes | Cost (%CPU)| Time     |    TQ  | IN-OUT | PQ Distrib |
|   0 | SELECT STATEMENT                        |                        | 7487 | 1820K |    37  (3) | 00:00:01 |        |        |            |
|*  1 |  HASH JOIN                              |                        | 7487 | 1820K |    37  (3) | 00:00:01 |        |        |            |
|*  2 |   TABLE ACCESS FULL                     | MASTER_NAMETYPES       |    6 |   156 |     3  (0) | 00:00:01 |        |        |            |
|   3 |   NESTED LOOPS                          |                        | 8168 | 1778K |    33  (0) | 00:00:01 |        |        |            |
|   4 |    PX COORDINATOR                       |                        |      |       |            |          |        |        |            |
|   5 |     PX SEND QC (RANDOM)                 | :TQ10000               |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | P->S   | QC (RAND)  |
|   6 |      PX BLOCK ITERATOR                  |                        |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | PCWC   |            |
|*  7 |       TABLE ACCESS FULL                 | XMLPERSON              |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | PCWP   |            |
|   8 |    COLLECTION ITERATOR PICKLER FETCH    | XQSEQUENCEFROMXMLTYPE  | 8168 | 16336 |    29  (0) | 00:00:01 |        |        |            |

Predicate Information (identified by operation id):
   1 - access("N"."NAMETYPE"=CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/*/@NameType'),0,0,20971520,0),50,1,2) AS VARCHAR2(255)))
   2 - filter("N"."RECORDTYPE"='Person')
   7 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
Note
   - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
Please let me know whether this helps.
My intention is to save the details in the xml into different relational tables so that I can easily query them from my application. I have many similar queries which insert the xml values into different tables, like the one I mentioned here. I was thinking of creating a stored procedure to insert all these values into the relational tables once I receive the xml files, but even a single query is taking too much time to complete. Could you please help me in this regard? Waiting for your valuable feedback.
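One avenue worth testing (a sketch only; the index and content-table names are assumptions, and as far as I know a column can carry only one XMLIndex, so the earlier unstructured index would have to be dropped or extended) is a structured XMLIndex, which projects the repeating NameValue nodes into a relational content table so the optimizer can rewrite the XMLTABLE query against it instead of re-parsing the binary XML for every row:

    -- Structured XMLIndex over the repeating NameValue nodes
    CREATE INDEX xmlperson_sxi ON xmlperson (filecontent)
      INDEXTYPE IS XDB.XMLINDEX
      PARAMETERS ('XMLTABLE xmlperson_sxi_tab
                     ''/ABCD/EMPLOYEE/NameDetails/Name/NameValue''
                   COLUMNS
                     firstname VARCHAR2(4000) PATH ''FirstName'',
                     surname   VARCHAR2(4000) PATH ''Surname''');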
