BI Applications on Exadata?

Does anyone have experience with implementing Oracle Business Intelligence Applications on Exadata (V2, preferably)?
Hemant K Chitale

For Response Time.
However, we don't have the budget for Exadata currently. If and when we begin an implementation we will begin with a SAN (and a single Linux DB server) for a "limited" implementation and then consider Exadata later.
Hemant K Chitale

Similar Messages

  • Oracle Exadata Vs Oracle database machine

    Hi,
    I would like to know the difference between Oracle Exadata and the Oracle Database Machine. From the Oracle website I can see that both are used for high-volume database applications.
    Can somebody help?
    Thanks
    Rajesh

    user12122325 wrote:
    Hi,
    I would like to know the difference between Oracle Exadata and the Oracle Database Machine. From the Oracle website I can see that both are used for high-volume database applications.
    I am not sure how you missed the product pages for both machines. Anyway, Oracle Exadata (Version 1) was aimed at DWH applications, while Exadata Version 2, the Database Machine, also targets OLTP systems.
    [Sun Oracle Database Machine|http://www.oracle.com/database/database-machine.html]
    [Oracle Exadata|http://www.oracle.com/database/exadata.html]
    HTH
    Aman....

  • Best Practices on Smart Scans

    For Exadata X2-2, is there a best-practices document on enabling smart scans for all the application code?

    We cover more in our book, but here are the key points:
    1) Smart scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan).
    2) Additionally, smart scans require a direct path read to happen (reads go directly to the PGA, bypassing the buffer cache) - this is automatic for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size) and how many blocks of the segment are already cached. If you want to force the use of serial direct path reads for your serial sessions, you can set _serial_direct_read = ALWAYS.
    3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans and any single row/single block lookups. So if migrating an old DW/reporting application to Exadata, then you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc, not OLTP style single/a few row lookups).
    Ideal execution plan for taking advantage of smart scans for reporting would be:
    1) accessing only required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
    2) full scan the partitions (which allows smart scans to kick in)
    2.1) no index range scans (single block reads!) and ...
    3) joins all the data with hash joins, propagating results up the plan tree to next hash join etc
    3.1) This allows bloom filter predicate pushdown to cell to pre-filter rows fetched from probe row-source in hash join.
    So, simple stuff really - and many of your every-day-optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs a nested loop with some index. Of course this was a broad generalization, your mileage may vary.
    Even though DWs and reporting apps benefit greatly from smart scans and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to flash cache. All your OLTP workloads, ERP databases etc still need all their indexes as before Exadata (with the exception of any special indexes which were created for speeding up only some reports, which can take better advantage of smart scans now).
    Note that there are many DW databases out there which are not used just only for brute force reporting and analytics, but also for frequent single row lookups (golden trade warehouses being one example or other reference data). So these would likely still need the indexes to support fast single (a few) row lookups. So it all comes from the nature of your workload, how many rows you're fetching and how frequently you'll be doing it.
    And note that the smart scans only make data access faster, not sorts, joins, PL/SQL functions coded into select column list or where clause or application loops doing single-row processing ... These still work like usual (with exception to the bloom filter pushdown optimizations for hash-join) ... Of course when moving to Exadata from your old E25k you'll see speedup as the Xeons with their large caches are just fast :-)
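    A quick way to see whether a statement was actually offloaded is to check the smart-scan session statistics after running your query in the same session; a minimal sketch (the hidden parameter should be tested before any wider use):

    ```sql
    -- Optionally force serial direct path reads first (hidden parameter,
    -- use with care):
    alter session set "_serial_direct_read" = always;

    -- Non-zero values here indicate the session benefited from smart scans:
    select n.name, s.value
    from   v$statname n
    join   v$mystat   s on s.statistic# = n.statistic#
    where  n.name in ('cell physical IO bytes eligible for predicate offload',
                      'cell physical IO interconnect bytes returned by smart scan');
    ```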
    Tanel Poder
    Blog - http://blog.tanelpoder.com
    Book - http://apress.com/book/view/9781430233923

  • How can an application know that it runs on Exadata?

    Hi
    We have the same source code running on 10gR2 and on Exadata. So, to use new Exadata features (like EHCC), our applications have to know automatically (without setting an environment variable) which system they are running on.
    Is there an Oracle view or something else where we can find this information?
    Thanks

    Depending on what your requirements are, it may be best to add options like compression into the tablespace defaults. This way an application need not know the underlying platform and it still can leverage the feature.
    e.g.
    create tablespace ts_mydata datafile size 100m nologging default compress for query high;
    Regards,
    Greg Rahn
    http://structureddata.org
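    If the application really must detect the platform at runtime, one common check (a sketch - verify on your versions) is whether any Exadata storage cells are visible to the instance:

    ```sql
    -- v$cell lists the storage cells the instance can see;
    -- it returns zero rows on a non-Exadata system.
    select case when count(*) > 0 then 'EXADATA' else 'NON-EXADATA' end as platform
    from   v$cell;
    ```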

  • Connectivity issue in RAC(exadata)

    Team,
    oracle version : 11gr2
    2 node rac
    exadata x2-2
    The application team is complaining about a connectivity issue; they say they get a connection only after 5 to 8 attempts from the application.
    The application log file errors are below:
    SQL Error: 17002, SQLState: 08006
    [o.h.u.JDBCExceptionReporter:234] : IO Error: The Network Adapter could not establish the connection
    As a DBA, what are the things we need to check? Can anyone please guide me?
    Thanks
    Prakash GR

    Hi,
    Thanks for the information. I have one question: which IP on the database server is the first to receive the application's connection request?
    Is it the SCAN IP, the VIP, the host IP, or the local listener IP? And should all DB server IPs (SCAN IP, VIP, host IP and local listener IP) be pingable from the application server?
    The application users also say:
    "We are sometimes able to connect, but when we try 5 or 6 times we are hardly able to connect once."
    As I am new to RAC, please help me understand.
    Thanks
    PGR
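    For reference, in 11gR2 the client normally connects to the SCAN name first; the SCAN listener then redirects the session to the VIP of a local listener on the chosen node, so both the SCAN and the node VIPs must be resolvable and reachable from the application server. Intermittent "works on some attempts" behavior often means one of the round-robin SCAN IPs is unreachable. A typical SCAN-based connect descriptor looks like this (host and service names below are placeholders):

    ```
    (DESCRIPTION=
      (ADDRESS=(PROTOCOL=TCP)(HOST=myscan.example.com)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=MYSERVICE)))
    ```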

  • Oracle Client 32 bit installation on Exadata Machine

    Hi,
    We are starting our migration to exadata next month.
    One of the issues we have is related to our Informatica ETL tool. Our application is licensed for 32-bit.
    The database repository of this tool is currently running on Red Hat Linux 5.5 64-bit with an 11.2.0.3 RDBMS.
    We had to install the 32-bit Oracle client software in order to allow the application to connect to the database.
    Is it possible to install the 32-bit Oracle client on Exadata? If not, what would you suggest?
    Best Regards

    I can't speak for Informatica, but it should be able to connect to the database over the network, so the database server OS wouldn't matter in that case. That is, if Informatica runs on a different machine with 32-bit Linux, it can connect over the network to the Exadata database server node. If Informatica must run on the database server directly, you'd have to ask them how they support 64-bit Linux (or you may have to modify or add to your license).

  • Exadata performance

    In our exachk results, there is one item for shared_servers.
    Our current production environment has shared_servers=1.
    Now I got those from exachk:
    Benefit / Impact:
    As an Oracle kernel design decision, shared servers are intended to perform quick transactions and therefore do not issue serial (non PQ) direct reads. Consequently, shared servers do not perform serial (non PQ) Exadata smart scans.
    The impact of verifying that shared servers are not doing serial full table scans is minimal. Modifying the shared server environment to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.
    Risk:
    Shared servers doing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.
    Action / Repair:
    To verify shared servers are not in use, execute the following SQL query as the "oracle" userid:
    SQL>  select NAME,value from v$parameter where name='shared_servers';
    The expected output is:
    NAME            VALUE
    shared_servers  0
    If the output is not "0", use the following command as the "oracle" userid with properly defined environment variables and check the output for "SHARED" configurations:
    $ORACLE_HOME/bin/lsnrctl service
    If shared servers are confirmed to be present, check for serial full table scans performed by them. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be modified to favor the normal Oracle foreground processes so that serial direct reads and Exadata smart scans can be used.
    lsnrctl service on our current production environment shows all handlers as 'LOCAL SERVER'.
    How should I proceed here?
    Thanks again in advance.

    Thank you all for your help.
    Here is an output of lsnrctl service:
    $ORACLE_HOME/bin/lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 14-JUL-2014 14:15:24
    Copyright (c) 1991, 2013, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:1420 refused:0 state:ready
             LOCAL SERVER
    Service "PREME" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREMEXDB" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "D000" established:0 refused:0 current:0 max:1022 state:ready
             DISPATCHER <machine: prodremedy, pid: 16823>
             (ADDRESS=(PROTOCOL=tcp)(HOST=prodremedy)(PORT=61323))
    Service "PREME_ALL_USERS" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_TXT_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CORP_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_DISCO_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_EAST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM_WR" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_RPT" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_WEST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    The command completed successfully
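    Based on that output, the only dispatcher ("D000") handler belongs to the XDB service; every application service uses a DEDICATED / LOCAL SERVER handler, so your application connections are not going through shared servers. If nothing in your environment needs shared servers, one hedged fix for the exachk finding is simply to disable them (verify with your application teams first):

    ```sql
    -- Disable shared servers cluster-wide; sid='*' applies to all RAC instances.
    alter system set shared_servers = 0 scope=both sid='*';
    ```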

  • How to do performance tuning in EXadata X4 environment?

    Hi, I am pretty new to Exadata X4, and we had a database (mixed OLTP/load) created and data loaded.
    Now the application team is testing against this database on Exadata.
    However, they claimed the test results were slower than in the current production environment, and they sent the explain plans, etc.
    I would like advice from the pros here: what Exadata-specific tuning techniques can I use to find out why this is happening?
    Thanks a bunch.
    db version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata specific features and best practice as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) to help configuring Exadata according to the Oracle documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on Exadata than on the non-Exadata environment. You need to determine what specifically is running slower on the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architecture. How is the application connected to the database in the non-Exadata vs Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real-Time SQL Monitor tool - access it through EM GC/CC from the performance page or via the DBMS_SQLTUNE package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, see where the bottlenecks are, and understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
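    As a sketch of the DBMS_SQLTUNE approach just mentioned (the sql_id below is a placeholder):

    ```sql
    -- Generate a Real-Time SQL Monitor report for one statement.
    select dbms_sqltune.report_sql_monitor(
             sql_id       => 'abcd1234xyz',
             type         => 'TEXT',
             report_level => 'ALL') as report
    from dual;
    ```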
    The SGA and PGA need to be sized appropriately... depending on your environment and workload, and how these were sized previously, your SGA may be sized too big. Often the SGA sizes do not usually need to be as big on Exadata - this is especially true on DW type workloads. DW workload should rarely need an SGA sized over 16GB. Alternatively, PGA sizes may need to be increased. But this all depends on evaluating your environment. Use the AWR to understand what's going on... however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not specific to Exadata and can be misleading as to the recommended size. Too large of SGA will discourage direct path reads and thus, smart scans - and depending on the statement and the data being returned it may be better to smart scan than a mix of data being returned from the buffer_cache and disk.
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata - but many indexes may no longer be needed and may need to be removed. For the most part you only need PK/FK indexes and true "OLTP" based indexes on Exadata. Others may be slowing you down, because they avoid taking advantage of the Exadata storage offloading features.
    You also may want to evaluate and determine whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes: 401749.1, 361323.1 and 1392497.1) and write-back caching (see MOS note: 1500257.1).
    I would also recommend installing the Exadata plugins into your EM CC/GC environment. These can help drill into the Exadata storage cells and see how things are performing at that layer. You can also look up and understand the cellcli interface to do this from command line - but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. You also probably want to at least enable and set an IORM objective - matching your workload - even with just one database on the Exadata.
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata performance for OLTP operations just by "out powering" it from a hardware perspective. For DW operations Exadata should outperform any "relatively" hardware comparable non-Exadata system. The Exadata storage offloading features should allow you to run these type of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey

  • E-Business Suite R12.1 on Exadata with Database Rel 12c - Upgrade and Migrate, or Migrate and Upgrade

    Given:
    E-Business Suite R12.1 running against non-RAC database release 11gR2
    Aspiration:
    E-Business Suite R12.1 running against database release 12c RAC on Exadata
    In the context of Oracle best practices, what would be a preferred approach for the database tier to meet the above aspiration, i.e. (a) Upgrade database on source and then migrate to Exadata OR (b) Migrate database to Exadata and then upgrade ?
    Appreciate thoughts from community members/Oracle support.
    Thanks,
    Rakesh

    Rakesh,
    It is necessary to refine Srini's statement:
    EBS does not need to be "certified" on Exadata.  See:
    Running E-Business Suite on Exadata V2
    https://blogs.oracle.com/stevenChan/entry/e-business_suite_exadata_v2
    E-Business Suite 11i, 12.0, and 12.1 are certified with Database 12.1.0.1.
    E-Business Suite 12.2 will be certified with Database 12.1.0.1 soon.
    Regards,
    Steven Chan
    Applications Technology Group Development

  • From Oracle database to EXADATA

    Hi, I'm researching the impact that moving from Oracle to Exadata would have on my application (ETL).
    How and where can I find information about this?
    Thanks in advance

    As the famous quote says: 'Exadata is still Oracle!'
    In other words, your application will most likely work the same as before - but 10 times faster :)
    Seriously, you gave us too few details to say anything specific.
    One point: ETL usually has flat files involved. You would put them into DBFS on Exadata.
    If you haven't heard about it yet: It looks like an ordinary filesystem for your ETL process but it is spread across (almost) all drives of the storage servers.
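    For reference, a DBFS filesystem is created with a script shipped under $ORACLE_HOME/rdbms/admin, run as the DBFS owner schema (the tablespace and filesystem names below are placeholders):

    ```sql
    -- Creates a DBFS filesystem named my_etl_fs in tablespace ts_dbfs;
    -- it can then be mounted on the OS with the dbfs_client utility.
    @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql ts_dbfs my_etl_fs
    ```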
    Kind regards
    Uwe Hesse
    "Don't believe it, test it!"
    http://uhesse.com

  • Oracle Database migration to Exadata

    Dear Folks,
    I have a requirement to migrate our existing Oracle Database to Exadata Machine. Below is the source & destination details:
    Source:
    Oracle Database 11.1.0.6 & Oracle Database 11.2.0.3
    Non-Exadata Server
    Linux Environment
    DB Size: 12TB
    Destination:
    Oracle Exadata 12.1
    Oracle Database 12.1
    Linux Environment
    System downtime would be available for 24-30 hours.
    Kindly clarify below:
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    2. Any upgrade activity after migration?
    3. Which migration method is best suited in our case?
    4. Things to be noted before migration activity?
    Thanks for your valuable inputs.
    Regards
    Saurabh

    Saurabh,
    1. Do we need to upgrade the source database (either 11.1 or 11.2) to 12c before migration?
    This would help if you wanted to drop the database in place, as it would allow a standby database to be used (which reduces downtime), or a backup-and-recovery approach to move the database as-is onto the Exadata. However, it does not give you the chance to put in place things that could help you on Exadata, such as additional or adjusted partitioning, Advanced Compression and HCC compression.
    2. Any upgrade activity after migration?
    If you upgrade the current environment first, then there would be no additional work. However, if you do not, you will need to explore a few options, depending on your requirements and goals for your Exadata.
    3. Which migration method is best suited in our case?
    I would suggest some conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, to explore your migration options and what would be best for your environment; that depends on a lot of variables that are hard to cover completely in a forum.  At a high level, when moving to Exadata I typically recommend setting up the database to use Exadata's features for best results.  The Exadata migrations I have done so far were done with GoldenGate: we examine the partitioning of tables, partition the ones that make sense, and implement Advanced Compression and HCC compression where they make sense.  This gives us an environment that fits the Exadata, rather than dropping an existing database in place (though that also works very well).  GoldenGate eliminates the migration issues caused by the database version difference and other potential migration issues, offers the most flexibility, and keeps your downtime way down, while giving you the opportunity to ensure a smooth upgrade/implementation through real-workload testing.  Be aware, though, that GoldenGate has a licensing cost, so it may not work for you.
    4. Things to be noted before migration activity?
    Again, I would suggest conversations with Oracle and/or a trusted firm that has done a few Exadata implementations, for the same reasons as above.  In short, keep in mind that Exadata is a platform with advantages no other platform can offer; while a drop-in-place migration works and does bring improvements, it is nothing compared to what you can get if you plan well and implement the features Exadata has to offer.  Real Application Testing (Database Replay) plus Flashback Database lets you implement the features, test them with a real workload, and tune well before production day, so you can be nearly 100% confident you have a well-tuned system on the Exadata before going live.  GoldenGate keeps the database in sync while you run many workload replays on the Exadata without losing that sync, giving you the time and ability to test different partitioning and compression options.  Very nice flexibility.
    Hope this helps...
    Mike Messina

  • CPU & Memory allocation in Exadata Database

    Hi All,
    We are using an Exadata 1/8th rack, which has 256 GB of memory and 12 cores on each database server.
    Our requirement is to create 15 databases, of which 10 are standalone and 5 are clustered databases.
    Now we are concerned about how memory and CPU should be allocated across all these databases. I know we should not allocate more than 75% of the total memory and CPU on the server, but I don't know how much SGA and PGA to allocate to each DB, or what the mathematics is here.
    Kindly let us know if anyone has done the same kind of configuration before, or any best practices to resolve such issues.
    Also, please suggest whether instance caging will be helpful here, and please share details if anyone has experience with instance caging on Exadata.
    Your assistance is highly appreciated.
    Thanks,
    Vineet

    Hi Vineet,
    Some quick math:
    With 10 standalone and 5 2-node clustered databases, that would average to 10 instances per database server.  Again going by straight averages, using 75% of total memory and CPU, that would leave you with an average of 19GB of memory and 0.9 CPU cores per instance.  It's a lot of instances but definitely not impossible.
    I'd suggest looking at where the databases are sitting now:  assuming they're Oracle databases, how much SGA and PGA do they have allocated now?  What are the advisers telling you about optimal sizes?
    With this many instances, instance caging doesn't really make sense:  you'd only have a single CPU core for each instance.  Again here, looking at the existing workload is a good place to start.  How many CPU cores are they using (typical/peak) already?  Would you expect to use more than 75% of CPU capacity during peak usage periods?  If you are, one option to look from the CPU-management side would be the MAX_UTILIZATION_LIMIT, though the main issue is that, as for instance caging, it only refers to a single instance.
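    For reference, instance caging itself (asked about in the original post) only takes two settings per instance; a minimal sketch with placeholder values:

    ```sql
    -- Caging requires an active Resource Manager plan plus an explicit cpu_count.
    alter system set resource_manager_plan = 'default_plan' scope=both;
    alter system set cpu_count = 2 scope=both;
    ```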
    And don't ignore I/O capacity.  Even though X4s have a lot of flash, it doesn't hurt to get statistics for read and write volumes on your databases, and see how they compare to Exadata specs.
    And lastly, there's no true substitute for real testing.  If you're using 12c, RAT consolidated replay is very nice, but perhaps you can re-purpose application-level testing infrastructure to test on Exadata as well?
    There are a lot of good Exadata and general Oracle consolidation presentations from various conferences floating around.  A Google search will show a few, or you may even want to sign up for virtual attendance for one of the major Oracle conferences to get access to the slides and papers.
    HTH!
    Marc

  • Exadata and Oracle Berkeley DB

    Hello
    What are the key differences between Berkeley DB and Exadata from an application and commercial-usage standpoint?
    Other than the well-known facts that Exadata is the child of the Oracle-Sun marriage and a competing product against Teradata, while Berkeley DB is an open-source DB and Exadata is far more expensive:
    From an application and commercial perspective, which is the better option, Exadata or Berkeley DB?
    If any comparative analysis is available out there, it would be helpful. I tried googling without any luck.
    Thank you for your time in reading this post.
    -R

    The two are near opposites. Berkeley DB is the smallest-footprint database, designed especially for mobile devices and embedded use, whereas Exadata is just the opposite: a high-end database machine, as you said. We cannot compare the two. For mobile-device applications and small web applications you can use Berkeley DB.

  • Exadata and OLTP

    hello experts,
    in our environment, OLTP databases (10g, 11g) run in single-instance mode, and we are planning a feasibility analysis on moving to Exadata.
    1) As per Exadata-related articles, Exadata can provide better OLTP performance with the flash cache.
    If we can allocate enough SGA for the application workload, what is the point of moving to Exadata?
    2) Are there any other performance benefits for OLTP databases?
    3) Since Exadata is pre-configured RAC, will it be a problem for non-RAC databases which have not been tested on RAC?
    In general, how can we conduct an effective feasibility analysis for moving non-RAC OLTP databases to Exadata?
    thanks,
    charles

    Hi,
    1. Flash cache is one of the advantages of Exadata, speeding up your SQL statement processing. Bear in mind that it works at the storage level and should not be compared directly with a non-Exadata machine.
    2. As far as I know, besides faster query elapsed times, we can also benefit from compression (Hybrid Columnar Compression, which is Exadata-specific);
    and since the storage is located inside the Exadata machine, I/O is also reduced as a factor in your database performance.
    3. You can have a single-node database on Exadata; just set the connection to use the physical IP directly, instead of the SCAN IP (11g) used for RAC.
    I think the best approach is to project the improvement and cost savings if you migrate to Exadata: assess the processing improvement you will gain, the storage used, and the license cost. Usually, most shops use Exadata to consolidate their different physical DB boxes.
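    To illustrate point 2: EHCC is declared per table or partition; a minimal sketch, with placeholder table and column names (EHCC requires Exadata storage):

    ```sql
    -- Create a columnar-compressed copy of cold data.
    create table sales_archive
      compress for archive high
      as select * from sales
         where sale_date < date '2013-01-01';
    ```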
    br,
    mrak

  • DB and EBS cloning methods in exadata server

    Hi
    We are planning to migrate to Exadata servers (Linux 5.3 OS). The DB will be 11gR2 and EBS 12.1.2.
    The infrastructure will be:
    The 1st Exadata server will hold the PRD DB; the application will be on a Sun Linux server.
    The 2nd Exadata server will hold DR (database only) and the DEV/UAT/TST instances (database); for these instances the application will be on a Sun Linux server.
    This whole structure will use RAC and ASM.
    As we plan to have the TST/DEV/UAT databases on the DR server, and we use RMAN for DB backup, I would like to know the cloning methods (DB and EBS; standard and custom) for existing environments. We want to clone our instances daily from the running DR site.
    Kindly suggest an action plan.
    Thanks
    Krishna

    Hi;
    Please check below links
    https://blogs.oracle.com/stevenChan/entry/ebs_exadata_whitepaper
    https://blogs.oracle.com/stevenChan/entry/e-business_suite_exadata_v2
    http://www.oracle.com/technetwork/database/features/availability/maa-ebs-exadata-197298.pdf
    If you still have doubts, please update here.
    Regard
    Helios
