Oracle Database Architecture

Hi,
Can you please provide me the Metalink (MOS) document ID for the Oracle Database 11gR2 architecture?
Can you please provide me the Metalink (MOS) document ID for the Oracle Database 12c architecture?

I do not believe you will find MOS Docs that describe the architecture. Have you checked the docs? What specific information are you looking for?
Introduction to Oracle Database
Introduction to Oracle Database
HTH
Srini

Similar Messages

  • Oracle Database Architecture Understanding

    Hello everyone,
    I am a newbie in the Oracle world, and I want to learn Oracle. A few months ago I started to learn the Oracle Database architecture and read a book. However, I forget much of that book after a few weeks, and when I read it again I forget it again. I cannot form a tangible picture of the Oracle DB in my mind, and this really degrades my performance at work. A simple example: the book says:
    Data Files: Every Oracle database has one or more physical data files, which contain all the database data. (physical storage structure)
    Data Blocks: At the finest level of granularity, Oracle Database data is stored in data blocks. (logical storage structure)
    Here my mind gets mixed up: both store data, so what is the difference? I want to go into the details of Oracle in order to be successful. Please help me and suggest something to do.
    Thank You

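    As an aside on the data file / data block question above, a small query sketch can make the physical vs. logical distinction concrete. The dictionary views and parameter used here are standard Oracle objects; the comments describe the general idea rather than this poster's database:

    -- physical level: the operating-system files that hold all of the database's data
    select file_name, bytes
      from dba_data_files;

    -- logical level: the block is the smallest unit Oracle reads and writes
    -- inside those files
    select value as db_block_size
      from v$parameter
     where name = 'db_block_size';

    Every data file is simply a long sequence of these blocks; blocks roll up into extents, and extents into segments (tables, indexes, and so on), so the two quotes describe the same data at different levels of granularity.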

  • Oracle database architecture documentation

    Hi,
    Can anyone send me Oracle 9i/10g database architecture documentation?

    What do you mean by Architecture documentation? Concepts about Oracle architecture?
    Did you go through the documentation?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm
    Please post such questions in Database-General forum. This forum is for reporting issues with Oracle documentation.

  • What are the BEST books for Oracle database architect/designer?

    Which specific books would you recommend for an OLTP database developer (starting from scratch: data source analysis, logical and physical data modeling, indexes, tuning, maintenance)? It doesn't have to be a book particularly for Oracle, just one suitable for it.
    I don't mean books for DBAs or general Oracle handbooks, and nothing OLAP-oriented.
    Thanks!

    For learning how to use the Oracle database effectively, I would say
    both of Tom Kyte's books:
    Effective Oracle by Design
    & Expert Oracle Database Architecture
    and Jonathan Lewis's
    Practical Oracle 8i
    They tell you all the stuff: what and how to do something, and most importantly what and how not to do it.
    And their writing style is just awesome :)
    Amardeep Sidhu

  • How to implement optimistic locking in PL/SQL for an Oracle database

    I have searched the net to find examples of how to do optimistic locking in PL/SQL; all the resources I found were old, for example "Optimistic Locking with Concurrence in Oracle"
    by Graham Thornton, Snr. Database Architect / Oracle DBA, from 2001. At the end he says that the approach will not work well if the update being made is a relative update. His approach is:
    1. add a timestamp column to an existing table;
    2. add a pre-insert trigger on the table to set the timestamp;
    3. add a pre-update trigger to compare the old timestamp with the new one.
    So where can I find updated resources on this issue?
    Edited by: 812643 on 17-Nov-2010 12:39

    Totally plagiarized from Expert Oracle Database Architecture 9i, 10g, 11g by Tom Kyte, pg 201:
    One popular implementation of optimistic locking is to keep the old and new values in the application and, upon updating the data, use an update like this:
    update table
       set column1 = :new_column1, column2 = :new_column2, ...
     where primary_key = :primary_key
       and decode( column1, :old_column1, 1 ) = 1
       and decode( column2, :old_column2, 1 ) = 1
    Another implementation:
    Optimistic locking using a version column
    Add a single column to each database table you wish to protect from lost updates. This column is generally either a number or a date/timestamp column.
    It is typically maintained via a row trigger on the table, which is responsible for incrementing the number column or updating the date/timestamp column
    every time a row is modified.
    Another one, on page 204:
    Optimistic locking using a checksum
    Similar to the version column implementation, but it uses the base data itself to compute a virtual version column.
    The ones suggested were owa_opt_lock.checksum, dbms_obfuscation_toolkit.md5, dbms_crypto.hash, and ora_hash.
    Edited by: pollywog on Nov 17, 2010 3:48 PM
    It might be a good book for you to look into getting; it has a whole chapter on locking and latching.
    Edited by: pollywog on Nov 17, 2010 3:54 PM
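    For illustration, here is a minimal sketch of the version-column approach described above. The table name (dept) and the column name (object_version) are made up for this example; they are not from the thread or the book:

    -- hypothetical table protected against lost updates by a version column
    create table dept (
      deptno         number primary key,
      dname          varchar2(30),
      object_version number default 1 not null
    );

    -- row trigger keeps the version current on every update
    create or replace trigger dept_version_trg
    before update on dept
    for each row
    begin
      :new.object_version := :old.object_version + 1;
    end;
    /

    -- the application re-supplies the version it originally read;
    -- zero rows updated means another session changed (or deleted) the row first
    update dept
       set dname = :new_dname
     where deptno = :deptno
       and object_version = :old_object_version;

    After the update, the application checks sql%rowcount (or the row count its driver returns): 1 means the change went through, 0 means it must re-query the row and let the user retry.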

  • Oracle database 10g

    Hi
    I have just started learning Oracle Database 10g, so I would like you to recommend some books about Oracle Database 10g. Can anyone help me, please?
    Bye.

    You SHOULD start learning with the book "Expert Oracle Database Architecture" by Thomas Kyte. That book teaches you the basics. Only after you finish it, read
    Concepts http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm
    Database Administrator's Guide http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/toc.htm

  • Three tier architecture using oracle dev suite 10g & oracle database 9i

    hi ,
    I am trying to build software which will manage the database of a hospital through the usual Forms design.
    The tools I am using for this are
    (1) Oracle Database Server 9i, (2) Oracle Developer Suite 10g, (3) Windows XP Professional Service Pack 2.
    I have designed the form modules in Developer Suite, created the tables in the database, and connected those tables to the form modules using DML statements; now data can be inserted, updated and deleted through the forms. I have also deployed the forms using "run forms through the web", so other computers connected to the main computer through the LAN can also access the software using the web port address and the name of the form to be used. These computers do not have Oracle Developer Suite or the Oracle database installed, but they can access the software through the browser.
    In this scenario my question is: is this a three-tier architecture, with the Oracle database as the first tier, Oracle Developer Suite as the middle tier where I am putting all the business logic (with the OC4J instance used to connect the database and the Dev Suite), and the browser as the third tier for user interaction?
    Or is this a two-tier architecture? If it is two-tier, please let me know how I can implement a three-tier architecture using Oracle Developer Suite 10g and Oracle Database Server 9i.
    Thanks a lot for taking the time to read this.

    You need Oracle Application Server to deploy the forms when you go live.
    What you are currently using is the OC4J instance that came with Developer Suite. It is intended only for development purposes and does not have the capacity to handle higher loads.
    Three-tier architecture:
    1. Thin client -> browser
    2. Middle tier -> Oracle Application Server / OC4J (in your case)
    3. Database tier -> Oracle Database
    Rajesh

  • Oracle database Internal architecture

    Hi
    Can anyone explain the Oracle database internal architecture to me?
    thanks & regards

    user9098698 wrote:
    Hi
    Can anyone explain the Oracle database internal architecture to me?
    Yes, there are people who can explain this to you.
    How much time do you want to spend? I've been learning the Oracle internal architecture for over 20 years and there are still new things to learn. (Well ... it also depends on what you mean by 'internal architecture'.)
    If you want the basics - the Oracle Documentation set at http://tahiti.oracle.com has a Concepts manual that does a decent job of describing the basic architecture. And the Windows Platform Guide, the Unix Administrator's Reference give a fair bit of additional information. As does the Oracle Networking reference.
    And there are courses. Oracle University courses provide a decent overview of the architecture. And Jonathan Lewis, Richard Foote and others give excellent courses and seminars on specific aspects of the architecture.
    And there are books. Search for books by Tom Kyte and Robert Stackowiak.
    And internet search.
    And http://orainfo.com/architecture/architecture.htm
    And ...

  • Connection pooling and auditing on an oracle database

    Integration of a WebLogic application with an Oracle backend: connection pooling and auditing, two conflicting requirements?
    Problem statement:
    We are in the process of maintaining a legacy client/server application where the client is written in PowerBuilder and the backend is an Oracle database. Almost all business logic is implemented in stored procedures on the database. When working in client/server mode, one PowerBuilder user has a one-to-one relation with a connection (session) on the Oracle database.
    It is a requirement that the database administrator must see the real user connected to the database and NOT some kind of superuser; therefore in the PowerBuilder app each user connects to the database with his own username. (Each user is configured on the database via a separate PowerBuilder security app.) For the PowerBuilder app all is fine, and this app can maintain conversational state (setting and reading of global variables in Oracle packages).
    Management is pushing for a web-based application where we will be using the BEA WebLogic application server (J2EE based). We have built a business app which is web-based and accesses the same Oracle backend as the PowerBuilder app does. The first version of this web-based app uses a custom-built connector (based on the JCA standard and derived from a template provided by the WebLogic Integration installation). This custom-built connector is essentially a combination of a custom realm, in WebLogic terms, and a degraded connection pool, where each web session (browser) has a one-to-one relation with the backend database.
    The reason this custom connector combines the security functionality and the pooling functionality is that each user must be authenticated against the Oracle database (security requirement) and NOT against an LDAP server, and we are using a stateful backend (Oracle packages), which would make it difficult to reuse connections.
    A problem that surfaced while doing heavy load testing with the custom connector is that sometimes connections are closed and new ones made in the midst of a transaction. If you imagine a scenario where a session bean creates a business entity, and the session bean calls one entity bean for the header and one entity bean for the detail, then the header and detail must be created in the same transaction AND with the same connection (there is a parent-child relationship between header and detail enforced on the backend database via primary and foreign keys). We have not yet found out why WebLogic is closing the connection!
    A second problem that we are experiencing with the custom connector is the use of CMP (container-managed persistence) within entity beans. The J2EE developers state that the use of CMP decreases development time and thus also maintenance costs. We have not yet found a way to integrate a custom connector with the CMP persistence scheme!
    In order to solve our load-testing and CMP persistence problems I was asked to come up with a solution which should not use a custom connector, but standard connection pools from WebLogic. To resolve the authentication problem in WebLogic I could make a custom realm which connects to the backend database with the username and password, and if the connection is OK, I could consider this user as authenticated in WebLogic. That still leaves me with the problem of auditing and pooling.
    If I were to use a standard connection pool, then all transactions made in the Oracle database would be done by a pool user or superuser, a solution which will be rejected by our local security officer, because you cannot see which real user made a transaction in the database.
    I could still use the connection pool and, in the application, advise the application developers to set an Oracle package variable with the real user; then, on arrival of the request in the database, the logic could use this package variable to set the transaction user. There are still problems with this approach:
    - The administrator of the database still cannot see who is connected; he will only see the superuser connection.
    - This scheme cannot be used when you want to use CMP persistence, since it is WebLogic that generates the code to access the database.
    I thought I had a solution when Oracle provided us with a connection pool known as OracleOCIConnectionPool, where there is a connection made by a superuser but sessions are multiplexed over this physical pipe with the real user. I cannot seem to properly integrate this OCI connection pool into WebLogic. When using this pool, and we come into a bean (session or entity bean), WebLogic wraps the pool with its own internal DataSource and gives me back a connection for the superuser, but not one for the real user, thus setting me with my back to the wall again.
    I would appreciate it if anyone who has experienced the same problem could share a possible solution with us, in order to satisfy all requirements (security, auditing, CMP).
    Many Thanks
    Blyau Gino
    [email protected]

    Hi Blyau,
    As Joe has already provided some technical advice,
    I'll try to say something at the engineering process level.
    While migrating an application from one technology to another, like client/server to n-tier in your case, customers and stakeholders want to push into the new system as many old requirements as possible. This approach is AKA "we must have ALL of the features of the old system". Mostly it happens because they don't know what they want. Add a little understanding of the abilities of the new technology, and you will get a requirement like the one you have in your hands.
    I think "DBA must see real user" is one of those. For this
    type of requirements it can make sense to try to drop it,
    or to understand its nature and suggest alternatives. In this
    particular case it can be a system that logs user names,
    login and logout times.
    Blind copying of old features into an incompatible new architecture
    may endanger the whole project and can result in its failure.
    Hope this helps.
    Regards,
    Slava Imeshev
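    A side note on the auditing requirement: the package-variable idea the original poster describes is close to what Oracle's built-in client identifier does. The sketch below uses the standard DBMS_SESSION package and the V$SESSION view; the user names POOL_USER and JSMITH are invented for the example, and none of this comes from the thread itself:

    -- called by the application right after it borrows a pooled connection,
    -- passing the real end user's name
    begin
      dbms_session.set_identifier('JSMITH');
    end;
    /

    -- the DBA can now see the real user behind the shared pool session
    select username, client_identifier
      from v$session
     where username = 'POOL_USER';

    -- and audit records / PL/SQL code can read the same value
    select sys_context('userenv', 'client_identifier') from dual;

    This does not make the real user the session owner, but it does let auditing and monitoring distinguish end users over a shared pool, which is the usual compromise between pooling and per-user accountability.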
    "Blyau Gino" <[email protected]> wrote in message
    news:[email protected]...
    >
    Integration of a weblogic application with an oracle backend,
    Connection pooling, and auditing ,2 conflicting requirements ?
    Problem statement :
    We are in the process of maintaining a legacy client server applicationwhere
    the client is
    written in PowerBuilder and the backend is using an Oracle database.
    Almost all business logic is implemented in stored procedures on thedatabase.
    When working in client/server mode ,1 PowerBuilder User has a one-to-onerelation
    with
    a connection(session) on the oracle database.
    It is a requirement that the database administrator must see the real userconnected
    to the database
    and NOT some kind of superuser, therefore in the PowerBuilder app eachuser connects
    to the database
    with his own username.(Each user is configured on the database via aseperate
    powerbuilder security app).
    For the PowerBuilder app all is fine and this app can maintainconversional state(setting
    and
    reading of global variables in oracle packages).
    The management is pushing for web-based application where we will be usingbea
    weblogic appserver(J2EE based).
    We have build an business app which is web-based and accessing the sameoracle
    backend app as
    the PowerBuilder app is doing.
    The first version of this web-based app is using a custom buildconnector(based
    on JCA standard and
    derived from a template provided by the weblogic integrationinstallation).
    This custom build connector is essentially a combination of a custom realmin
    weblogic terms
    and a degraded connection pool , where each web session(browser) has aone-to-one
    relation
    with the back end database.
    The reason that this custom connector is combining the securityfunctionality
    and the pooling
    functionality , is because each user must be authenticated against theoracle
    database(security requirement)
    and NOT against a LDAP server, and we are using a statefull backend(oraclepackages)
    which would make it
    difficult to reuse connections.
    A problem that surfaced while doing heavy loadtesting with the customconnector,
    >
    is that sometimes connections are closed and new ones made in the midst ofa transaction.
    If you imagine a scenario where a session bean creates a business entity,and
    the session bean
    calls 1 entity bean for the header and 1 entity bean for the detail, thenthe
    header and detail
    must be created in the same transaction AND with the same connection(thereis
    a parent-child relationship
    between header and detail enforced on the back end database via Primaryand Foreing
    Keys).
    We have not yet found why weblogic is closing the connection!
    A second problem that we are experincing with the custom connector, is theuse
    of CMP(container managed persistence)
    within entity beans.
    The J2EE developers state that the use of CMP decreases the develomenttime and
    thus also maintenance costs.
    We have not yet found a way to integrate a custom connector with the CMPpersistence
    scheme !
    In order to solve our loadtesting and CMP persistence problems i was askedto
    come up with a solution
    which should not use a custom connector,but use standard connection poolsfrom
    weblogic.
    To resolve the authentication problem on weblogic i could make a customrealm
    which connects to the
    backend database with the username and password, and if the connection isok ,
    i could consider this
    user as authenticated in weblogic.
    That still leaves me with the problem of auditing and pooling.
    If i were to use a standard connection pool,then all transaction made inthe oracle
    database
    would be done by a pool user or super user, a solution which will berejected
    by our local security officer,
    because you can not see which real user made a transaction in thedatabase.
    I could still use the connection pool and in the application , advise theapplication
    developers
    to set an oracle package variable with the real user, then on arrival ofthe request
    in the database,
    the logic could use this package variable to set the transaction user.
    There are still problems with this approach :
    - The administrator of the database can still not see who is connected ,he will
    only see the superuser connection.
    - This scheme can not be used when you want to use CMP persistence , sinceit
    is weblogic who will generate the code
    to access the database.
    I thought i had a solution when oracle provided us with a connection poolknown
    as OracleOCIConnectionPool
    where there is a connection made by a superuser, but where sessions aremultiplexed
    over this physical pipe with the real user.
    I can not seem to properly integrate this OCI connectionpool intoweblogic.
    When using this pool , and we are coming into a bean (session or entitybean)
    weblogic is wrapping
    this pool with it's own internal Datasource and giving me back aconnection of
    the superuser, but not one for the real user,
    thus setting me with my back to the wall again.
    I would appreciate if anyone had experienced the same problem to share apossible
    solution with us
    in order to satisfy all requirements(security,auditing,CMP).
    Many Thanks
    Blyau Gino
    [email protected]

  • Oracle database (10.2.0.4) and HTTP server / HTML DB conflict

    Hi there,
    I installed a fresh 10.2.0.1 (patched with 10.2.0.4) Oracle database on Oracle Enterprise Linux 5.7, and the HTTP server + HTML DB products from the companion CD. Both are located in different home directories:
    ODB:  $ORACLE_BASE/product/10.2.0/db_1
    APEX: $ORACLE_BASE/product/10.2.0/apex
    I just discovered that the log files ($APEX/opmn/logs/) have filled up my entire disk this weekend. After some googling / digging, I've read that there is a conflict between the ONS services that run for both products on the same port. Opening the configuration files of both products:
    $ODB/opmn/conf/ons.config
    $APEX/opmn/conf/opmn.xml (by the way, ons.config is empty)
    I indeed saw that they both use port 6113 as the local port and 6200 as the remote port. The workaround I've found in various places is to either change the port number, or unsubscribe ONS for the database listener (and sometimes both).
    Question 1: What is the best solution? What are the consequences for the database listener in case of unsubscription?
    I also saw that opmn.xml in the APEX configuration makes use of the $ORACLE_HOME environment variable, which I define in my /etc/profile as $ORACLE_BASE/product/10.2.0/db_1. So I guess the opmn of APEX is looking in the wrong place... Yet if I change ORACLE_HOME in my /etc/profile, I won't be able to run dbstart upon startup, as it is run using ORACLE_HOME in an init.d script...
    Question 2: Can someone clarify the exact relationship between the Oracle database and the HTTP server? How should I deal with the ORACLE_HOME environment variable: which component uses it, and when?
    Note that I'm completely new to Oracle (and hence APEX).
    Thanks in advance!

    Billy  Verreynne  wrote:
    There should be no conflicts - except for configuration ones.
    Oracle is multi-home capable. Thus multiple Oracle s/w products can be installed into different homes and run side by side.
    The conflict is system resources - like a TCP port. That port cannot be shared by multiple processes. So you need to make sure that each s/w component that needs a TCP listener end-point, has a unique port number allocated for it to use.
    As mentioned, the Oracle Apache server is an Oracle client with respect to Oracle client-server architecture. It simply needs a client OCI driver to connect to the Oracle database instance. So whether you run Oracle Apache on the same server as the database instance, or on another server all together - this makes no difference. The Oracle Apache s/w will be using its home for running. It has no need for anything from the database instance's home directory. Thus there are no conflicts - as long as you correctly keep them separated and not set the wrong home for the wrong component or include the wrong path (to another home's executables).
    OK, so to summarize a bit:
    - I should just be careful when running the startup scripts which launch all services at boot, i.e. change the value of ORACLE_HOME before running each product's startup script (dbstart => ORACLE_HOME = (..)/10.2.0/db_1, opmnctl => ORACLE_HOME = (..)/apex). I'm still quite worried because ORACLE_HOME is heavily used in opmn.xml, but anyway.
    - That's basically it: when each product's services are running, they do not use ORACLE_HOME (and the like) any more, hence no conflict from this point of view.
    Also do not attempt to use IPC connectivity between the mod_plsql Apache module and the database instance. It does not make sense in terms of performance. Shared server should be considered and localhost TCP connectivity can be used.
    Well, as I don't really know IPC yet, hopefully I won't get into trouble. Just to be clear about that, I've got this in my listener.ora:
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = /opt/oracle/app/oracle/product/10.2.0/db_1)
          (PROGRAM = extproc)
        )
      )
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
          (ADDRESS = (PROTOCOL = TCP)(HOST = apex.nwk)(PORT = 1521))
        )
      )
    Does that mean that IPC will be used somehow?
    Thanks again for your help.
    Edited by: lv on 27 févr. 2012 06:42
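    For reference, the usual way out of the ONS port clash described above is to give one of the two homes its own ports. A sketch of what the database home's $ORACLE_HOME/opmn/conf/ons.config might look like after such a change; the port values 6101/6201 are arbitrary examples, and this is an assumption about the fix rather than something taken from the thread:

    # ons.config in the DB home -- ports chosen so they no longer collide
    # with the APEX/HTTP server home, which keeps 6113/6200
    localport=6101
    remoteport=6201
    loglevel=3

    After editing, the ONS/OPMN processes for that home need to be restarted so they pick up the new ports.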

  • Performance Issues on Oracle Database Lite 10.3.0.3

    Hello,
    We have a big problem with one of our customers. He uses Oracle Database Lite in the standalone version. There are about 600 clients running a project with about 20 publication items. The customer works in the logistics field, so the main window in which nearly all clients want to sync and get their tours is from 02:00 to 05:00; that is the time with the most traffic.
    Everything went well until last week. Suddenly the compose cycle takes (at least during those hours) about ten times longer than before. Normally we had about 40-80 seconds; now we average about 300-500 seconds, with maximum values of over 1200 seconds.
    Therefore we have a lot of conflicts and disconnects on the clients. It is sometimes so bad that nearly no client can sync successfully.
    We checked the SELECTs in the publication items; all are very fast and look good in the explain plan, so that shouldn't be the problem.
    The installation is nearly in the default state that Oracle creates when you install Oracle Lite. Are there any standard settings we can change? Has anyone had the same problem already?
    The machine is a Windows Server 2003 with an Intel Xeon CPU E7-4820 and 4 GB of RAM.
    Of course our customer is getting a lot of trouble because of this, and we have to fix it as fast as we can.
    Any recommendations or opinions are very welcome.
    If you need more info about anything, please feel free to ask; I will provide it as fast as possible.
    Thanks in advance
    Holger

    Hey,
    Many thanks for your proposals. The MGP runs every minute, as in the default settings, because our application is time sensitive. The data have to reach the clients very quickly at night, and they only become available a few minutes beforehand. I think those parameters are important in environments where the MGP runs only every few minutes?
    Let me summarize the situation since my last post:
    We have days (nights) where everything goes quite well, mainly Thursday and Friday. But on the other hand, there are days where everything really is a mess.
    The MGP cycles increase, and when you look at Mobile Manager you see 70 clients syncing very, very slowly (about 300-600 seconds) for just a little bit of data. Then we get the disconnects and the messages in the err.log.
    What do you think: is this somehow network related, or is this a logical problem in the Mobile Server itself?
    As posted earlier, there are 500 clients syncing against a standalone Mobile Server.
    We just couldn't figure out why the MGP times sometimes increase so much that these problems occur.
    First we thought that they now sync over wireless (GPRS or EDGE) and that this syncing is so slow that everything goes down, but we aren't sure. Is this possible because of the architecture of the Mobile Server, i.e. that a very slow network can slow down the MGP and cause timeouts and all the other issues?
    How many clients can a standalone Mobile Server normally serve at the same time without performance problems? What do you think about the hardware I described? Is it powerful enough?

  • How to install oracle database 11g express edition on Windows 7

    Hi Guys,
    I am trying to download and install Oracle Database 11g Express Edition on Windows 7 and I can't seem to get it right. I did get the download, but when I try to install it, it keeps saying it appears to be an invalid archive.
    Please help..

    Action - Refer to the logs or contact Oracle
    Finding the problem is the key. "Refer to the logs" is the only option; XE has no Oracle support offerings.
    Which plugin failed, and why it failed, will be important clues. There should be error messages in the log that will be helpful.
    Otherwise, we don't have much help to offer. Fixing a problem means identifying the problem and performing steps to correct it, if a fix is available.
    A MOS lookup on the INS-20802 error has snippets from one installer session; this one is from an x64 install, not x86. To paraphrase the details:
    ... Created a new file <drive>:<OH path>\cfgtoollogs\configToolAllCommands
    SEVERE: java.io.IOException: Access is denied
    So from those symptoms, the user trying to run the installer does not have appropriate rights on that drive and/or folder. The fix for that particular problem is adding the user to the local Administrators group and rerunning the install, after running the deinstall steps to clean up the failed installation, as specified in the XE install guide for Windows.
    http://docs.oracle.com/cd/E17781_01/install.112/e18803/toc.htm
    Also note the System Requirements: it specifies a system architecture of Intel x86, which is not x64. There is no x64 installer for XE on Windows, so it might work, or it might not, if your host is x64.
    So if that is your particular error, verify that your user is in the local admins group. You can use the Local Users and Groups applet (Start/Run/lusrmgr.msc), open up the Administrators group, click the Add button and find your user. If your OS user is a Windows domain user, be sure you have authenticated to the domain.
    Or use the `net ...` listing of the admins group; that should reveal who is actually in the local Administrators group:
    net localgroup administrators
    Administrator
    <domain>\Domain Admins ... # ??? is the host in windows domain ???
    <domain>\Local Admins
    <domain>\<user1>
    <user2>
    <user3>
    echo %USERNAME%
    # if relevant:
    echo %USERDOMAIN%
    As it also states in the Windows install guide, under Permission Required:
    ... must be a member of the Administrators group on Windows to install Oracle Database XE.
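    If the user does turn out to be missing from that group, one way to add it is from an elevated command prompt (a sketch; substitute the real account name for %USERNAME% if you are adding a different user):

    net localgroup administrators %USERNAME% /add

    Then log off and back on so the new group membership takes effect before rerunning the XE installer.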

  • ORA-12154 Connection error from HFM to Oracle Database

    Hi,
    I am trying to configure Hyperion HFM but cannot write to the HFM database.
    The implementation architecture:
    Hyperion 11.1.2.2 (with all the required patches for HFM, FDM, Shared Services, Workspace and Oracle Application Development)
    Server 1:
    Windows Server 2008 x64
    Installed products: Foundation (EPMA, CalcManager), BI, HFM web components and ADM driver
    Configured products: Foundation(EPMA, CalcManager), BI.
    Database Client: 11gR2 x64
    Server 2:
    Windows Server 2008 x64
    Installed products: HFM, FDQM
    Configured Products: FDQM, HFM
    Database Client: 11gR2 x32, 11gR2 x64 (x32 version installed first)
    Server 3:
    Database: Oracle 11.2.0.2
    All the products from server 1 are working fine, and FDQM (server 2) is also working fine, but when I try to do any action related to the HFM database the system fails.
    I have tested the connection in these scenarios:
    1. SQL Developer: successful! I can create tables, views, etc. Double-checking the user privileges, it has all that are required.
    2. tnsping: successful!
    3. HFMApplicationCopy utility: successful, using a UDL file and writing the connection parameters.
    4. EPM System Configurator: the configurator successfully validates the database connection information, but does not create the tables on the database. No errors in the configtool log.
    5. EPM Diagnostic Tool: fails with this error message:
    ------------STARTING VALIDATION SCRIPTS----------
    LOGGING IN HFM....
    CREATING APPLICATION....
    ERROR: Unable to CreateApplicationCAS
    Number (dec) : -2147215936
    Number (hex) : &H800415C0
    Description  : <?xml version="1.0"?>
    +<EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>1356</Line><Ver>11.1.2.2.300.3774</Ver><PSec><Param><server_name></Param></PSec></ESec><ESec><Num>-2147215936</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>936</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxClient.cpp</File><Line>4096</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>+
    Source       : Hyperion.HFMErrorHandler.1
    ERROR: while Application created
    7. HFM Classic application creation: fails with the following error:
    Error*11*<user_name+>*10/19/2012 08:30:52*CHsxServer.cpp*Line 90*<?xml version="1.0"?>+
    +<EStr><Ref>{DC34A1FD-EE02-4BA6-86C6-6AEB8EF5E5A3}</Ref><AppName/><User/><DBUpdate>1</DBUpdate><ESec><Num>-2147467259</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>HfmADOConnection.cpp</File><Line>511</Line><Ver>11.1.2.2.300.3774</Ver><DStr>ORA-12154: TNS:could not resolve the connect identifier specified</DStr></ESec><ESec><Num>-2147215616</Num><Type>1</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxSQLConnectionPool.cpp</File><Line>585</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServerImpl.cpp</File><Line>8792</Line><Ver>11.1.2.2.300.3774</Ver></ESec><ESec><Num>-2147215936</Num><Type>0</Type><DTime>10/19/2012 8:30:52 AM</DTime><Svr><server_name></Svr><File>CHsxServer.cpp</File><Line>90</Line><Ver>11.1.2.2.300.3774</Ver></ESec></EStr>+
    8. EPMA Application deployment: fails with same message.
    Please help me with some insights on this problem, I have tried everything but nothing works.
    Regards
    Edited by: Otein on 19-oct-2012 14:04

    Hi,
    I have solved one of my problems, the one that kept HFM from connecting to the Oracle database.
    I just changed the TNSNAMES.ORA, like this:
    Initial tnsnames.ora
    PRUEBA.WORLD =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (LOAD_BALANACE = ON)
          (FAILOVER = ON)
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
          )
          (CONNECT_DATA =
            (SERVICE_NAME = <service_name>)
          )
        )
      )
    Modified tnsnames.ora
    PRUEBA.WORLD =
      (DESCRIPTION =
        (LOAD_BALANACE = ON)
        (FAILOVER = ON)
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = <server_name>)(PORT = <port>))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = <service_name>)
        )
      )
    I just deleted the line "(DESCRIPTION_LIST =" and its corresponding closing parenthesis. I did this because in the configuration utility log I saw this line:
    TNS parsing: Entry: DESCRIPTION_LIST [[Address: Protocol:(TCP) Host:(<server_name>) Port:(1521) SID:(<service_name>)]]
    So, since the applications were trying to connect using the DESCRIPTION_LIST connect descriptor, the driver could not recognize DESCRIPTION_LIST as a valid one.
    There is a lot going on behind the scenes when you work with Oracle Database as the repository. Maybe there is some other way to address this issue, but it worked for me; I hope it can help you too.

  • Oracle Database: productivity increase

    Hi all,
    in our company we use a three-tier architecture based on Oracle products:
    client (web browser) - application server (Oracle AS 10g) - database server (Oracle Database 10g),
    and we plan to appreciably increase the number of clients. So I need to determine by how much the hardware capacity (CPU speed in MHz and RAM in MB) must be increased to support the new clients on the Oracle AS 10g and Oracle Database 10g side.
    I've found the AS Discoverer Sizing Calculator on oracle.com, but I can't find a similar calculator for Database 10g.
    Could anybody please advise how to calculate hardware sizing for Oracle Database 10g server?
    Thanks a lot

    Again, as with your question about bandwidth, this cannot be answered without knowing the application, the number of users, the amount of data, etc.
    I suggest you build an environment for a POC/Benchmark.
    In this environment you can install your application and tweak the settings (various initialization parameters, etc) to see the impact on your system. Be sure to include enough data as most systems work well with 1 MB of test data but not with 5 GB of test data.
    cu
    Andreas

  • Server Sizing For Oracle Database

    Hi All,
    I need server sizing for the architecture mentioned below.
    This application is basically for a logistics company, which we are planning to host centrally with two servers: one for the application and one for the Oracle database, along with a DR site (other location). There are four locations, and each location will have 20 users who will access this application (20 x 4 = 80 users). We are using an MPLS network with 35 Mbps of bandwidth.
    1. Application server: Windows Server 2008 R2
    2. Database server: Windows Server 2008 R2, Oracle 11gR2
    I need server sizing documents.
    Thanks........

    EdStevens wrote:
    Justin Mungal wrote:
    EdStevens wrote:
    user1970505 wrote: (the original sizing question, quoted above)
    I'd seriously reconsider hosting an Oracle db on Windows. Obviously there are many, many shops that do. And obviously it is often a case of the fact that they do not have (and choose not to acquire) expertise in Linux. But I've been in IT for 30+ years and have worked on IBM S-370 and its variants and descendants, Windows since v3, DEC VMS, IBM OS/2, Solaris, AIX, HP-UX, and Oracle Linux. The first Oracle database I ever created was on Windows 3.11, and at that point I had never seen *nix. Now I am in a position to state that Windows is the worst excuse for an operating system of any I have ever used. I am constantly amazed/amused by how often (at least once a month on schedule, plus unplanned times) our Windows SA has to send out a notice that he is rebooting his servers. I can't remember the last time we had to reboot a Linux server (I have 4 of them).
    Yes, I'm biased away from Windows, but that bias comes from experience. Hardly a day goes by that I don't see something that causes me to say to whoever is in earshot "have I told you how much I hate Windows?"
    I was going to refrain from commenting on that, as I assumed they're a Windows shop and aren't open to any other OS (but my assumption could be incorrect).
    I haven't been working in IT for as long as many of the folks around here, only about 10 years. I'm a former system admin who maintained both Linux and Windows servers, but my focus was on Windows. In the right hands, Windows can be rock solid. If a system admin has to reboot Windows servers often, he is most likely doing something wrong, or is rebooting for security updates. It's never as simple as "Windows sucks" or "Linux sucks"; it all depends on who's running the system (again, in my opinion).
    I have seen some Windows servers run uninterrupted for so long no one could remember the admin password. But more often memory leaks and the "weekly update" (replacing last week's bugs with this week's) are the culprit.
    Yes, it really is sad how often you have to reboot for updates if you want to keep your system current. Mind you, it's better to have the fixes than to not have them (maybe). I rebooted my servers about once every month at my old place... which is not that bad.
    With that said, in my experience, Oracle on Windows is a major pain. It takes me much longer to do anything. Once you get proficient with a CLI like the bash shell, the Windows GUI can't compare.
    Agreed. One of my many complaints about Windows is the poor excuse of a shell processor. I'm pretty proficient in command line scripting, but still cringe when I have to do it. Practically every line of code I write for a command script is accompanied by the remark "this is so lame compared to what I could do with a shell script". Same for vi vs. Notepad. But my real problem is the memory leaks and the registry. I'm fairly comfortable hacking certain areas of the registry, but the need to do so, and the arcane linkages between different areas of the registry and how they influence the 'process environment', remains a mystery to all but a tiny minority of admins. Compare that to *nix, where everything is well documented and "knowable".
    One (of many) anecdotal experiences, this with my personal Win7 laptop. One time it crashed and refused to reboot. A bit of a Google search turned up some arcane keystroke sequence to put it into some sort of recovery mode on bootup, similar to getting into the BIOS, but the keystroke sequence was much more complex... it may have involved standing on one foot while entering the sequence. Anyway, it entered a recovery process I've never seen before or since and repaired everything. My first thought was "hey, that was pretty cool." Then my second thought was "but only Windows would need such a facility."
    Bottom line? To paraphrase a famous Tom Hanks character, "My momma always said Windows was like a box of chocolates. You never know just what you'll get."
    Haha... I like that one. Yes, the registry is definitely horrible. It's amazing to me that a single point of failure was Microsoft's answer to INI files.
    I think Windows and *nix have their places. Server work definitely seems more productive to me in a *nix environment, but I think I'd jump off a cliff if I had to use it as my desktop environment day in, day out. The other problem is application lock-down; I can't blame the OS for that, but it's a reality... and using virtualization to run those applications seems to defeat the point to me.
