Oracle SLA Metrics and System Level Metrics Best Practices

I hope this is the right forum...
Hey everyone,
This is what I am looking for: we have several SLAs set up, we have defined many business metrics, and we are trying to map them to system-level metrics. One key area for us is Oracle. I was wondering if there is a best practice guide out there for SLAs when dealing with Oracle or, even better, a guide to system-level metric best practices.
Any help would be much appreciated.

Similar Messages

  • How to enable BPEL loggers at domain and system level?

    As far as I know there are two kinds of BPEL loggers:
    - domain level, and
    - system level.
    Where exactly can I enable/disable them and set them to e.g. DEBUG mode?
    Peter

    Apart from the posts mentioned above, please note that log4j-config.xml is the file that contains these logging entries.
    For domain level: SOA_ORACLE_HOME\bpel\domains\default\config\log4j-config.xml
    For system level: SOA_ORACLE_HOME\bpel\system\config\log4j-config.xml
    Set the loggers at domain level or system level depending on the information you are interested in, and enable only that particular logger.
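    For example, to raise one logger to DEBUG, edit the matching <category> entry in the appropriate log4j-config.xml and restart the BPEL server. A minimal sketch (the category name below is illustrative; use the names already present in your file):
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <!-- illustrative logger; real files contain many category entries -->
        <category name="collaxa.cube.engine">
            <priority value="DEBUG"/>
        </category>
    </log4j:configuration>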

  • When I share a file to YouTube, where does the output file live? I also want to make a DVD. And is this a best practice, or is there a better way?

    I also want to make a DVD, but can't see where the .mov files are.
    And is this a best practice, or is there a better way to do this, such as with a master file?
    thanks,
    /john

    I would export to a file saved on your drive as H.264, same frame size, then upload that to YouTube.
    I have never used FCP X to make a DVD, but I assume that it will build the needed VOB MPEG-2 source material for the disc.
    I used to use Toast and iDVD. Toast is great.
    To "see" the files created by FCP 10.1.1 for YouTube, right-click (Control-click) the Library icon in your Movies folder, choose Show Package Contents, and open "project"/share.

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main points we'd like to do ( without using a backup agent within each VM ):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single-file restore is probably the most difficult/manual part. The rest can be done manually from the .snapshot directories, but when we're talking about hundreds and hundreds of guests within OVM... this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from Oracle support on this.
    In short:
    - First, "manual" copies of files into the repository are neither recommended nor supported.
    - Second, we have to go back and forth through templates and an HTTP (or FTP) server.
    Note that when creating a template, or creating a new VM from a template, we're talking about full copies; no "fast clones" (snapshots) are involved.
    This is ridiculous.
    How to back up a VM:
    1) Create a template from the OVM Manager console.
    Note: Creating a template requires the VM to be stopped (copying the virtual disk while the VM is running would corrupt the data), and the process of creating the template makes changes to vm.cfg.
    2) Enable Storage Repository Back Ups using the steps described here:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    3) Mount the NFS export created above on another server.
    4) Then create a compressed file (.tgz) from the relevant files (vm.cfg + .img) on the repository NFS mount.
    Here is an example listing for such a template archive:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
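    Creating such an archive from the mounted repository could look like the following sketch (the mount point and backup path are illustrative):
    $ cd /mnt/repo_backup
    $ tar czf /backups/OVM_EL5U2_X86_64_PVHVM_4GB.tgz OVM_EL5U2_X86_64_PVHVM_4GB/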
    How to restore a VM:
    1) Upload the compressed file (.tgz) to an HTTP, HTTPS, or FTP server.
    2) Import it into OVM Manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the virtual machine from the imported template using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • RAID Level Configuration Best Practices

    Hi Guys ,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on the RAID configuration for SQL data, log, tempdb, and backup files.
    Files --> RAID level
    SQL data files -->
    SQL log files -->
    Tempdb data -->
    Tempdb log -->
    Backup files -->
    Any other configuration best practices are more than welcome, such as memory settings at the OS level and LUN settings.
    Also, best practices for configuring SQL Server in Hyper-V with clustering.
    Thank you
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi,
    If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping the database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage; it is always good to use a dedicated drive for tempdb.
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the below counters, taken from this link:
    SQLServer:Buffer Manager--Buffer cache hit ratio (BCHR): If your BCHR is in the high 90s to 100, it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, the BCHR might momentarily drop to 60 or 70 or even less, but that does not mean memory pressure; it means the query requires a lot of memory and will take it. After that query completes, you will see the BCHR rising again.
    SQLServer:Buffer Manager--Page life expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It is a common misconception to take 300 as a baseline for PLE, but it is not: I read in Jonathan Kehayias' book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one would see was 4-6 GB. Now, with 200 GB of RAM in the picture, that value is no longer correct. He also gave a (tentative) formula for calculating it: take the base counter value of 300 presented by most resources, and then determine a multiple of this value based on the configured buffer cache size (the 'max server memory' sp_configure option in SQL Server) divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has served me well, so I recommend you use it.
    SQLServer:Buffer Manager--Checkpoint pages/sec: This counter is important for detecting memory pressure, because if the buffer cache is small, lots of new pages need to be brought into and flushed out of the buffer pool; under load, the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to grow it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; if you are looking at a line graph in perfmon, it should stay near the baseline on a stable system.
    SQLServer:Buffer Manager--Free pages: This value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: If you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants, please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: the amount of memory SQL Server has currently acquired.
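    If you would rather check these from inside SQL Server than in perfmon, the same counters are exposed through the sys.dm_os_performance_counters DMV. A minimal sketch (the object_name prefix differs on named instances):
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Buffer cache hit ratio', 'Page life expectancy',
                           'Checkpoint pages/sec', 'Memory Grants Pending',
                           'Target Server Memory (KB)', 'Total Server Memory (KB)');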
    For the other settings I would suggest you talk to your vendor; storage questions, IMO, should be directed to the vendor.
    The below would surely be a good read:
    SAN storage best practices for SQL Server
    SQLCAT best practices for SQL Server storage
    Please mark this reply as answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it.

  • Needed: system requirements, recommendations & best practices

    I realize that this is just a beta release, but I think it'd be nice if the system requirements for this VS.NET add-in were clearly defined in a single post/page.
    For example, Christian Shay's reply to "Oracle Release Requirements" by [170516] indicates that "For the beta release you should be able to use Oracle Database version 8.1.7 or later." However, Christian's reply in the "Oracle DB 8.1.7 connection problems" thread started by [151631] indicates that "you'd need to upgrade to at least 8.1.7.4.1 for it to work."
    Which is it? I'm using Oracle8i Enterprise Edition Release 8.1.7.2.0, and I don't want to install this add-in if it's not going to work with my current environment. I realize I can download 10g, but my employer is using 8.1.7.2 and that's my target DB for now.
    In browsing this forum, I've read a number of posts with suggestions (such as installing the version 10 client in a new Oracle home), and I think it'd be nice to see a basic set of requirements/recommendations/best practices in a single place for all to read.

    Hi,
    Can you also include the following in the FAQ?
    1) If ODP.NET was installed prior to this beta version, what is the best practice? Uninstall it before installing this, etc.?
    2) As multiple Oracle homes have become the norm these days, and this is a client-only install, it should probably be non-intrusive and non-invasive. I hope that is being addressed.
    3) Is this a precursor to future developments, like some of the app servers evolving to support .NET natively, and so on?
    4) Where is BPEL in this scheme of things? Is that being added to this as well, so that Eclipse and .NET VS 2003 developers can use a common web service framework?
    Regards,
    Sundar
    It was interesting to see the options for changing the spelling of "Webservice" [the first one was WEBSTER].

  • Bring CRM to clients and partners, Architecture DMZ best practices

    Hi, we need to provide access to CRM from the internet for clients and partners,
    so we need to know the best practices for the architecture design.
    We have many doubts about these aspects:
    - We will use SAP Portal, SAP gateways, and web dispatchers with a DMZ:
           do you have examples of this kind of architecture?
    - The new users will be added in 3 steps (1,000, 10,000, and 50,000):
           how can we regulate the stress on the internal system? Is that possible?
    - The system can't show any problems to the clients:
           we need a 24x7 system, because the clients are big clients.
    - At the moment we have 1,000 internal users.
    thanks

    I use the Panel Close? filter event, discard it, and use the event to signal to my other loops/modules that my software should shut down. I normally do this either via user events or, if I'm using a queued state machine (which I generally do for each of my modules), by enqueuing a 'shutdown' message, on which each VI closes its references (e.g. hardware/file) and stops its loop.
    If it's just a simple VI, I can sometimes be lazy and use local variables to tell a simple loop to exit.
    Finally, once all of the modules have finished, use the FP.Close method to close the top-level VI, and the application should leave memory (once everything else has finished running).
    This *seems* to be the most recommended way of doing things, but I'm sure others will pipe up with other suggestions!
    The main thing is discarding the Panel Close event and using it to signal the rest of your application to shut down. You can keep your global for 'stopping' the other loops (just write a True to it inside the Panel Close? event), but a better method is to use some sort of communications mechanism (queue/event) to tell the rest of your application to shut down.
    Certified LabVIEW Architect, Certified TestStand Developer
    NI Days (and A&DF): 2010, 2011, 2013, 2014
    NI Week: 2012, 2014
    Knowledgeable in all things Giant Tetris and WebSockets

  • External System Authentication Credentials Best practice

    We are in the process of a 5.0 upgrade.
    We are using NTLM as our authentication source to get the users and the groups and to authenticate against the source, so currently we only have the NT user ID and group info (the NT domain password is not stored).
    We need to get user credentials for other systems/applications so that we can pass them to the specific applications when we search/crawl or integrate with those apps/systems.
    We were thinking of getting the credentials (app user ID and password) for the other applications by developing a custom Profile Web Service to gather the information specific to these users. However, we don't know whether the external application password is kept secure when retrieved from the external repository via a PWS and stored in the portal database.
    Is this the best approach for gathering the above information? If not, please recommend the best practice to follow.
    Alternatively, we could have the users enter the external system credentials by editing their user profiles; however, this approach is not preferred.
    If we can't store the user credentials for the external apps, we won't be able to enhance the user experience when doing a search or click-through to the other applications.
    Any insight would be appreciated.
    Thanks.
    Vanita

    Hi Vanita,
    Your solution sounds fine; however, it might be easier to use an SSO token or the Plumtree user ID in your external applications as a definitive authentication token.
    For example, if you have some external application that requires a username and password, and you are in a portlet view of that application, the application should be able to take the user ID Plumtree sends it and authenticate that it is the correct user. You should limit this sort of password bypass to traffic being gatewayed by the portal (i.e., coming from the portal server only).
    If you want to write a Profile Web Service, the data that gets stored in the Plumtree database is exactly what the Profile Web Service sends it as the value for a particular attribute. For example, if your PWS tells Plumtree that the APP1UserName and APP1Password for user My Domain\Akash are "Akash" and "password", then that is what we save. If your PWS encrypts the password with some two-way encryption beforehand, then that is what we save. These properties are simply attached to the user and can be sent to different portlets.
    Hope this helps,
    -aki-

  • UDDI and deployed Web Services Best Practice

    Which would be considered a best practice?
    1. To run the UDDI Registry in its own OC4J container, with the web services deployed in another container, or
    2. To run the UDDI Registry in the same OC4J container as the deployed web services?

    The reason you don't see your services in the drop-down is that CE does lazy initialization of EJB components (which gives you a faster startup time for the server itself), but your services are still available to you; you do not need to redeploy each time you start the server. One thing you could do is create a logical destination (in NWA) for each service and use the "search by logical destination" button. You should always see your logical names in that drop-down and can use them to invoke your services. Hope it helps.
    Rao

  • What is the Account and Contact workflow or best practice?

    I'm just learning to use the web services. I have written something to upload my customers into accounts using the web services, and I now need to include a contact for each account. I'm trying to understand the workflow: it looks like I need to first call the web service to create the account, then call a separate web service to create the contact, including the account's ID with the contact so that they are linked. Is this correct?
    Is there a place I can go to find the "best practices" for workflows?
    Can I automatically create the contact within my call to create the account in the web service?
    Thanks,

    Probably a poor choice of words, sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using Web Services (WS) 1.0. I insert an account; then, in a separate WS call, I insert my contacts for the account, including the AccountID and a user-defined key from the account when creating each contact.
    When I look at my contact on the CRMOD web page, it shows the appropriate links back to the account. But when I look at my account on the CRMOD web page, it shows no contacts.
    So when I say workflow or best practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps: as in, this is how you insert an account with its contact(s) so that the appropriate IDs are updated and everything shows up properly on the CRMOD web pages.
    Based on the above, it looks like the next step is to take the ContactID and update the account with it so that there is a bidirectional link.
    I'm thinking there is a better way of doing this.
    Here is my pseudocode:
    NewAcctRec = AccountInsert()
    NewContRec = ContactInsert(NewAcctRec)
    AccountUpdate(NewContRec)
    Thanks,

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I’m trying to monitor a SOA implementation on Grid Control.
    Are there any best practices for this?
    Thanks,     
    Nisti

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling things like oradba mentioned above, that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different teams (cron, DB Control, dbms_job, Grid Control).
    Just remember there will be additional resource usage on the database/server with both running, and the Grid Control repository cannot be in the same database as the DB Console repository.

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best practice documents illustrating the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (including the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Oracle 9i Client and System.Data.OracleClient Problems

    I have an application server with an ASP.NET web page that queries an Oracle database on another machine through ODP.NET.
    The app server's OS is Microsoft Windows XP SP2 with IIS and the .NET 2.0 Framework installed and configured correctly.
    The following Oracle software is installed (from the OUI inventory listing):
    Oracle Services For Microsoft Transaction Server 9.2.0.7.0
    Oracle ODBC Driver 9.2.0.7.0
    Oracle Provider for OLE DB 9.2.0.7.0
    Oracle Objects for OLE 9.2.0.7.0
    Oracle Data Provider for .NET 9.2.0.7.0
    Oracle 9i Client 9.2.0.1.0
    Sun JDK 1.3.1.0.1a
    When the ASP.NET page attempts to connect to the Oracle instance on the remote machine via the data provider, I get the following message:
    System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.Exception: System.Data.OracleClient requires Oracle client software version 8.1.7 or greater.
    Source Error:
    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    <<snipped>>
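    For reference, the kind of call that trips this version check is just an ordinary provider connection. A minimal sketch, with placeholder connection details rather than the actual page code:
    using System;
    using System.Data.OracleClient; // the Microsoft provider named in the error

    class ConnectTest
    {
        static void Main()
        {
            // Placeholder TNS alias and credentials; Open() is where the
            // "requires Oracle client software version 8.1.7" check fires.
            using (OracleConnection conn = new OracleConnection(
                "Data Source=REMOTEDB;User Id=scott;Password=tiger;"))
            {
                conn.Open();
                Console.WriteLine("Connected: " + conn.ServerVersion);
            }
        }
    }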
    I know the connectivity between the client on the app server (SQL*Plus Release 9.2.0.1.0)
    and the remote database (Oracle Database 10g Enterprise Edition Release 10.2.0.3.0)
    is working, since SQL*Plus sessions connect and the Net Manager tests through Local Service Naming succeed.
    I've tried the "ORACLE_HOME permission changes" solution for Authenticated Users as described here:
    http://jasondotnet.spaces.live.com/blog/cns!BD40DBF53845E64F!122.entry
    but the error persists.
    In the list of installed Oracle products above, I installed the first five (ODP.NET) before the last two (Oracle 9i Client). Would this matter? At first I thought perhaps the ODP.NET component didn't "know" about the client, since it was installed before there was one on that app server.
    Feedback much appreciated.

    Well, I figured it out. I guess...
    1) Uninstalled all Oracle Software Products
    2) Rebooted the machine.
    3) Manually removed C:\oracle\ora92\.
    4) Installed and configured the following
    * Oracle 9i Client 9.2.0.1.0
    5) Installed and configured the following
    * Oracle Services for Microsoft Transaction Server 9.2.0.7.0
    * Oracle ODBC Driver 9.2.0.7.0
    * Oracle Provider for OLE DB 9.2.0.7.0
    * Oracle Objects for OLE 9.2.0.7.0
    * Oracle Data Provider for .NET 9.2.0.7.0
    6) Gave the following accounts Full Control over the ORACLE_HOME directory:
    * ASP.NET Machine Account
    * Internet Guest Account
    * Launch IIS Process Account
    7) Ran iisreset
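    For reference, granting those rights from the command line on Windows XP could look like the following sketch (the home path is from this install; the IUSR_/IWAM_ account names vary per machine):
    C:\> cacls C:\oracle\ora92 /E /T /G ASPNET:F
    C:\> cacls C:\oracle\ora92 /E /T /G IUSR_MACHINENAME:F
    C:\> cacls C:\oracle\ora92 /E /T /G IWAM_MACHINENAME:F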
    Conclusion....
    The order in which Oracle components are installed does matter.

  • Oracle XA driver and isolation level

    We have an Entity EJB that has isolation level specified in its
    deployment descriptor and everything works fine if we use the non-XA
    Oracle 9i driver.
    However when we use the Oracle 9i Release 2 XA driver we get the
    following exception:
    java.sql.SQLException: Due to vendor limitations, setting transaction isolation for "Oracle 8.1.7 XA" JDBC XA driver is not supported.
    at weblogic.jdbc.jta.DataSource.setTxIsolationFromTxProp(DataSource.java:1126)
    at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1109)
    at weblogic.jdbc.jta.Connection.getXAConn(Connection.java:145)
    at weblogic.jdbc.jta.Connection.getAutoCommit(Connection.java:247)
    at weblogic.jdbc.rmi.internal.ConnectionImpl.getAutoCommit(ConnectionImpl.java:173)
    at weblogic.jdbc.rmi.SerialConnection.getAutoCommit(SerialConnection.java:164)
    Note the 8.1.7 version that the container prints. The driver is
    definitely 9i, not 8.1.7, and it's the very first thing in the classpath.
    Is this really a problem with the driver, or is it a WLS issue?
    Thanks,
    Dejan

    Hi,
    I removed the transaction isolation level setting from the deployment
    descriptor and it now works as expected.
    But in the meanwhile I also ran a test using the driver directly,
    without WebLogic, and I was able to successfully set the transaction
    isolation level on the XA connections, so I believe it's a WebLogic problem.
    Dejan
    Deyan D. Bektchiev wrote:
    Yes,
    we know about this, but we just try to set it to the default one
    (READ_COMMITTED), which is probably redundant for Oracle but might not
    be the default for another DB vendor.
    Here is the relevant part of the deployment descriptor:
    <transaction-isolation>
        <isolation-level>TRANSACTION_READ_COMMITTED</isolation-level>
        <method>
            <ejb-name>Event</ejb-name>
            <method-name>*</method-name>
        </method>
    </transaction-isolation>
    Dejan
    Slava Imeshev wrote:
    Deyan,
    I'm not 100% sure, but AFAIR Oracle doesn't support all isolation
    levels. Which level do you set? Could you show us that part of the
    deployment descriptor?
    Slava
    "Deyan D. Bektchiev" <[email protected]> wrote in message
    news:[email protected]...
    Thanks for the reply, Sree.
    Yes, the 9i driver is the very first thing in the classpath; otherwise we
    wouldn't even be able to connect to the 9i database, as with the 8.1.7
    driver we were getting lots of other exceptions.
    I'll try removing the isolation level setting and I'll post the
    result to the newsgroup.
    Dejan
    Sree Bodapati wrote:
    Hi Deyan,
    java.sql.SQLException: Due to vendor limitations, setting transaction isolation for "Oracle 8.1.7 XA" JDBC XA driver is not supported.
    should not have shown up with the wrong version of Oracle. If you can verify
    that the thin driver is the first thing in the classpath and you are indeed
    using the thin driver, this is a bug.
    But in this case it looks like you need to remove the TransactionIsolation
    level from the EJB descriptor to get this to work. Can you try that? I will
    forward this to one of our EJB engineers and see if we can get some help
    for you.
    sree
    "Deyan D. Bektchiev" <[email protected]> wrote in message
    news:[email protected]...
    Sree,
    We already did, but the only answer we got was that the Oracle 9i Release
    2 driver was not supported at all by WebLogic 7.0 SP1.
    But the certification page says that it is certified...
    So which one is true: is the Oracle 9i Release 2 driver supported or not?
    --dejan
    Sree Bodapati wrote:
    Please file a support case for this at [email protected]
    "Deyan D. Bektchiev" <[email protected]> wrote in message
    news:[email protected]...
    We have an Entity EJB that has the isolation level specified in its
    deployment descriptor, and everything works fine if we use the non-XA
    Oracle 9i driver.
    However, when we use the Oracle 9i Release 2 XA driver we get the
    following exception:
    java.sql.SQLException: Due to vendor limitations, setting transaction isolation for "Oracle 8.1.7 XA" JDBC XA driver is not supported.
    at weblogic.jdbc.jta.DataSource.setTxIsolationFromTxProp(DataSource.java:1126)
    at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1109)
    at weblogic.jdbc.jta.Connection.getXAConn(Connection.java:145)
    at weblogic.jdbc.jta.Connection.getAutoCommit(Connection.java:247)
    at weblogic.jdbc.rmi.internal.ConnectionImpl.getAutoCommit(ConnectionImpl.java:173)
    at weblogic.jdbc.rmi.SerialConnection.getAutoCommit(SerialConnection.java:164)
    Note the 8.1.7 version that the container prints. The driver is
    definitely 9i, not 8.1.7, and it's the very first thing in the
    classpath.
    Is this really a problem with the driver, or is it a WLS issue?
    Thanks,
    Dejan

  • Oracle report installation and system flow

    Please help me with some input on this Oracle development. The 12 reports won't integrate with the existing system because it was not built by us, and I also want to know how the Oracle Reports installation will be done. Please update my diagram scenario of the proposed solution below.
    Problems faced in the existing system:
    ================================
    • 12 critical finance reports are not generated according to user requirements.
    • Most of the basic reporting features are not implemented in the current system, which is Oracle Apps 11i E-Business Suite.
    • It takes a lot of time to prepare the reports that need to be presented at management meetings.
    Proposed system
    ===============
    Expectations
    • The finance dept plans to redevelop 12 new critical reports for their system.
    • The new implementation must be compatible with the future migration of the current Oracle Apps 11i E-Business Suite to Oracle Apps version 12.
    Solution
    The new development will be built using Oracle Reports and won't be integrated with the existing application. Oracle Reports does not require any specific installation on the client PC.
    Advantages
    • If run from a desktop, users may find it easier to launch the reports, as they may not need to log in to Apps.
    • When a report is displayed, the user can simply generate it in several formats such as PDF, XLS, HTML, RTF, and XML.
    • Oracle Reports is reliable: you can serve thousands of concurrent users in a secure and reliable manner.
    • Oracle Reports has the ability to apply changes to a report based on an XML customization file, either at runtime or in batch.
    Proposed system flow for the Oracle Reports development
    =========================================
    12 reports to be installed on the Red Hat AS3 system <-------------- Red Hat AS3 (Oracle E-Business Suite 11i (Apps) and Oracle Database 9.2.0.5) -----> Oracle Database

    There are forums specific to E-Business Suite. Please delete this post and post your inquiry there.
