Apex: Ideal Architecture

We work for Qualcomm and use E-Business Suite 11i on Oracle 10g.
We are evaluating APEX, and I was hoping to get answers on what the best practice is from an architecture point of view.
Should we run a separate database just for APEX? We have an Oracle ADF 11g instance set up. Can APEX run inside this container?
We do not want to run mod_plsql on the EBS database, both from a security and a performance point of view.

Hi,
Which architecture is best depends on what you are trying to achieve.
You can run APEX in a separate database from EBS, and this has the advantage that you are not tied to the EBS database versions. However, you will have to retrieve all EBS data used in the APEX application over a database link, which is more difficult to do and will perform less well. You may also want to authenticate APEX users against EBS and run other EBS processes from APEX. Again, this is possible but more difficult over a database link. Creating a link into EBS also weakens the system's security.
You cannot really run APEX inside an ADF container.
I wouldn't rule out running APEX in the same database on either security or performance grounds. APEX security is good and can be hardened if that is a requirement. APEX processing can be managed within the database using resource groups or RAC nodes so that it does not impact EBS processing. This article (http://www.oracle.com/technology/products/database/application_express/pdf/apex_ebs_wp_cabot_consulting.pdf) has some information on the EBS/APEX infrastructure.
Rod West
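As a rough sketch of the database-link approach described above (the link name, credentials, and EBS object used here are illustrative, not a recommendation):

```sql
-- In the APEX database: a link pointing at the EBS database
CREATE DATABASE LINK ebs_link
  CONNECT TO apex_reader IDENTIFIED BY "change_me"
  USING 'EBSDB';

-- EBS data is then queried across the link, e.g. in an APEX report region
SELECT papf.full_name, papf.employee_number
  FROM per_all_people_f@ebs_link papf
 WHERE TRUNC(SYSDATE) BETWEEN papf.effective_start_date
                          AND papf.effective_end_date;
```

Every such remote query pays a network round trip, which is the performance cost Rod mentions, and the link account's privileges inside EBS are the security exposure.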

Similar Messages

  • Architecture Considerations with AD RMS

    Hi,
    I'm looking to implement AD RMS in an organization, and would like to find out more details on some architectures that I have come up with, and hopefully get some advice on which is better.
    Architecture 1: 2 Physical Servers for AD RMS and MSSQL
    Of course, we know that this is the most ideal architecture, but it is not cost-efficient for the organization, leading to the concepts of the following architectures.
    Architecture 2: 2 Virtual (VMWare) Servers for AD RMS and MSSQL
    What are the implications of using a virtual server for production?
    Architecture 3: 1 Physical Server for AD RMS and MSSQL
    I know it is possible to install AD RMS and MSSQL in a single server (regardless of bad practices), but I would like to know the implications and if it will cause any underlying or prospective problems.
    Architecture 4: 1 Virtual (VMWare) Server for AD RMS and MSSQL
    Most ideal in terms of cost, but what are the implications by doing so?
    Thanks in advance for any advice!

    Hi jeromeee,
    AD RMS is, in the end, a web service talking to AD and has no problem running on a virtual server. For SQL it might make sense to run it on a physical box, but only for really large environments (don't ask me where "really large" begins for RMS). So for the projects I did, I just used an existing SQL server/cluster provided by the client's SQL team. And you have to check performance as part of your operational tasks anyway, regardless of whether it is virtual or physical. You can then move the SQL database to another server, physical or virtual.
    If you just plan on one RMS server, SQL can be on the same machine. You could even add another RMS machine to the cluster for load balancing, but not for failover, unless you don't care about the RMS log files.
    Regards,
    Lutz
    Hi Lutz,
    Thanks for your reply and input! It makes sense that both AD RMS and SQL Server could run in a shared virtual environment (Architecture 4 as mentioned in my first post), and since it'll only be supporting around 600 users (at most), I feel that this setup is the more favorable at the moment.
    However, I don't quite understand what you meant by "..., but not for failover unless you don't care about the RMS log files." Does this mean that if I plan to scale up by adding an additional RMS machine to the cluster for load balancing in the future, there is no possibility of failover?
    Once again, thanks for your reply, Lutz!

  • How to access my Apex application as a website

    On my Windows 8.1 desktop machine, the stack is,
    Oracle Standard 11g R2
    Apex 4.2.6
    ORDS 2.0.10
    Open Source Glassfish 4.1
    Apache Tomcat 8.0.21
    Apache HTTP Server 2.4.3
    Port details
    Apex ORDS - 8080
    Glassfish - 8090
    Tomcat - 8888
    Apache HTTP server - 9090
    I have done port forwarding to myDesktop on my home wi-fi as follows
    Apex ORDS 8080
    Glassfish 8090
    Tomcat 8888
    Apache HTTP server 9090
    Following is c:\windows\system32\drivers\etc\hosts
    127.0.0.1       localhost
    ::1                 localhost
    100.100.1.1     myDesktop       ( This is fixed IP of the desktop at my home network. This is made up here )
    100.100.1.1     www.my-website.com       ( This is the fixed IP of the desktop at my home network. This is made up here )
    c:\Program Files\Apache Software Foundation\Apache2.4\conf\httpd.conf
    Listen 80
    c:\Program Files\Apache Software Foundation\Apache2.4\conf\extra\httpd-vhosts.conf
    <VirtualHost *:80>
        ServerName www.my-website.com
        ServerAlias my-website
        ErrorLog "logs/my-website.com-error.log"
        TransferLog "logs/my-website.com-access.log"
        Redirect / http://www.my-website.com/ords/f?p=100:LOGIN_DESKTOP
        <Location /ords/>
            ProxyPass               http://localhost:8888/ords/
            ProxyPassReverse        http://localhost:8888/ords/
        </Location>
        <Location /i/>
            ProxyPass               http://localhost:8888/i/
            ProxyPassReverse        http://localhost:8888/i/
        </Location>
    </VirtualHost>
    When I access 111.111.11.11:8888 from outside, I can access Tomcat page successfully.
    ( 111.111.11.11 is my static IP provided by my ISP ).
    Without internet, I can access the Apex Admin page on the same desktop machine via www.my-website.com/ords/f?p=4550. This indicates that the whole stack is working properly.
    But if I access www.my-website.com, over the internet, I only get blank page.
    (my domain is registered with GoDaddy and I forwarded to 111.111.11.11 which is my static IP)
    Why can't I access my Apex application over the internet by typing the website address?

    Hi 3ds/Niranjan,
         To all the forum members: this thread is related to "2 applications on one website".
         I have helped him to set up an architecture like this: Dimitri Gielis Blog (Oracle Application Express - APEX): Preparing architecture for APEX 5.0 upgrade
         The above architecture is set up properly on his home PC. He has obtained a domain from GoDaddy, and on the domain he has configured a redirect to his home PC.
         I have explained to him that this redirect approach will not work. For an APEX application to be accessible (hosted) on the internet, it can be hosted on an APEX hosting provider's domain. Refer: https://apex.oracle.com/pls/otn/f?p=24793:11::::::
         Alternatively, there is a cloud option available.
         Refer:
    hosted or cloud ? Apex...
    Re: Apex on Cloud?
         Forum members can provide more clarity on this issue.
         Hope this helps!
    Regards,
    Kiran
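    For what it's worth, two details in the setup above may be worth double-checking (this is a guess, not a confirmed diagnosis). GoDaddy domain forwarding issues an HTTP redirect to the IP rather than resolving DNS to it, so an A record for www.my-website.com pointing at the static IP is usually the more reliable route. Also, "Redirect /" in the vhost is prefix-matched, so it re-redirects the very /ords/ requests it produces; limiting the redirect to the bare site root avoids that loop:

    ```apache
    # Redirect only the site root; /ords/... still falls through to the proxy
    RedirectMatch ^/$ http://www.my-website.com/ords/f?p=100:LOGIN_DESKTOP
    ```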

  • Questions on the comparison between Oracle Forms and Oracle APEX

    Hi All,
    The link below presents information about Oracle Application Express for Oracle Forms Developers, the table at the end of the page shows a comparison between Oracle Forms and Oracle APEX, all the points of comparisons are clear for me except 3 points which are:
    •Locking, what is meant by locking models?
    •Database Connections, what is meant by Synchronous/Asynchronous connections in Oracle Forms and Oracle Apex?
    •Architecture, what is meant by 2-tier and 3-tier connections?
    http://www.oracle.com/technology/products/database/application_express/html/apex_for_forms.html
    What I need is a simple explanation for these points without deep details.
    Thanks

    Hi
    That is how I understand that document:
    Locking: Forms, by default, locks a row as soon as the user starts modifying the data. That is pessimistic locking. APEX, on the other hand (and optionally Forms as well), does not lock the record; instead, before applying any changes, it checks whether the data has changed since the user queried it (which, for some reason, is called optimistic "locking").
    DB connections: I am not sure why they used the terms synchronous/asynchronous, but the difference is that Forms, by default, keeps a permanent DB connection while the user is using the application, while APEX gets a connection from a connection pool every time a page is requested or submitted.
    Architecture: Forms (in its web version at least) has 3 tiers: the browser, the app server where the Forms service runs, and the database. As APEX runs inside the database, there are only 2 tiers: the browser and the database (though you may still need an HTTP server in between to serve static content; I don't think it is considered part of the application in this context). If you are talking about client/server Forms, then there are only 2 tiers.
    I hope this helps!
    Luis
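    Luis's optimistic-check description can be sketched in SQL (the table and column names here are made up; APEX actually compares a checksum of the originally fetched row rather than individual columns):

    ```sql
    -- Pessimistic (Forms default): lock the row as soon as editing begins
    SELECT sal INTO :old_sal FROM emp WHERE empno = :empno FOR UPDATE;

    -- Optimistic (APEX): no lock held; apply the change only if the row
    -- is still exactly as the user originally queried it
    UPDATE emp
       SET sal = :new_sal
     WHERE empno = :empno
       AND sal   = :old_sal;  -- 0 rows updated means someone changed it first
    ```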

  • Advantages of using Oracle with Unix over Windows server

    Hi there,
    I want some article/document which describes the advantages of using Oracle on Unix (say HP-UX or Solaris). The plan is to build some data warehousing applications using Cognos PowerPlay and ReportNet, and either Cognos DecisionStream or Oracle Warehouse Builder may be used as the ETL tool. For data mining we are planning to use SPSS Clementine. The data volume will be substantial. At present we are developing a prototype in a Windows 2003 Advanced environment. We are planning to use a RISC server and RAID-5. Please advise an ideal architecture for us; as you know, it's typically government-level application data (mostly archival data). The reports will be published using ReportNet, ad hoc query, etc., and OLAP analysis will be done using PowerPlay.
    Regards,
    Anupam Basu


  • An interview on OpenSSO's identity services

    Read a new interview article on SDN starring Aravindan Ranganathan, software architect: "From the Trenches at Sun Identity, Part 6: Identity Services for Securing Web Applications" at http://developers.sun.com/identity/reference/techart/identity-services.html. You'll learn the reasons why OpenSSO's identity services are an ideal architecture for protecting applications from unauthorized access, the related tasks, the benefits, and the plans for integrating identity services with the federation capability in OpenSSO.


  • Question on replication/high availability designs

    We're currently trying to work out a design for a high-availability system using Oracle 9i Release 2. Having gone through some of the Oracle whitepapers, it appears that the ideal architecture involves setting up 2 RAC sites using Dataguard to synchronize the data. However, due to time and financial constraints, we are only allowed to have 2 servers for hosting the databases, which are geographically separate from each other as protection against natural disasters. Our app servers will use JDBC pools to connect to the databases.
    Our goal is to have both databases be the mirror image of each other at any given time, and the database must be working 24/7. We do have a primary and a secondary distinction between the two, so if the primary fails, we would like the secondary database to take over the tasks as needed.
    The ability to query existing data is mission critical. The ability to write/update the database is less important, however we do need the secondary to be able to process data input/updates when primary is down for a prolonged period of time, and have the ability to synchronize back with the primary site when it is back up again.
    My question now is which replication technology should we try to implement? I've looked into both Oracle Advanced Replication and Dataguard, each seems to have its own advantages and drawbacks:
    Replication - can easily switch between the two databases using a multimaster implementation; however, data recovery/synchronization may be difficult in case of failure, and data may possibly be lost (depending on the implementation). There have been a few posts in this forum suggesting that replication should not really be considered an option for high availability. Why is that?
    Dataguard - zero data loss in failover/switchover; however, manual intervention is required to initiate a failover/switchover. Once the primary site fails over to the standby, the standby becomes the primary until the DBA manually switches the roles back. In Oracle 10g Release 2, automatic failover seems to be achieved through the use of an extra observer piece. There does not seem to be any way to do this in Oracle 9i Release 2.
    Being new to the implementation of high-availability systems, I am at somewhat of a loss at this point. Both implementations seem to be a possible candidate, but we will need to sacrifice some efforts for both of them also. Would anyone shine some light on this, maybe point out my misconceptions with Advanced Replication and Dataguard, and/or suggest a better architecture/technology to use? Any input is greatly appreciated, thanks in advance.
    Sincerely,
    Peter Tung

    Hi,
    It sounds as if you're talking about the DB_TXN_NOSYNC flag, rather than DB_NOSYNC.
    You mention that in general, you lose uncommitted transactions on system failure. I think what you mean is that you may lose some committed transactions on system failure. This is correct.
    It is also correct that if you use replication you can arrange to have clients have a copy of all committed transactions, so that if the master fails (and enough clients do not fail, of course) then the clients still have the transaction data, even when using DB_TXN_NOSYNC.
    This is a very common usage scenario for Berkeley DB replication/HA, used to achieve high throughput. You will want to pay attention to the configured ack policy, the group size setting, and the setting of the 2SITE_STRICT option (if the group size == 2).

  • ODBC, OCI, OCCI

    Hi,
    Could anyone tell me whether I can access an Oracle Spatial instance through Oracle 9i AS Wireless using ODBC, OCI, or OCCI? I want to interrogate the spatial database from a Compaq iPAQ running the Familiar distribution of embedded Linux.
    Cheers,
    Sean

    iAS is an application server, which pretty much indicates that you'll be writing Java. ODBC, OCI, and OCCI are C or C++ APIs. It's probably possible to set up a system that makes JNI calls in iAS to C or C++ libraries, but that hardly seems like the ideal architecture.
    If you want to write a Java application, JDBC is the Java API for database access. JDBC will certainly allow you to work with the Oracle Spatial datatypes.
    You can also use things like mod_plsql with iAS to do the vast majority of the middle-tier work in PL/SQL.
    If you want to use OCI or OCCI, you could certainly write a middle-tier layer there, throw some ASP pages on top, and expose that through IIS (since you mentioned ODBC as an option, I'm assuming you're a Windows shop).
    ODBC does not provide a particularly useful way to work with Oracle-specific data types like those in interMedia.
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com

  • Clustering clarification needed please ....

    I am working on WebCenter Content clustering.
    I have two hosts (host1 and host2):
    - Installed WebLogic and UCM on both hosts with the same directory structure.
    - Created a domain on host1, added a cluster, completed everything, and started UCM on host1; then I used the pack and unpack utilities to copy the domain config from host1 to host2.
    - Started UCM on both hosts; I can add a document using UCM on host1 and search for it from host2.
    Now, this is called active-active clustering, right?
    My manager needs the cluster to work as follows:
    If a user is working on UCM (host1) and something goes wrong with UCM on host1, clients should be able to continue their work using UCM on host2, without reloading the page.
    - Is this possible with active-active clustering? Which product controls this?
    - Another question, please: the cluster ware is on host1, so if anything goes wrong with host1, the whole cluster will be down, right? So what is the best way to handle this?
    - And the HTTP server: how does it help in clustering?
    Could some clustering expert explain these points to me, please? I am reading but not able to connect things together well!
    Thanks in advance.

    First, I will assume that your architecture is as follows:
    Host 1 - Running two weblogic server instances (AdminServer and UCM_Server1)
    Host 2 - Running one weblogic server instance (UCM_Server2)
    Now, when you open the WebLogic console, you should see that both UCM_Server1 and UCM_Server2 are part of the same cluster.
    Since you will have your UCM application deployed to the cluster (UCM_Server1 and UCM_Server2), it should be available and running on both the managed servers.
    And this is exactly what clustering means: the user will be able to continue their work on server2 when failed over from server1.
    In WebLogic we only have active-active clustering, and your requirement is the typical use case for a cluster.
    While the continuation of the work is controlled by the replication activity between the managed servers (UCM_Server1 and UCM_Server2) which occurs in the background, the actual failover needs to be performed by the WebServer.
    WebServer significance:
    Let's say we did not have a WebServer, in the browser when you provide a URL to access the application, it will look as below:
    http://UCM_Server1sIPorDNS:Port/ucm
    Now, what will happen if host1 is down or UCM_Server1 is down? Your browser will not know where to go, and the user will also realize that a backend server has failed, which we do not want to happen.
    So, the ideal architecture is
    Client (Browser) --> WebServer (OHS) --> WebLogicServer (UCM_Server1 OR UCM_Server2)
    In this case, the URL in the browser will be the WebServer URL.
    http://WebServerIPorDNS:Port/ucm
    Now, even when UCM_Server1 is down, client will never know as the webserver will automatically failover the request to UCM_Server2.
    Since UCM_Server1 and UCM_Server2 are in a cluster, the session information is available on UCM_Server2 for the request/transaction to continue from where it stopped processing on UCM_Server1.
    ONE MORE CLARIFICATION:
    If you understood the above clarification, you will realize that the cluster is virtual (not a physical process).
    So, even if host1 goes down, there is no issue, as UCM_Server2 (part of the cluster) will continue to serve the requests of the application.
    The WebServer will migrate/failover all the user requests that were being served by UCM_Server1.
    NOTE: It is recommended to run your webserver (if you are using a single webserver only) on a host other than the hosts running the UCM servers.
    Hope this helps to answer your queries.
    Please let me know if you have any further questions
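    The failover role of the WebServer described above is typically wired up with the WebLogic proxy plug-in in OHS; a minimal sketch (the hostnames, ports, and context root are assumptions for illustration):

    ```apache
    # mod_wl_ohs.conf on the OHS host
    <Location /cs>
        SetHandler weblogic-handler
        # The plug-in balances across the cluster members and retries the
        # other one when a managed server stops responding
        WebLogicCluster host1.example.com:16200,host2.example.com:16200
    </Location>
    ```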

  • OCI OCCI performance

    Hi everyone,
    Is there a difference between these two implementations?
    I've got to develop a C++ program against a database with more than 3 billion entries, so any tuning would be helpful, I guess :-)
    The biggest table would have 1.5 billion entries.
    Is OCCI only a wrapper, or does it have more features? Is there a book or documentation about performance for C++ applications with Oracle?
    All I've found about performance tuning is about tuning the database itself.
    thx
    marc


  • Where can I learn about RAC with APEX .... or APEX architecture?

    Hi All,
    We want to use RAC with 10g and run several APEX applications.
    RAC runs different instances of the same database, so I need to know whether there would be a problem with the session information.
    Thank you, BillC

    Bill,
    I believe that the reason you don't see much on this topic is because it is very architecturally simple. For me, at least, this is a good thing. You can contact me here:
    http://concept2completion.net/c2/f?p=9876:20
    if you want some details, but I think what you are looking for can be summed up fairly simply.
    First, let's assume no Webcache, because I think Webcache is very unusual for an Apex application.
    You don't need to cluster any of the mid-tier components. Just put a load balancer in front of your App Servers. You will get a slight improvement if you have the load balancer perform a sticky session (that is, it will route the same user to the same App Server for subsequent requests). The reason for this is that if the user is routed to the same App Server it is likely to get the same database session it had previously, and will likely be on the same RAC node.
    Configure your Apex DADs (typically one per App Server) to utilize as many RAC nodes as you like. Generally this would be all of your RAC nodes, but that is not necessary. Your DBA will probably know how to do this for the effect you want (balancing, failover, etc.).
    So, what happens during a request? This depends a little on whether you let the Apex engine handle session management. Generally you do, so we will assume that Apex is setting a cookie and handling the checks. My ordering might be slightly off... The "Apex Engine" refers to activities that Apex performs inside the database.
    1. The user requests a page.
    2. The App Server passes the request along with any cookie information, header info, etc. to mod_plsql, which has a session pool connected to the database instance(s).
    2a. I'm not 100% sure on this, but I believe that if the user has already used a session from that session pool, mod_plsql will try to reuse that same session which is already connected to a database instance. I think this will cause the user to be connected to the same instance that he was in previously (if this is not the first call).
    3. The Apex engine (inside the database) checks for the existence of a cookie and a session id that match a record in the session table.
    3a. If no match is found it will create a record and set the cookie and session id in your browser.
    4. The Apex engine sets up environment variables (nls_lang, apex_user, etc.) and accesses any user session info (in a session info table)
    5a. If a submit, the Apex engine processes the page and, via the App Server, returns a page moved response indicating what page for the browser to branch to (see 5b)
    5b. If a get, the Apex engine creates a page within the db and passes the generated html back through the App Server to the browser.
    As you can see, the App Server does not do much. It is just a conduit between the browser and the database.
    If you want to add Webcache to the mix, you have to tell Webcache to cache the pages generated by Apex. Webcache will generally cache based upon a URL, and generally does not cache posts (page submits) but only gets (links). As your links will have a session ID in them (and you will have a unique cookie value), Webcache will only provide a cached response to the user if the user has already visited that page within that session (with precisely the same URL). This can be great if you have a page that does not change much, but it can be a nightmare if either the data in the database changes (and you want the change reflected on the page), or if you have some other process that should cause the page to change (e.g. if you set some search criteria in a process, not in the URL, and the report where-clause should change). In either case, if Webcache responds, it never calls the db to regenerate the page; you just get the exact same data on the page. The way around this is to have the database issue an invalidation to Webcache whenever either of these cases occurs. As you can see, this can be a lot of work. It's a lot just to type up the description of it! The only cases where it makes a lot of sense are when you have a lot of content that does not change on a few pages, or really big content (e.g. documents, images) that you will download more than once in a session.
    I hope this helps. Drop me a note if you want more details.
    Anton
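    The DAD configuration Anton refers to (one DAD per App Server, spanning the RAC nodes) can be sketched in mod_plsql's dads.conf; the alias and service names below are hypothetical:

    ```apache
    <Location /pls/apex>
        SetHandler                  pls_handler
        PlsqlDatabaseUsername       APEX_PUBLIC_USER
        PlsqlDatabasePassword       @obfuscated_password
        # APEXRAC is a tnsnames.ora alias whose ADDRESS_LIST names
        # every RAC node you want this App Server's session pool to use
        PlsqlDatabaseConnectString  APEXRAC NetServiceNameFormat
        PlsqlDefaultPage            apex
    </Location>
    ```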

  • Architecture for APEX using iAS 10.1.2 and 10g with Dataguard replication

    We are currently using APEX in a stand-alone architecture with iAS 10.1.2 and 10g as the DB (all on the same server). Our APEX apps are fast becoming a significant part of our production environment, and I would like to set up a redundant failover site using Dataguard replication. I am very familiar with the DB failover, but I am struggling with how to set up iAS on the standby server.
    Oracle guidelines for iAS failover using Dataguard (this is not AS Dataguard) specify that the primary and standby servers' local hostnames must be identical for the failover architecture to work, unless you use a virtual hostname. Due to restrictions in our environment, I am not able to set the local hostname the same on the primary and standby servers. As an alternative, I installed iAS (10.1.4) using a virtual hostname on my primary server, but it appears that the server name that shows up in all the installation references afterwards is still the actual server name, not the virtual name.
    I am looking for guidance from someone who has successfully set up an iAS failover architecture using Dataguard on servers that do not have the same local hostname.
    thanks

    Hi,
    I have the same problem. Did you find anything about putting JSF on OAS 10.1.2?
    Regards

  • Apex architecture

    A client needs data accessed and pulled from multiple MS Access databases, and then wants Apex apps built on it. The suggestion was to write a web service, pull the data from the MS Access DBs into SQL Server, and build the app from there.
    Any thoughts as far as how to approach this?
    Thank you
    Saki
    Edited by: user10690319 on Feb 11, 2009 12:45 PM

    Apex NEEDS an Oracle database to work in, so having data in an Access or SQL Server database is a wasted effort. Why not move the data out of those two clunkers and into a REAL database like Oracle?
    Thank you,
    Tony Miller
    Webster, TX

  • Disconnected / Distributed Architecture

    I'm working with a customer that wants to implement a disconnected / distributed architecture. They have groups of inspectors that travel around the world, including 3rd world countries with no possibility of cellular data access. They have a pool of about 70 laptops that they use. Each person in the group of inspectors will have their own laptop and use it to do an on site evaluation. Currently they then export these as files and the existing software brings these files together in a report. The problem is it doesn't really work and the company that built it went out of business.
    Whenever I hear someone wanting to do disconnected mobile I strongly advise against it, then run the other way. It's a very challenging problem to sync either the data or the app, let alone both. Someone else proposed APEX, and as much as I love APEX, it just doesn't feel like the right technology. If skill-set weren't an issue, I'd probably build it in Adobe AIR with a local SQLite database that syncs up to the Oracle "mother ship". At least AIR has the concept of automatically updating the application whenever it connects built into its core architecture.
    I have a few thoughts on syncing such as:
    - Roll your own data sync with PL/SQL over DB links. Should be less than 20 tables.
    - Advanced Replication
    - GoldenGate (if price is not an issue). Could possibly sync the APEX_ tables to sync up the app too
    - Write a VB app that checks for application updates when they are connected, then downloads the APEX app and installs it
    So, my questions are:
    - Has anyone here done something similar in APEX? If so, can you discuss the details including data and application updates?
    - Has anyone done something similar with another technology stack?
    Tyler Muth
    http://tylermuth.wordpress.com
    "Applied Oracle Security: Developing Secure Database and Middleware Environments": http://sn.im/aos.book
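    For what it's worth, the "roll your own data sync with PL/SQL over DB links" option might look roughly like this per table (the table, column, and link names are all made up for illustration):

    ```sql
    -- On the laptop: pull rows changed on the central site since the last sync
    MERGE INTO inspections l
    USING (SELECT inspection_id, status, notes, updated_at
             FROM inspections@mothership_link
            WHERE updated_at > :last_sync_time) r
       ON (l.inspection_id = r.inspection_id)
     WHEN MATCHED THEN
       UPDATE SET l.status = r.status, l.notes = r.notes,
                  l.updated_at = r.updated_at
     WHEN NOT MATCHED THEN
       INSERT (inspection_id, status, notes, updated_at)
       VALUES (r.inspection_id, r.status, r.notes, r.updated_at);
    ```

    A matching MERGE in the other direction pushes local changes up, though conflict handling (the same row changed on both sides) still has to be decided per table.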

    Hi Tyler,
    I share your first thoughts - run the other way. ;)
    I haven't done something like that with APEX, but I guess using DB links is one option, though not the ideal one. Treating an application like data is possible, though the deployment could get a little tricky if you want it to happen automatically.
    - Of course, you could store the application archive as BLOB or even CLOB column in a table along with other meta data (e.g. build number, timestamp, etc.).
    - You could build an application that acts as update manager that stores information (build numbers, timestamps etc.) for locally installed applications and data, and looks up possible updates - for both data and applications.
    - I would try to do the data transfer with web services rather than using data base links. APEX-integration is getting better with every new version and it sounds more handy to me, especially error handling etc.
    That leaves you with the question of how to deploy your application. Since the archive itself is SQL, you could download it using a select statement and simply execute it. There is a nice API for this, but I don't know if you can use it from within APEX easily. As for images and other items: since you probably use a scenario with the EPG as HTTP server for APEX, it should not be too hard to retrieve the BLOB data and load it into XDB.
    It gets a lot easier if the inspectors do the deployment part themselves. Since this is not too complicated in APEX, it might be just the update manager that offers them the appropriate downloads.
    I'd be interested to know how you actually realized this architecture, if you ever get it running before it gets you to run. ;)
    -Udo

  • Import of Apex Application to another instance takes a long time

    Hi,
    I have developed an APEX application which is about 24 MB in size. Importing the application into another instance takes a long time (a few hours). The imported application is working fine so far.
    What should the ideal time be for importing an application of this size?
    Is this usual, due to the size of the app, or is something wrong with the app?
    Can anyone please throw some light on this, as I am unable to figure out whether it is an application issue or a DB issue.
    Thanks in advance!

    The application is getting imported and running, so you do not really have a problem at hand.
    24 MB is big, but not so big that the import should run for hours.
    The time taken depends on the resources available on the server you are importing into. If the load on that server is high, then the resources will be shared between all the processes running on it, and the import will take longer.
    Monitor disk and CPU usage; it will give you some clues. Try importing at an off-peak time; it should run faster if there is a server resource issue.
    Regards,
