Architecture Advice - XMLTYPE, VPD, PL/SQL

Hi, apologies if this is off-topic but wasn't sure which forum to pick for general architecture questions...
I've inherited an existing application architecture that we're struggling to scale beyond about 100 concurrent users on small-scale hardware, and would appreciate any advice.
It is a 3-tier web application, using XMLTYPE (CLOB storage) and VPD, with the vast majority of the business logic and workflow coded in PL/SQL.
We're finding that the CLOB-based XMLTYPE columns very quickly eat up all available CPU, and our very complex VPD policies tend to slow queries down by roughly 300% once a table grows beyond a few thousand rows. Since the data is updated directly in PL/SQL, our middle tier (Java) can't cache anything.
The expectation is that this architecture should scale to thousands of concurrent users, with reasonably large data volumes, on relatively small-scale (e.g. 2- or 4-CPU) hardware. Our market cannot really afford, or does not have the expertise to use, additional Oracle products such as Grid, Partitioning, RAC etc.
I'm not sure where to start - we've tried tuning individual queries, but I think we'll need more than that. We're using the latest release of the 10g RDBMS.

I've used a temporary table the way you propose, and it works pretty well. I override a method of my View Object (executeQuery, I think) to call some PL/SQL that loads the temporary table before the query executes. Performance is not quite as good, but it isn't a show-stopper. My particular application used a read-only VO with no underlying Entity Object, but I think you could override an EO method to call PL/SQL that uses the temporary table's data to update the real tables.
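For illustration, here is a minimal sketch of that kind of override, assuming an ADF BC ViewObjectImpl subclass; the PL/SQL procedure name load_temp_table is hypothetical:

    import java.sql.CallableStatement;
    import java.sql.SQLException;
    import oracle.jbo.server.DBTransaction;
    import oracle.jbo.server.ViewObjectImpl;

    // Sketch only: populate a global temporary table via PL/SQL
    // before the VO's query runs. "load_temp_table" is a
    // hypothetical procedure name; adapt to your own objects.
    public class TempTableViewObjectImpl extends ViewObjectImpl {
        protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) {
            DBTransaction txn = getDBTransaction();
            CallableStatement stmt = null;
            try {
                stmt = txn.createCallableStatement("begin load_temp_table; end;", 0);
                stmt.execute();
            } catch (SQLException e) {
                throw new RuntimeException("Temp table load failed", e);
            } finally {
                try { if (stmt != null) stmt.close(); } catch (SQLException e) { /* ignore */ }
            }
            super.executeQueryForCollection(qc, params, noUserParams);
        }
    }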
I've looked at the original version of Avrom Roy-Faderman's framework extension, but I know he has substantially revised it since I looked. It is a cool use of the extensible nature of the ADF BC framework. Avrom has placed the framework on samplecode.oracle.com, and someone else now leads the project.
This should work well for PL/SQL APIs whose procedures take only database-native types as parameters. But it has the same problems I mentioned before with PL/SQL records and collections, unless you first turn them into database object types with CREATE TYPE.
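To make the CREATE TYPE route concrete, here is a hedged sketch of calling such an API from Java, assuming a JDBC 4 driver (older Oracle drivers would use oracle.sql.STRUCT instead); the EMP_T object type and emp_api.save_emp procedure are hypothetical:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.Struct;

    // Sketch: once a PL/SQL record has a SQL-level counterpart
    // (CREATE TYPE emp_t AS OBJECT (...)), it can be passed from
    // Java without JPublisher-generated code.
    public class PlsqlObjectTypeCall {
        public static void saveEmp(Connection conn) throws Exception {
            Struct emp = conn.createStruct("EMP_T", new Object[] { 101, "Smith" });
            CallableStatement cs = conn.prepareCall("begin emp_api.save_emp(?); end;");
            cs.setObject(1, emp);
            cs.execute();
            cs.close();
        }
    }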
I have also used the JPublisher interface in JDeveloper. It is pretty neat: it can issue the CREATE TYPE commands for you and write the conversions to and from the PL/SQL types. It also generates a lot of Java code - the problem is that, as a code generator, it has to emit far more code than you would write by hand, to cover cases you know won't occur with your data. And the code it writes can be hard to read and maintain. The generated classes can be integrated into ADF BC objects, but some hand coding is still needed to finish the job. Alternatively, some people abandon ADF BC and use the generated classes as POJOs, which can be turned into Data Controls.

Similar Messages

  • TCP architecture advice and suggestions

    Hello All,
    Just trying to come up with some ideas for an architecture implementation. I need to communicate with multiple cRIO modules, and I have typically used TCP in the past to talk to each cRIO module. I now have multiple cRIO modules running, and I want to split the command set into generic and specific commands; i.e. a generic command is received and handled the same way for each cRIO chassis from the controlling host PC. This allows me to have one generic type-def command set and several specific type-def command sets within a project. I was hoping to use a poly VI on the cRIO side (and the host) to adapt to whichever command set it receives, and use a different state machine (they will all be similar) depending on which type-def command arrived. This should avoid having one large type-def CMD enum that contains all of the generic commands, all the commands for cRIO A, all the commands for cRIO B, etc.
    Essentially I know this isn't going to work, but are there other ways of doing this? Is this touching on the realm of dynamic dispatch, selecting which VI is run at runtime? Is it time to bite the bullet and use classes? Etc., etc.
    If anyone can shed some light it would be appreciated.
    I have thought of workarounds, but that is not really what I am after - just whether there is a way of doing it properly and, if so, where to go to read up next.
    Many Thanks in advance
    Craig
    LabVIEW 2012
    Attachments:
    Example Problem.png (27 KB)

    Hi Craig,
    If it were me building this program, I would go down the dynamic dispatch/classes route, as it allows the architecture to scale as the system expands. I think any other method of implementing this will have limitations that can be avoided by using the dynamic dispatch/classes design.
    In terms of where to read up on this and how to get started, ni.com has a lot of documentation on the subject; a simple search will find many results, but I have included a few below that may help get you started.
    If you have any more questions then please feel free to post back and I will be happy to help you further.
    Intro to LVOOP
    What the Heck is OOP
    Best Regards
    Matt Surridge
    National Instruments

  • C++ coherence, some architectural advice?

    Hi,
    I was wondering if you could offer me some advice. I have a legacy C++ system with a massive code base; the cache we use is hand-rolled, is creaking under the strain, and is not scalable.
    I'm looking at using Coherence. There are a couple of things I want to achieve, but I'm unsure how to proceed and would like some advice on best practices. Perhaps if I list them, someone could help me with this.
    1) I want to persist all objects in the cache as POF objects (so that, if required, I can write Java code to access them in the cluster). Our C++ objects (of which there are hundreds) are reflective enough for me to auto-generate the Java POF classes. My question is: should I do this? All object changes will be driven by the C++ side, and the Java POF code will be regenerated should any objects change. Does something exist in the tool chain to do this for me? I was thinking maybe I could spew out an XML schema of some sort, and maybe there is a tool to auto-generate the objects in Java? (I am not a Java developer, so please forgive me if this is trivial in Java.)
    2) Given that I now have a Java representation of an object, I need to be able to go to the cache and ask for an object by an ID; if the object is NOT there, it needs to go to an Oracle database and fetch it. I've seen TopLink, but I don't know whether it would help me here. I do NOT want to warm the cache, as I want the cache to have zero startup time. Any ideas on how to achieve this would be appreciated.
    3) Each object type will live in its own cache; however, the master object will be composed of multiple smaller objects. So the master object will say: fetch object types A(id = 5), B(id = 6), C(id = 10), D(id = 7). I don't really want to hit the cache with N requests. At the last SIG I went to, I was told that Coherence now supports aggregate caches. I looked in the examples directory but couldn't see any examples of this. Some help with this would also be appreciated.
    4) Our backend is an Oracle database, and we support global replication from global sites. How do I hook Coherence up so that it knows when a table from which it constructed objects (question 1) has been updated, and automatically fetches the updated object from the database into the cache? Or, if simpler, when a database change occurs it invalidates the cache entry, so that when that object is requested the cache reads through to the database (question 2)?
    I realise this is a lot. I've given quite a bit of thought to how I want to set Coherence up, and I think this would work really well for us. However, I'd be interested to hear any thoughts to the contrary.
    Thanks again
    Rich

    Hi Rich,
    Rich Carless wrote:
    Hi Robert.
    Thanks for the extensive reply, sorry for the delayed response I've been out of the office.
    If you wouldn't mind I'd like to drill down a little further into some of your ideas?
    +"It really depends on your classes. POF supports maps and lists out of the box, since you mention that your classes are reflective enough, maybe representing them as a map-based tree of attribute names and values are enough, which can automatically be written out by PofWriter. If you need more strong typing, then your approach of generating Java classes and optionally PofSerializers on your own is possibly the simplest way available currently."+
    Creating a map-based tree data structure is an interesting idea. However, I'd like to lay the foundation for future C#/Java development, so I feel that having replicated versions of our objects in those target languages would serve us well. I have generated a master XSD which describes our object structure; I have used xsd.exe to generate the C# classes, and I know an equivalent exists for Java. Will POF be able to serialize these automatically generated classes (my spidey sense tells me the answer is yes)? And do you foresee any problems (e.g. versioning)?
    If you generate proper implementations of the readExternal/writeExternal methods to implement PortableObject in Java, or if you generate PofSerializers for the classes, then yes, it will be able to. You would also need to generate the appropriate pof-config.xml fragments.
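    For orientation, a minimal sketch of what such a generated class might look like, assuming the Coherence 3.x POF API; the Trade class and its fields are hypothetical:

        import java.io.IOException;
        import com.tangosol.io.pof.PofReader;
        import com.tangosol.io.pof.PofWriter;
        import com.tangosol.io.pof.PortableObject;

        // Sketch of the Java side generated for a simple C++ type.
        public class Trade implements PortableObject {
            private long id;
            private String symbol;

            public Trade() {}                 // POF requires a no-arg constructor

            public void readExternal(PofReader in) throws IOException {
                id     = in.readLong(0);      // indexes must match writeExternal
                symbol = in.readString(1);
            }

            public void writeExternal(PofWriter out) throws IOException {
                out.writeLong(0, id);
                out.writeString(1, symbol);
            }
        }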
    +"You can use the Read-through caching feature of Coherence. If you need to persist it back, you can use write-through/write-behind."+
    Whilst I understand the concept, how does Coherence know what SQL to use? Is this a method on the object? Could you give me a little more detail on how to set this up? This step is probably one of the most important. Or do you have an example?
    You need to write a CacheLoader/CacheStore implementation (those are interfaces defined by Coherence) which does the actual reading from and writing to the database for the cache keys Coherence passes to you as method parameters. For more, see the following Wiki page: http://coherence.oracle.com/display/COH35UG/Read-Through%2C+Write-Through%2C+Write-Behind+and+Refresh-Ahead+Caching
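    A bare-bones sketch of such an implementation, with the JDBC details left as comments; the trades table and Trade class are hypothetical, and the store would be wired to the cache in the cache configuration (read-write-backing-map-scheme):

        import java.util.Collection;
        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.Map;
        import com.tangosol.net.cache.CacheStore;

        // Sketch only: Coherence calls load() on a cache miss (read-through)
        // and store() on a put (write-through/write-behind).
        public class TradeCacheStore implements CacheStore {

            public Object load(Object key) {
                // SELECT ... FROM trades WHERE id = ?  -> map the row to a Trade
                return null; // placeholder: return the object built from the row
            }

            public Map loadAll(Collection keys) {
                Map result = new HashMap();
                for (Iterator it = keys.iterator(); it.hasNext(); ) {
                    Object key = it.next();
                    Object value = load(key);
                    if (value != null) {
                        result.put(key, value);
                    }
                }
                return result;
            }

            public void store(Object key, Object value) {
                // INSERT or UPDATE the trades row for this key
            }

            public void storeAll(Map entries) {
                for (Iterator it = entries.entrySet().iterator(); it.hasNext(); ) {
                    Map.Entry e = (Map.Entry) it.next();
                    store(e.getKey(), e.getValue());
                }
            }

            public void erase(Object key) {
                // DELETE FROM trades WHERE id = ?
            }

            public void eraseAll(Collection keys) {
                for (Iterator it = keys.iterator(); it.hasNext(); ) {
                    erase(it.next());
                }
            }
        }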
    +"I am not sure which presentation you refer to but I believe you may have misunderstood things. Coherence does not support single-roundtrip cross-cache operations out-of-the-box. Nearest thing is having key-affinity between master and child records, and sending either an entry-aggregator/entry-processor to the associated key and access the backing map directly (dangerous if you do not know what you are doing), or an Invocable agent to the owner node and execute multiple cache operations locally and return the results together."+
    Perhaps I have misunderstood; I'm sure this is something I emailed Brian about. However, for the early stages, making multiple requests will suffice; we can always make it go faster at a later date.
    For a start, you can send an invocable agent to the owner node and do the individual cache requests locally. That will still eliminate half of the network hops. For even fewer roundtrips, I believe we will have to wait for a later release which will expose some more of the functionality which, I believe, Coherence XA also uses internally. Gene was mentioning it at the London SIG earlier, but ultimately it did not make it into 3.6.0. You know the usual disclaimer... until it makes it into the release it should not be mentioned as an existing or upcoming feature :-). Believe me, I would love to get my hands on that feature, too :-).
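    For illustration, a rough sketch of such an agent, assuming key affinity keeps the related entries on one storage member; the cache names, keys, and the InvocationService wiring around it are hypothetical:

        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.Map;
        import com.tangosol.net.AbstractInvocable;
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        // Sketch: runs on the storage node, so the per-cache gets are local
        // instead of N separate network round trips from the client.
        public class MultiFetchAgent extends AbstractInvocable {

            private final Map cacheNameToKey; // cache name -> key to fetch

            public MultiFetchAgent(Map cacheNameToKey) {
                this.cacheNameToKey = cacheNameToKey;
            }

            public void run() {
                Map results = new HashMap();
                for (Iterator it = cacheNameToKey.entrySet().iterator(); it.hasNext(); ) {
                    Map.Entry entry = (Map.Entry) it.next();
                    NamedCache cache = CacheFactory.getCache((String) entry.getKey());
                    results.put(entry.getKey(), cache.get(entry.getValue()));
                }
                setResult(results); // returned to the caller of InvocationService.query()
            }
        }

    The caller would target the member that owns the master key (e.g. as reported by the partitioned service) when submitting the agent through an InvocationService.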
    +"You could use Oracle Advanced Queue to send invalidation messages from Oracle, or actually I believe there are some newer Oracle RDBMS features which could be used to trigger such invalidation with better performance, but I don't remember the name of the feature."+
    I'd be interested to know the name of this technology, and anyone with experience of setting it up.
    Thanks
    Rich
    I will try to remember where I saw it...
    Best regards,
    Robert

  • Need iFS http-client/http-servlet architectural advice.

    I need to design a custom client to connect to an iFS http protocol server in order to allow users of my client to drop document objects (files) on iFS and modify properties (metadata) of those document objects.
    In other words, my client connects to iFS via http, and lets the user upload "files" to iFS, but more importantly, also lets the user enter values for custom properties associated with the class objects corresponding to the iFS document objects being uploaded.
    For a while, WebDAV looked like a good solution, but I just discovered that WebDAV does not let me access custom properties of an iFS object.
    It now looks like my only alternative is to write a custom servlet and have my custom client connect to that servlet. This is not desirable from a product viewpoint, since it requires a special installation on the server.
    My custom client is to be integrated in an already existing application that manages files, so I really need to write my own special client. I cannot use the WebUI.
    Any idea, tips, or advice from the iFS gurus would be greatly appreciated.
    Is there any way to avoid the custom servlet approach? For instance, is there some existing agent or servlet that I can just send XML requests to?

    What version are you running?
    If 9iFS v9.0.2 or prior, you may be able to achieve what you are after by using XML files to update the metadata of an existing 9iFS object.
    CM SDK v9.0.3 and later only supports parsing of XML from the CUP protocol - so you would be out of luck here.
    Your best bet, unfortunately, is to write the custom servlet.
    There is an enhancement filed for the CM SDK WebDAV servlet to allow hooks that will enable you to do your own processing throughout the transaction.
    Matt.

  • Forms server architecture advice needed

    Hi
    We are using Forms & Reports Standalone 10.1.2.3.0. I am trying to get confirmation on two areas: how network traffic flows between client and Forms Server, and if traffic between the client and engine is encrypted.
    Network Traffic
    ==========
    Are the steps below the correct sequence of steps, particularly the last two steps?
    . user enters the appropriate URL into a browser
    . web server serves page containing applet tag (the applets render the forms screens on the client)
    . Browser requests applet if not already on client
    . Web server downloads applet to client
    . Applet on client contacts the Forms Listener on the server (using the specified socket).
    . Forms listener starts a java runtime process on the server and hands off communication to it.
    . All further communication (as the user interacts with the form) is directly between applet on client and runtime process on server.
    Encryption
    =========
    Given that communication between the client and the runtime process is direct, is this traffic encrypted? I have seen a white paper from 2006, "An Overview of Oracle Forms Server Architecture", which states that it is, but this paper is 8 years old and I cannot find anything more up to date for 10g Release 2.
    Any help would be appreciated.
    Regards
    Andy

    Oracle's reply to my questions was:
    Network Traffic
    =========
    Your understanding of the Forms network traffic process is almost correct, except for the last step: further communication between the applet and the runtime process is done through the Forms Listener Servlet, i.e. it is not direct communication as was the case with sockets.
    But I believe this is not a network security issue, as both the Forms Listener Servlet and runtime process should reside on the same server machine.
    Encryption
    =======
    If security is an issue for you, Forms Development does not promote the 40-bit encryption as a secure solution. It dates back to a time when security did not have such a high profile and most applications were run over the intranet.
    In that case, I would recommend using SSL.
    Regards
    Andy

  • Architecture advice for Effect and AEGP communication

    Happy monday!
    In my current plugin project I need to use a "helper" AEGP for tasks my main effect plugin cannot accomplish (delayed tasks via the AEGP idle hook, project modifications that would crash if performed inside the effect logic, etc.).
    I'm trying to figure out an elegant and robust architecture for this cross-communication.
    Here's the way I see it:
    Effect passes info along to AEGP via custom suite data.
    AEGP reads the data at idle, and performs some required task.
    The success/fail of this work and the resulting products are communicated back to the Effect via AEGP_EffectCallGeneric.
    My question basically surrounds the data/info handoff. What kind of objects should I be passing back and forth? Is it considered poor design to simply pass a pointer to the sequence or custom arb data to the AEGP via the custom suite? And vice versa: when the AEGP sends EffectCallGeneric to the effect and passes a pointer, it could just be the same pointer that the effect originally sent the AEGP using the custom suite.
    Aside from undo/redo, is there an advantage to using arb parameter data over sequence data for this task?
    It seems to me that AEGP plugins are loaded once and only once per session, so if multiple instances of my effect plugin are used in the project and they all call the AEGP, there is the potential for a race condition if the custom suite data is used to pass information. Is this an accurate assessment of AEGPs? A simple example: the custom data suite has a variable that stores which effect instance is calling the AEGP. If another instance of the effect starts talking to the AEGP, it would overwrite that variable, making the AEGP perform the work of one effect instance but send the results to a different instance.
    Even if the AEGP is extremely simple and just performs small atomic operations each time the custom suite data tells it to, how do I prevent n+1 effect instances from colliding when using the AEGP worker? Am I missing some key part of the Effect->AEGP communication that prevents this race condition?
    This is the first time I've tried my hand at writing inter-process communication where I have the option to exchange actual in-memory objects. I'm hoping someone with more experience with this sort of problem can give some pointers(hah) or at least a few cautionary words about designs to avoid.
    Thanks, and sorry for such an open-ended question, but this is the place to talk to the pros.
    -Andy

    yo andy! what's up?
    i see you were up to no good... naughty.
    you are correct in your assumptions.
    AEGPs are indeed loaded only once per session, but if you're rendering with the "multiple frames simultaneously" option, then the AEGP is loaded separately for each AE instance. (so would your plug-in: each AE instance calls its own global setup.)
    now, plug-ins are called on the main thread, which should mean that only one plug-in is called at a time, which should mean that your AEGP may only get one call at a time. i'm not absolutely sure that the base assumption is correct here. it's possible that multiple effects are called at the same time, so re-entrancy of your AEGP suite is possible.
    how do we know? we can either test, or ask zac lam.
    re-entrancy is a non-issue when reading data (all can read at once; it doesn't matter). but when writing data... that's a problem.
    to prevent such issues you need to implement some mutex (mutually exclusive) mechanism: it only allows one caller access at a time, while the other callers stall until their turn. the boost library offers such tools.
    as for what data you should transfer: you can either pass data by value, or by reference.
    when a function gets its data like so:
    function(int val);
    it gets a local copy of the data. no worries about scope here.
    if it gets the data like so:
    function(int *val);
    or function(int &val);
    then we're talking reference. in this case, the val variable is only valid while the object to which it refers is valid. that depends on the stability of that memory chunk. what makes a chunk stable?
    1. if the data is created as a local scope variable, it is invalidated when the code block finishes. make sure you don't access that data at other times.
    2. AE handles of a plug-in instance are only valid during calls to that plug-in.
    let's look at the two directions of communication you have.
    1. effect calls AEGP.
    in this case, the call order is as follows:
    AE calls the effect. the effect is now executing; at this point, sequence/global data are valid, as is anything locally allocated in the call handling function.
    the effect calls the AEGP. the AEGP is now executing. you can now pass anything you like from the effect to the AEGP. if the AEGP returns data, it should be data that remains valid after the AEGP finishes executing.
    the AEGP finishes its thing and exits. the effect is now executing.
    the effect finishes the call and exits, and AE is executing. at this point, seq/global data handles are no longer valid (at least you can't rely on that). also, anything allocated in the effect's calling function as a local variable is no longer valid.
    so now your AEGP gets called at idle time. it should only rely on memory that has been kept since the effect was executing. what memory is kept? anything in the global scope of the AEGP, or pointers to memory that was not deallocated or moved.
    the seq data handle may not have been deallocated, but you can't tell whether it moved or not, so you can't rely on it. if the seq data holds another pointer, which points to a non-movable piece of data, then you can rely on that pointer. seq/global handles are locked and unlocked by AE; memory handles are managed by the plug-in (so you know whether they can be relied on or not).
    2. AEGP calls effect.
    AE calls the AEGP at idle time. the AEGP is executing.
    the AEGP calls the effect with data. the effect is now executing.
    the effect uses the data it got from the AEGP, optionally returns data to the AEGP, and exits.
    the AEGP is now back executing. it uses the data returned from the effect; that data can't be references to local variables in the effect, as they are no longer valid.
    the AEGP exits, and AE is now executing.
    so in both directions you need to see that, at the time the data is used, it is valid.
    as for sequence data vs arb param: yes, there is a difference besides undo/redo. AE creates (multiple) copies of the data stored in arb params before each call to the plug-in, and also after. if you're storing very large amounts of data in the arb param, the copy operations will start dragging your performance down. it's usually a non-issue, but... it's a point to think about.

  • Architecture advice

    Hi Guys,
    I have been working on an application for the last 5 years and have been tasked with upgrading its server-side technologies.
    At the moment, we are running EJB1.1, DB2, Websphere 6.
    We are planning to move to EJB 3; I think I have the session bean and JPA entity upgrades figured out easily enough.
    We have BMP beans that use a custom framework to access web services and map XML objects, via Xerces, to the BMP entity beans. Caching needs to be highly configurable here.
    So I am not sure how to replace the BMP framework we have. I have been looking at JAXB and JiBX, but I'm unsure - probably JAXB.
    I also need to find a good caching framework for this - can I use the JPA caching mechanism for these POJOs?
    What do you guys think, anything else I haven't considered?
    Mac

    If you are going to move to EJB 3 then you would be best to take advantage of container-provided services. Once you have your entities, you should let your container manage the persistence. I have used Ehcache for caching, with satisfactory results.
    EJB 3 also allows stateless EJBs that are web service endpoints.
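    For what it's worth, a minimal sketch of basic Ehcache usage as a side cache for those POJOs; the cache name and key are hypothetical, and eviction/TTL settings would normally come from ehcache.xml:

        import net.sf.ehcache.Cache;
        import net.sf.ehcache.CacheManager;
        import net.sf.ehcache.Element;

        // Sketch: put/get a POJO mapped from web service XML.
        public class CustomerCacheDemo {
            public static void main(String[] args) {
                CacheManager manager = CacheManager.create();   // picks up ehcache.xml
                Cache cache = manager.getCache("customers");    // assumes it is defined there

                cache.put(new Element("cust-42", "Customer 42 POJO"));

                Element hit = cache.get("cust-42");
                if (hit != null) {
                    System.out.println("cached value: " + hit.getObjectValue());
                }
                manager.shutdown();
            }
        }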

  • Requesting basic architecture advice, please

    Hello all,
    I've got a situation where a customer makes a request of a web app and needs to receive information back from it anywhere from 30 minutes to an hour later. This is well beyond the length of a reasonable HTTP timeout, and even if I extended the timeout and kept the session alive, I'd be worried about running out of resources, as this will get many, many hits.
    Can you suggest an approach that might work here, please? Would you put an app server at each end and just pass HTTP calls to servlets back and forth? An ESB with two clients so they can pass messages back and forth?
    Thanks in advance for your time!
    Rgds,
    Bret

    You will want to have the initial request be asynchronous. Simply return a response to the client that the request was properly formatted and is being processed. From that point, you have a few options:
    1. Use email or pager notification to let the user know that the request has completed
    2. Use AJAX or something similar to update an area of the page when the request has completed
    Regardless, you will be passing back to the user a token of some kind. When they request the results of the report etc., they will furnish the token, which you can use to fetch the results that were previously calculated. You might also need to clean out the results of processing, say overnight, if the results should not persist indefinitely.
    - Saish
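    A minimal sketch of that token pattern in servlet terms; the class names and the in-memory result store are hypothetical (a real application would persist results so they survive a restart):

        import java.io.IOException;
        import java.util.UUID;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: accept the request, hand the work to a background pool,
        // return a token immediately; a later GET with the token fetches
        // the result (or an email/AJAX update notifies the user).
        public class LongJobServlet extends HttpServlet {
            private final ExecutorService pool = Executors.newFixedThreadPool(4);
            private final ConcurrentHashMap<String, String> results =
                new ConcurrentHashMap<String, String>();

            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                final String token = UUID.randomUUID().toString();
                pool.submit(new Runnable() {
                    public void run() {
                        // ... the 30-60 minutes of real work happens here ...
                        results.put(token, "finished report");
                    }
                });
                resp.getWriter().println(token); // client keeps this token
            }

            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String result = results.get(req.getParameter("token"));
                resp.getWriter().println(result != null ? result : "PENDING");
            }
        }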

  • Architecture Sanity Check

    I'm just getting started with TopLink and need some architectural advice. My TopLink project will be used by two different web applications that may or may not be hosted on the same server. The first application uses Jakarta Struts to create a web interface. The second uses Apache SOAP to provide a web service interface. Both applications use the same database.
    Given this, what is the best practice for using TopLink while taking concurrency issues into account? Do I need to use the Remote Session (RMI) approach?
    Thanks!

    Thomas,
    Remote Session is not the answer; it is used to get TopLink access from another tier. For multiple applications using the same project, or multiple instances of the same project, there are a couple of issues to address in your configuration of TopLink:
    1. Concurrency
    2. Stale Cache
    To address concurrency I would recommend using optimistic locking on all tables. This is typically done using a version field (numeric or timestamp) but can also be accomplished using some or all of the fields of the table. Refer to the docs for how to set up this feature. You will also need to ensure that each transaction checks for OptimisticLockingException.
    The potential for a stale cache arises when multiple applications (including non-Java/legacy ones) modify the same database. TopLink's cache can be configured on a per-class basis. For reference/static data I would use a FullCacheWeakIdentityMap. For classes of data that change more frequently you could use a SoftCacheWeakIdentityMap or, to hold objects for the minimum amount of time, a WeakIdentityMap.
    Additionally, there are some settings that may be of interest:
    - query.refreshIdentityMapResult() - forces a refresh on a given query/finder.
    - descriptor.onlyRefreshCacheIfNewerVersion() - uses the optimistic locking field(s) to determine whether a refresh is required.
    - Cache Synchronization - allows changes made in one TopLink session (application instance) to be propagated to other sessions using the same project. This will minimize stale-cache situations.
    Every application has a different profile for how to handle a stale cache, and TopLink offers a number of features that are easily and externally configurable. Ensuring that concurrency is addressed through locking is crucial; then the configuration can be adjusted to minimize stale objects and optimize the performance benefits of the cache.
    Doug
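    Pulling those suggestions together, a hedged sketch of a descriptor amendment method, assuming the TopLink 10g ClassDescriptor API; the class and the VERSION column are hypothetical:

        import oracle.toplink.descriptors.ClassDescriptor;

        // Sketch: version-based optimistic locking plus a bounded,
        // softly-referenced identity map for frequently-changing data.
        // Registered as an amendment method in the Mapping Workbench.
        public class OrderDescriptorAmendment {
            public static void amend(ClassDescriptor descriptor) {
                // optimistic locking via a numeric VERSION column
                descriptor.useVersionLocking("VERSION");

                // frequently-changing data: soft references, limited size
                descriptor.useSoftCacheWeakIdentityMap();
                descriptor.setIdentityMapSize(500);

                // refresh only when the database row is newer than the cache
                descriptor.onlyRefreshCacheIfNewerVersion();
            }
        }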

  • General architecture questions

    Hello,
    I am developing a web application and could use some architectural advice. I've done lots of reading already, but could use some direction from those who have more experience in multi-tier development and administration than I. You'll find my proposed solution listed below and then I have some questions at the bottom. I think my architecture is fairly standard and simple to understand--I probably wrote more than necessary for you to understand it. I'd really appreciate some feedback and practical insights. Here is a description of the system:
    Presentation Layer
    So far, the presentation tier consists of an Apache Tomcat Server to run Servlets and generate one HTML page. The HTML page contains an embedded MDI style Applet with inner frames, etc.; hence, the solution is Applet-centric rather than HTML-centric. The low volume of HTML is why I decided against JSPs for now.
    Business Tier
    I am planning to use the J2EE 1.4 Application Server that is included with the J2EE distribution. All database transactions would be handled by Entity Beans and for computations I'll use Session Beans. The most resource intensive computational process will be a linear optimization program that can compute large matrices.
    Enterprise Tier
    I'll probably use MySQL, although we have an Oracle 8 database at our disposal. A disadvantage of MySQL is that it won't have triggers until the next release, but maybe I can find a workaround for now. An advantage is that an eventual migration to Linux will be easier on the wallet.
    Additional Information
    We plan to use the system within our company at first, with probably 5 or fewer simultaneous users. Our field engineer will also have access from his laptop. That means he'll download the Applet-embedded HTML page from our server via the Internet. Once loaded, all navigation will be Applet-centered. Data transfer from the Applet to the Servlet will be via standard HTTP.
    Eventually we would like to give access of our system to a client firm. In other words, we would be acting as an application service provider and they would access our application via the Internet. The Applet-embedded HTML page would load onto their system. The volume would be low--5 simultaneous users max. All users are well-defined in advance. Again, low volume HTML generation--Applet-centric.
    My Questions
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7 ? Or should I forget the application server concept completely?
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called 'dmz' with Tomcat? Should it be on the same physical server machine as Tomcat?
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy

    Hi Itchy,
    Sounds like a great problem. You did an excellent job of describing it, too. A refreshing change.
    Here are my opinions on your questions:
    > 1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7? Or should I forget the application server concept completely?
    It always depends on your wallet, of course. I haven't used the Sun app server. My earlier impression was that it wasn't quite up to production grade, but that was a while ago. You can always consider JBoss, another free J2EE app server. It's gotten a lot of traction in the marketplace.
    > 2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application - perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    People sometimes forget that you can do J2EE with a servlet/JSP engine, JDBC, and POJOs. (Plain Old Java Objects). You can use an object/relational mapping layer like Hibernate to persist objects without having to write JDBC code yourself. It allows transactions if you need them. I think it can be a good alternative.
    The advantage, of course, is that all those POJOs are working objects. Now you have your choice as to how to package and deploy them. RMI? EJB? Servlet? Just have the container instantiate one of your working POJOs and delegate to it. You can defer the deployment choice until later. Or do all of them at once. Your call.
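    As a hedged sketch of that POJO approach (shown with the later org.hibernate package; the Order class and its mapping file are hypothetical, and hibernate.cfg.xml is assumed):

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import org.hibernate.cfg.Configuration;

        // Sketch: a plain Java object persisted through Hibernate,
        // no EJB container involved.
        public class PojoPersistenceDemo {

            // hypothetical mapped POJO
            public static class Order {
                private Long id;
                public Long getId() { return id; }
                public void setId(Long id) { this.id = id; }
            }

            public static void main(String[] args) {
                SessionFactory factory = new Configuration().configure().buildSessionFactory();
                Session session = factory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    session.save(new Order());  // Hibernate writes the JDBC for us
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }
        }

    The same working object could later be fronted by a servlet, RMI, or an EJB without changing the persistence code, which is the deployment flexibility described above.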
    > 3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    I think so. A J2EE app server has both an HTTP server and a servlet/JSP engine built in. It might even be Tomcat in this case, because it's Sun's reference implementation.
    > 4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called 'dmz' with Tomcat? Should it be on the same physical server machine as Tomcat?
    I'd have Tomcat running in the DMZ, authenticating users, and forwarding requests to the J2EE app server running inside the second firewall. They should be on separate servers.
    > 5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    Tomcat's performance isn't so bad, so it should be able to handle the load.
    The bigger consideration is that the DMZ Tomcat has to listen on port 80 in order to be seen from the outside without opening another hole in your outer firewall. If you piggyback it on top of Apache, you can just have those requests forwarded. If you give port 80 to the Tomcat listener, nothing else will be able to use it.
    > So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help, Regards, Itchy
    There are smarter folks than me on this forum. Perhaps they'll weigh in. Looks to me like you're doing a pretty good job, Itchy. - MOD

  • Should I use EJB?

    Hi-
    I was wondering if anyone could offer some architectural advice. I need to create a service that does the following:
    1.take a plain text file provided by a client
    2.run some non-java executables on that file
    3.return the binary output file to the client (could be anywhere from 2-60 meg)
    Would it be appropriate to use a stateless session bean for this purpose? Is it not wise to transfer large, serialized binary files over the RMI protocol used in EJB?
    Thanks

    I dont see any reason to use EJB for something so simple. I don't see any reason not to use RMI for something so simple.

  • Best practice to develop internet app integrating with backend R/3 modules

    While we wait to upgrade from R/3 4.6c, we want to stop investing in ITS Flow Logic applications going forward.
    What is the best practice for using backend RFCs/BAPIs to expose SAP functionality as a web application that is accessible on the internet? One option looks like using WAS 6.4 - using JRA to call RFCs with JSP/Servlet; another is Web Dynpro-based development. I would appreciate some architecture advice alongside this - especially if we also wanted internet surfers to set up user accounts. Thanks!

    Hi Vito,
    I have the same situation as you, and also as some of the guys mentioned above. I have Portal-only users and also users who use the SAP GUI.
    Thus, taking audit considerations into account as well, I would advise the scenarios below:
    1) Users who login to backend with SAP GUI on Citrix only
    We have changed the system parameter: login/password_change_for_SSO=2
    The password change dialog box appears and the password must be changed (input: old and new password). Also, we have set up SNC (CyberSafe) so that in the SAP GUI users can click on the system with SNC set up and log in to the backend without having to enter a user ID and password.
    2) Users who login to backend with SAP GUI on client (local)
    Users will login with userID and password
    3) Portal users with SSO and no backend login with SAP GUI
    Portal users will have their password deactivated.
    Explanation to Audit for Portal users:
    We have a 90-day password reset on Windows (AD). So our Portal users respect the audit requirement of a 90-day password reset, but it happens in Windows rather than in SAP. Furthermore, SSO is set up such that the connection from these Portal users to the backend is secure.
    We are not able to set login/password_change_for_SSO=3, as we have sites which do not use Citrix; those sites have a local SAP GUI install.
    I hope this shares some of my experience with those who are in my past situation.
    Ray

  • General Design Questions

    Hello,
    I am developing a web application and could use some architectural advice. I've done lots of reading already, but could use some direction from those who have more experience in multi-tier development and administration than I. You'll find my proposed solution listed below and then I have some questions at the bottom. I think my architecture is fairly standard and simple to understand--I probably wrote more than necessary for you to understand it. I'd really appreciate some feedback and practical insights. Here is a description of the system:
    Presentation Layer
    So far, the presentation tier consists of an Apache Tomcat Server to run Servlets and generate one HTML page. The HTML page contains an embedded MDI style Applet with inner frames, etc.; hence, the solution is Applet-centric rather than HTML-centric. The low volume of HTML is why I decided against JSPs for now.
    Business Tier
    I am planning to use the J2EE 1.4 Application Server that is included with the J2EE distribution. All database transactions would be handled by Entity Beans and for computations I'll use Session Beans. The most resource intensive computational process will be a linear optimization program that can compute large matrices.
    Enterprise Tier
    I'll probably use MySql, although we have an Oracle 8 database at our disposal. Disadvantage of MySql is that it won't have triggers until next release, but maybe I can find a work-around for now. Advantage is that an eventual migration to Linux will be easier on the wallet.
    Additional Information
    We plan to use the system within our company at first, with probably about 5 or less simultaneous users. Our field engineer will also have access from his laptop. That means he'll download the Applet-embedded HTML page from our server via the Internet. Once loaded, all navigation will be Applet-centered. Data transfer from the Applet to Servlet will be via standard HTTP.
    Eventually we would like to give access of our system to a client firm. In other words, we would be acting as an application service provider and they would access our application via the Internet. The Applet-embedded HTML page would load onto their system. The volume would be low--5 simultaneous users max. All users are well-defined in advance. Again, low volume HTML generation--Applet-centric.
    My Questions
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7 ? Or should I forget the application server concept completely?
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called 'dmz' with Tomcat? Should it be on the same physical server machine as Tomcat?
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet /Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy

    I can give my opinion on some of these questions and a resource for the others.
    <question>
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described? Or is it better to invest in a commercial product like Sun Java System Application Server 7? Or should I forget the application server concept completely?
    </question>
    Yes, the J2EE 1.4 app server is a good solution for these conditions. Specifically, I would use the SJSAS PE 8.0 version or the J2EE SDK. They are free versions that will meet your needs, and you get the latest J2EE 1.4 platform features. As your needs grow you can transition to an enterprise edition of the app server (SE/EE). If you choose to go commercial, please note that SJSAS 7.0 is a J2EE 1.3 platform.
    <question>
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application - perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    </question>
    I guess you will not know for sure unless you perform some benchmark tests. But my opinion is that the ease-of-development features you gain by using an app server far outweigh any performance increase you might get from a native solution. Also, it is not clear that a native solution would actually be faster; with the latest JIT compilers, Java performance is comparable to C++, though your mileage may vary depending on your application.
    <question>
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    </question>
    Yes, I would use the app server for both your presentation tier and business tier.
    As for the other issues you may want to look at the Java Enterprise Blue Prints located at: http://java.sun.com/j2ee/1.4/download.html#blueprints

  • Postponed persistence after confirmation

    Hi all!
    I need an architectural advice on feature implementation.
    First of all, I have a webapp deployed on OC4J 10.1.3.4 (with TopLink 10.1.3 bundled) that works with DB 11g. It uses simple display->edit->save->persist model. We also have a set of objects with all types of relations: one2one, one2many...
    The feature we want to implement is "postponed persistence after confirmation".
    Current scenario looks like that:
    1) user modifies object
    2) object is persisted into DB
    I need to implement:
    1) user modifies object (simplest case -- one user, one object)
    2) user's modifications are saved into "buffer storage"
    3) administrator overviews the modifications and confirms it (more complicated -- any part of it)
    4) confirmed modifications are persisted into DB
    a) Is there a way to implement that feature using TopLink?
    b) If not, can you advise on the architecture of the "buffer storage", taking into account that the number of objects may be up to 15 and the relations between them may be of any type?
    Thanks,
    Alex

    Thanks for your comment, James.
    I see the following problems in options provided by you:
    1) Separate DB -- we would not like to keep two copies of the same tables, because of poor maintainability; code duplication is the main issue here. That is the simplest solution that first springs to mind, but it is the hardest to implement.
    2) Serialization of objects -- I consider this the main solution, but it does not use TopLink in any way.
    3) EclipseLink's history support -- thanks for that note. The scheme looks exactly like a 2-phase modification->(commit or rollback) process. I will see if it is applicable to my case.
    Alex
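    For what it's worth, a minimal sketch of option 2, the serialization-based buffer storage: the pending change is held as a byte[] (e.g. in a BLOB column of a hypothetical staging table with a status flag) and only merged through TopLink once confirmed:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;

        // Sketch: serialize the modified object graph for the admin queue,
        // deserialize and persist it only after confirmation.
        public class PendingChangeBuffer {
            public static byte[] toBytes(Serializable modifiedObject) throws Exception {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bytes);
                out.writeObject(modifiedObject);   // captures the whole reachable graph
                out.close();
                return bytes.toByteArray();
            }

            public static Object fromBytes(byte[] data) throws Exception {
                ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
                return in.readObject();            // on confirmation: merge via a UnitOfWork
            }
        }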

  • Advice on Security Model Architecture..

    Hi all,
    Just looking for the advice of the experts :)
    I am working on the security model architecture for multi-tiered java application. The application architecture breaks down roughly as follows:
    Presentation Layer (JSP/Java)
    Business Layer (Java)
    Persistence Layer (JDBC/Oracle DB)
    Now, in the DB we will store information about the various users, as well as each user's application permissions. My question pertains to authentication/authorization: where is it most appropriate or efficient to verify a user's access to a piece of functionality? Assume that the user and permission information is retrieved at login and is made available to all levels.
    The options, as I see them, include the following:
    Presentation layer - UI exposes only functionality applicable to the user.
    Business layer - Encode the logic in this facade for the backend.
    Persistence layer - Encode the logic in the data access objects.
    Any thoughts?

    Well, the layered approach is one way in which Java applications are constructed: the user interface is the top layer, composed of JSP files and other Java files, and the objects that talk to the database are the bottom layer. Maybe an example will help.
    You're looking at a page on the Java Discussion Forums. It's a jsp page. You click on the 'Watches' link (upper right). The link points to a servlet, which calls a method in an object that is in what I call the "business" or middle layer/tier. An object in this layer has methods that correspond to any request that needs to be made of the db.
    This method in turn calls a method (or methods) in the backend, or data layer, which queries the database and returns the watches for this particular user...
    So, if you have a request/response transaction (click on a link or button, processing, and new page is loaded), it would make a round trip through the layers:
    Presentation -> Business -> Data -> DB -> Data -> Business -> Presentation
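    As a small illustration of enforcing the check in the business layer (so every caller is guarded, while the presentation layer can additionally hide links the user may not use); the facade, permission name, and method here are hypothetical:

        import java.util.Set;

        // Sketch: the business-layer facade checks the permissions that were
        // loaded at login, before delegating to the persistence layer.
        public class AccountFacade {
            private final Set<String> permissions; // loaded once at login

            public AccountFacade(Set<String> permissions) {
                this.permissions = permissions;
            }

            public void closeAccount(long accountId) {
                if (!permissions.contains("ACCOUNT_CLOSE")) {
                    throw new SecurityException("User lacks ACCOUNT_CLOSE permission");
                }
                // ... delegate to the data layer to close the account ...
            }
        }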
