A scalability question

Hi guys, I'm a bit concerned about the scalability of a proposed design for our servlet-based multi-service handler system.
Here's a brief description:
1. A single controller servlet gets an XML message.
2. The message contains info, including the sub-application it belongs to. The controller uses a Dispatcher object to locate a handler object for that sub-application, and passes the message along to it.
3. The app-handler gets info from the message describing a specific action that needs to be executed, and the required parameters. It maintains a pool of action handlers, so it locates the proper one and passes the data along.
4. The action handler executes and returns a response message.
5. The response is propagated back to the original servlet and is sent back to the caller.
Although that "architecture" enables us to dynamically add sub-applications & actions to the system, I'm afraid that it might not scale well. Everything is executed within the original servlet's thread, and the application handler is usually a single instance saved in the Application attributes.
Possible solutions:
1. A first proposed amendment is to use JNDI to get a reference to the sub-application handlers through a Factory object that maintains a pool of application handler instances. I'm assuming that the Factory will run in a separate VM on the same machine, or even on another one. (Perhaps it could even be a SessionBean on an App. Server?) Still, however, the response needs to be passed back to the original servlet instance that serviced the call.
I'd like to be able to pass the output stream to the handler and just forget about it in the servlet, just like I would if I were using a plain socket server, but I doubt I could give it an OutputStream parameter across the two separate VMs (or could I?).
2. Perhaps maintain a pool of sub-application handlers within the servlet, in a member variable that gets initialised in the servlet's init() method, and get the reference through JNDI - but wouldn't that place a heavy burden on my little Tomcat server? And again, the original servlet that serviced the call would have to give the response back to the caller... (see the sketch below)
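For concreteness, here is a rough sketch of what option 2 could look like: handlers looked up once in init() and cached in a member field. The JNDI name and the AppHandler / AppHandlerFactory types are hypothetical, purely for illustration:

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

interface AppHandler {
    void handle(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException;
}

interface AppHandlerFactory {
    AppHandler create(String subApplication);
}

public class ControllerServlet extends HttpServlet {
    // One handler per sub-application, created once and shared by all request threads.
    private final Map<String, AppHandler> handlers = new ConcurrentHashMap<String, AppHandler>();

    public void init() throws ServletException {
        try {
            Context ctx = new InitialContext();
            // Hypothetical JNDI name; the factory could just as well come from local config.
            AppHandlerFactory factory = (AppHandlerFactory) ctx.lookup("java:comp/env/appHandlerFactory");
            handlers.put("billing", factory.create("billing"));
            handlers.put("orders", factory.create("orders"));
        } catch (NamingException e) {
            throw new ServletException("Cannot initialise sub-application handlers", e);
        }
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String app = req.getParameter("app");  // in practice, parsed out of the XML message
        handlers.get(app).handle(req, resp);   // the response is still written on this servlet's thread
    }
}

Note that, as the question already observes, the response is still produced on the servlet's own request thread; caching the handlers only avoids re-creating them per request, it does not move the work elsewhere.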
Before we start trying out the possible alternatives, I thought I'd ask around for some opinions. There's no point spending valuable time proof-testing a solution before asking for a second opinion.
Anyhow, thanks for the help people, any ideas would be much appreciated.
Cheers,
Angel
O:]

I'm not sure what you are trying to do... Seems to me points 1-5 would be covered by:
// assumes the usual javax.servlet / javax.servlet.http imports
class SuperDuperDispatcherServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String appName = null; // ...get string from XML message...
        SubApplication app = SubApplicationManager.get(appName);
        app.doit(request, response);
    }
}

class SubApplicationManager {
    private static final HashMap by_name = new HashMap();
    static void register(String name, SubApplication app) {
        by_name.put(name, app);
    }
    static SubApplication get(String name) {
        return (SubApplication) by_name.get(name);
    }
}

interface SubApplication {
    void doit(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException;
}

I.e.: stick applications in a HashMap, indexed by the application's name. The "generic sub-application dispatcher engine (TM) (pat.pend.)" has a fancy name, but it doesn't mean you need to over-engineer it...
There is no "pool" of SubApplication objects. The best way to deal with that is not to have instance variables in SubApplication objects (just like servlets.) That way you need only one object to represent each sub-application.
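For illustration, a minimal sketch of what a stateless sub-application and its one-time registration could look like. The OrderStatusApplication and Bootstrap names and the use of a ServletContextListener are hypothetical additions, not from the original posts:

import java.io.IOException;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

class OrderStatusApplication implements SubApplication {
    // No instance fields: the single shared instance can safely serve many
    // request threads at once, just like a servlet.
    public void doit(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/xml");
        response.getWriter().write("<response>ok</response>");
    }
}

class Bootstrap implements ServletContextListener {
    public void contextInitialized(ServletContextEvent e) {
        // Register each sub-application once when the web application starts.
        SubApplicationManager.register("orderStatus", new OrderStatusApplication());
    }
    public void contextDestroyed(ServletContextEvent e) { }
}

The listener would be declared in web.xml; alternatively, the dispatcher servlet's own init() method could do the registration.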
Is there something I missed or you didn't mention that prevents such a simple approach...?
> Although that "architecture" enables us to dynamically add sub-applications & actions to the system, I'm afraid that it might not scale well. Everything is executed within the original servlet's thread, and the application handler is usually a single instance saved in the Application attributes.

Every instruction is executed in some thread. Might as well use the thread you have. A function call is a couple of machine instructions; you can only go downhill from there. Handing work to another thread or process, and then waiting for it to finish, is just silly.
How many requests per second do you expect? Is there heavy computation involved? I.e. do you need to transfer parts of sub-application executions to other computers? There is non-trivial overhead in that; only do it if you can't avoid it. Consider clustering instead.

Similar Messages

  • P2P network Java implementation - design/scalability questions

    Hi,
    I am trying to build a (distributed) P2P network simulator. What does that mean? Each PC, called a "farm", will have tens of thousands (I hope) of "Peer" threads running (say, level "0"). These threads will be referenced by a Farm thread running on each PC (say, level "1"). In case a message needs to be forwarded to a "remote" Peer (a Peer running in another farm), there will also be a server and a client communication module for each Farm (say, level "2").
    The farms will also communicate with a FarmMonitor, a process running on a single PC that mainly handles user requests, via a client module (also in level "2").
         |--------------------------------------------------------|
        \|/                                                      \|/
     Farm server/client mod                      Farm server/client mod
    |---------------------------|               |---------------------------|
    |           farm1           |               |           farm2           |
    |---------------------------|               |---------------------------|
     FarmMonitor client                          FarmMonitor client
              |                                           |
              |___________________________________________|
                                   |
                      |---------------------------|
                      |    FarmMonitor (server)   |
                      |---------------------------|
    I have built a demo - roughly reaching the Farm-Farm communication part - which works with NIO Pipes between Peers, but it looks complicated and I would also like to use NIO SocketChannels for the communication. However, since I am inexperienced in NIO (and in general), have little time, mainly work on my own, and must decide for once in my life to follow the K.I.S.S. rule, I have thought of another alternative.
    Peers (threads) - not running most of the time - would communicate in a Producer-Consumer fashion via an (incoming) message mailbox/buffer. Each Peer will have the mailboxes of its neighbouring peers mapped to the corresponding neighbouring peer id. Each peer will temporarily buffer its outgoing messages internally and, once awoken, forward them to the right mailbox. The communication modules will serialize the messages and use old-fashioned I/O.
    Am I heading for a disaster? Will I have scalability problems because of all these monitors and threads? Any remarks would be useful.
    Thanks
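    For illustration, here is a minimal Java sketch of the mailbox idea described above, using a bounded BlockingQueue per Peer as the incoming message buffer. The class and field names are hypothetical, not from the original post:

    import java.util.Map;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;

    class Message {
        final int fromPeerId;
        final String payload;
        Message(int fromPeerId, String payload) { this.fromPeerId = fromPeerId; this.payload = payload; }
    }

    class Peer implements Runnable {
        final int id;
        // This peer's incoming mailbox; its neighbours are the producers.
        final BlockingQueue<Message> mailbox = new ArrayBlockingQueue<Message>(1024);
        // Neighbour id -> that neighbour's mailbox, used to deliver outgoing messages.
        final Map<Integer, BlockingQueue<Message>> neighbours = new ConcurrentHashMap<Integer, BlockingQueue<Message>>();

        Peer(int id) { this.id = id; }

        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Message m = mailbox.take();               // block until a message arrives
                    // ...process m, possibly deciding to forward it...
                    for (BlockingQueue<Message> box : neighbours.values()) {
                        box.put(new Message(id, "fwd:" + m.payload));  // blocks if the mailbox is full (back-pressure)
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();           // exit cleanly on shutdown
            }
        }
    }

    One caveat: with tens of thousands of Peer threads per farm, the thread count itself (stack memory and context switching) is likely to be the first scalability limit, regardless of whether Pipes, sockets or in-memory mailboxes are used.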

    Jason, quick update on this. I might not have used the correct terminology regarding the publishing in AD. What I was trying to say is that when you have extended the schema in AD with the SCCM specifics and you then install a Management Point, that object is also created under the System Management container. When checking client logs I could see that the client queries AD and retrieves the list of Management Points; once it determines the site that it will assign to, it gets the list of MPs for that site.
    In my development environment I have not extended the schema, so I'm using SMSSITECODE=XXX SMSMP=NETBIOSNAME. What I did see now, after I got IBCM working correctly: when installing a new SCCM client using the install parameters "SMSSITECODE=LA0 SMSMP=ABC ...." and checking locationservices.log, I see that the client is "assigning to Management point ABC", and a few seconds after that message it also gets the Internet MP information. Once the client is installed, checking the Control Panel applet "Network" shows the Internet Management Point information populated, which is already good news.
    My software distribution is working for some new apps that I created after IBCM was configured; however, for a few apps that already existed before IBCM was configured, I still have the issue that the client cannot find any DPs, even though I'm able to install the same app from my intranet without issues. I need to check on this further; I hope that when implementing this in my production environment I don't run into the same issue with all my existing applications.
    Thanks again for all the help, appreciated.

  • Application partitioning and scalability questions

    I split my application into a dozen or so JARs...
    1) Original project had ADF BC BaseClasses overridden... Any way to "share" them now among projects?
    2) How to handle common java files to all projects such as JsfUtil, AdfUtil?
    3) Common resource files, png's
    4) Jdev 11.1.2: How can I exclude folder from jar deployment (test jsfs)? Don't see any filter options on deployment profile ?
    5) What do you guys do with common EO, VO, lookups as far as sharing or will each project have to have own copy?
    6) How can I minimize number of database connections?
    Thanks,
    Brian

    +1) Original project had ADF BC BaseClasses overridden... Any way to "share" them now among projects?+
    Put them into a JAR file and configure them in the JDev preferences for ADF BC.
    +2) How to handle common java files to all projects such as JsfUtil, AdfUtil?+
    WLS shared libraries if you want to maintain a single deployment. Otherwise use a JAR and add them to each project
    +3) Common resource files, png's+
    See sample 86 on http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html#CodeCornerSamples
    +4) Jdev 11.1.2: How can I exclude folder from jar deployment (test jsfs)? Don't see any filter options on deployment profile ?+
    Check the project source settings and remove the folder
    +5) What do you guys do with common EO, VO, lookups as far as sharing or will each project have to have own copy?+
    Depends on the use case. If it is common for an application you can use a Shared AM to avoid each session creating its own instance. If it's about sharing the code therein, use ADF libraries for cross-application use.
    +6) How can I minimize number of database connections?+
    Use AM pooling, avoid too many root AMs, and use DataControl sharing on bounded task flows.
    Frank

  • Scalability and Architecture Question

    I am currently working on an app that will generate a resume from a set of user-defined input into several different formats from an XML file (MS Word, PDF, TXT, HR-XML, and HTML). We are thinking that we will write all the files once at publish time and then store them (not sure where yet). We are doing this because we will be hosting the online version of the resume as a CFM file, with access to all the other formats of the resume from the online resume. We are assuming that there will be many more reads than there will be writes over the life of the resume, so we don't want to compile these each time a user requests one (that is, a Word, PDF, HTML, or HR-XML version).
    The question I have now is: should we store the files in the database or on the webserver?
    I would think that it makes sense to store them on the webserver. But as this will need to be in a clustered environment, I will need to replicate these across the farm as each new user creates a resume. So does anyone know whether the penalty for replicating these across the farm is higher than calling them from the database? Assume that the average file size is 50K and on average 50 files will be called over the life of the resume. Thoughts?

    Originally posted by: fappel.innoopract.com
    Hi,
    RAP doesn't support session switch-over at the moment, that's true. But it does support load balancing by using multiple workers; once a session is opened on one worker, all requests of that session are dispatched to that worker.
    Ciao
    Frank
    -----Original Message-----
    From: Mike Wrighton [mailto:[email protected]]
    Posted: Friday, 22 August 2008 11:35
    Posted in: eclipse.technology.rap
    Conversation: Will RAP work in a load-balanced system?
    Subject: Will RAP work in a load-balanced system?
    Hi,
    Some of my colleagues were reviewing scalability in our web architecture
    and the question was raised about RAP scalability, in particular the
    issue that since session data is stored in memory and not in a central
    database, RAP sessions would not survive a server switch-over by a load
    balancer. Hope that makes sense?
    I was just wondering if anyone had come across this issue before and
    found a decent solution? It may just be a case of configuring the load
    balancer properly.
    Thanks,
    Mike

  • Architecture Question - End-User scalability of queues

    Hi,
    I have a question around the design / usage of Queues.
    We have a need for many client instances (sitting across business borders) to send data to a server instance, and we're looking at Azure Service Bus / Message Queues to wire the clients and server together. One analogy for this architecture would be an order system where many clients send their orders in, and the server responds with order status updates.
    Using a single queue to send/receive the data for the server appears straightforward; however, my scalability question revolves around how the server queues notification messages back to a specific client in a secure manner. Would we need to create a queue per client to guarantee that messages for one client are not picked up by another, or is there another "single-queue" scalable mechanism we could rely on (similar to a conversation)?
    Thanks,
    Jay :)
    If you shake a kettle, does it boil faster?

    Instead of creating a dedicated queue for each client, you can create a single topic for order status updates. Each client would then receive from its own subscription, with SAS authentication so that no client can read another client's messages.
    Here is a good sample on using SAS over topics/subscriptions:
    https://code.msdn.microsoft.com/windowsazure/Using-Shared-Access-e605b37c
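    For illustration, a rough sketch of that layout using the azure-messaging-servicebus Java SDK (a newer SDK than this thread may have assumed); the topic name, subscription name and connection strings are placeholders, and each client's SAS rule would be scoped to its own subscription:

    import com.azure.messaging.servicebus.*;

    public class OrderStatusExample {
        public static void main(String[] args) {
            // Server side: publish status updates to a single topic.
            ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                    .connectionString("<namespace-level connection string>")   // placeholder
                    .sender()
                    .topicName("order-status")
                    .buildClient();
            sender.sendMessage(new ServiceBusMessage("order 42: shipped"));
            sender.close();

            // Client side: each client listens only on its own subscription,
            // using a SAS connection string scoped to that subscription.
            ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                    .connectionString("<client-A SAS connection string>")      // placeholder
                    .receiver()
                    .topicName("order-status")
                    .subscriptionName("client-a")
                    .buildClient();
            for (ServiceBusReceivedMessage msg : receiver.receiveMessages(10)) {
                System.out.println(msg.getBody());
                receiver.complete(msg);    // settle the message (default PEEK_LOCK mode)
            }
            receiver.close();
        }
    }

    A subscription rule or correlation filter can additionally ensure that each subscription only receives the updates intended for that particular client.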

  • Simple Question: How to make a .swf non-scalable

    Hey guys,
    I have been trying to search for an answer on how to make a whole .swf file unscalable but I have only found information on how to scale flash files. I did find a post that mentioned the code "Stage.scaleMode = "noScale";" but when I add that to the first frame in my timeline, nothing happens. So any help on this issue would be much appreciated. Thanks so much.
    Note: I am using action script 3.

    You can also use this frame action:
    fscommand("fullscreen", "false");
    to disable the menu use:
    fscommand("showmenu", "false");
    Hope that helps

  • PL/SQL Ad-hoc Reporting Implementation Questions

    Recently, I have been put on a development team that is developing a small reporting module for one of our applications.
    I'm trying to mask the underlying structure from the application by having the application run PL/SQL procedures which return REF CURSORS to the client with the requested data. Right now I'm struggling with how to implement the PL/SQL in the best fashion.
    Since this is a reporting tool, the user has many combinations of selections. For example, at a high level this application has a combination of multiple dropdowns (4-6), radio buttons, and check boxes. As you can see, this results in a large number of possible combinations that have to be sent to the database.
    Basically, the user chooses the following:
    1. Columns to receive (ie different SELECT lists in the PL/SQL)
    2. Specific conditions (ie different WHERE clauses in the PL/SQL)
    3. Aggregate functions (SUMS, TOTALS, AVERAGES based on #1 and #2)
    4. Trends based on #3.
    So... with that said I see two possibilities:
    1. Create a static query for each combination of parameters (in this case that would most likely result in at least 300 queries that would have to be written, possibly 600+).
    The problem I see with this is that I will have to write a significant number of queries. This is a lot of front-end work that, while tedious, could result in a better-performing system because it would be a parse-once, execute-many scenario, which is scalable.
    The downside though is that if any of the underlying structure changes I have to go through and change tens of queries.
    2. Use DBMS_SQL and dynamically generate the queries based on input conditions.
    This approach (possibly) sacrifices performance (a parse-once, execute-once situation), but has increased maintainability because it is more likely that I'll have to make one change instead of a number of changes as in scenario 1.
    A downside to this is that it may be harder to debug (and hence maintain) because the SQL is generated on the fly.
    My questions to you all are:
    1. Which approach would best balance maintainability and performance?
    2. Are there any other approaches to using PL/SQL as a reporting tool that I am not thinking of?
    The database is 10.2.0.3, and the 'application' is PHP 5.1 running on IIS 6.
    If you need me to provide any additional information please let me know.
    Thanks!

    Ref cursors are an ugly solution here (different though in 11g).
    You build a dynamic SQL statement. It must/should have bind variables. But a ref cursor does not allow you to dynamically bind values to bind variables: you need to code the actual bind variables into the OPEN ... FOR statement, and at coding time you have no idea how many bind variables there will be in that dynamic SQL for the ref cursor.
    The proper solution is DBMS_SQL, as it allows exactly this. That is also one of the reasons why APEX uses it for its report queries.
    The only sensible way to implement this type of thing with ref cursors in PL/SQL is not to try to make it generic (as one could with DBMS_SQL). Instead, use polymorphism (overloading) and have each procedure construct the appropriate ref cursor with its bind variables.
    E.g.
    SQL> create or replace package query as
      2    procedure Emp( c in out sys_refcursor );
      3    procedure Emp( c in out sys_refcursor, nameLike varchar2 );
      4    procedure Emp( c in out sys_refcursor, deptID number );
      5  end;
      6  /
    Package created.
    SQL>
    SQL> create or replace package body query as
      2
      3    procedure Emp( c in out sys_refcursor ) is
      4    begin
      5      open c for select * from emp order by 1;
      6    end;
      7
      8    procedure Emp( c in out sys_refcursor, nameLike varchar2 ) is
      9    begin
     10      open c for select * from emp where ename like nameLike order by 1;
     11    end;
     12
     13    procedure Emp( c in out sys_refcursor, deptID number ) is
     14    begin
     15      open c for select * from emp where deptno = deptID order by sal;
     16    end;
     17  end;
     18  /
    Package body created.
    SQL>
    SQL> var c refcursor
    SQL> exec query.Emp( :c, 'S%' )
    PL/SQL procedure successfully completed.
    SQL>
    SQL> print c
    EMPNO  ENAME  JOB      MGR   HIREDATE             SAL   COMM  DEPTNO
     7369  SMITH  CLERK    7902  1980-12-17 00:00:00   800           20
     7788  SCOTT  ANALYST  7566  1987-04-19 00:00:00  3000           20
    SQL>
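    For what it's worth, a ref cursor opened this way can be consumed from any client that supports cursors; here is a rough JDBC sketch (in Java rather than the PHP mentioned in the question), assuming Oracle's JDBC driver is on the classpath and using placeholder connection details:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class EmpReport {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");  // placeholder
                 CallableStatement cs = conn.prepareCall("{ call query.emp(?, ?) }")) {
                cs.registerOutParameter(1, oracle.jdbc.OracleTypes.CURSOR);  // the ref cursor OUT parameter
                cs.setString(2, "S%");                                       // the nameLike parameter
                cs.execute();
                try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("EMPNO") + " " + rs.getString("ENAME"));
                    }
                }
            }
        }
    }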

  • License Question for SharePoint Foundation 2010

    I have a question as it regards to the user agreement for SharePoint Foundation 2010. 
    When I deploy code to the front end of the Foundation service which directly modifies the code on the front end to make the system do anything different from out of the box, does this mean I have violated the user agreement?
    For instance: using jQuery to manipulate the look and feel of a form, using third-party tools like SPServices, attaching SharePoint Designer workflows, modifying the server end by using the STSADM command line to turn features on and off, creating and deploying site collection features using Visual Studio, or creating web parts?
    Also, if I am using this software for business purposes of any kind, does that mean I need to purchase the software?
    I have received a copy of SharePoint Foundation along with a Server 2008 R2 OS which was purchased.
    If I use Foundation to run my business processes, am I in violation of the user agreement, and do I need to purchase SharePoint Server 2010 with CALs?

    The official answer: "I'm not a licensing person. Please contact Microsoft."
    In general:
    Foundation is a "free" download, but can only be installed on Windows Server. You can install Foundation 2010 on Windows 7, but only for testing/development, not as a production server.
    Foundation is "free" in that it is licensed as part of the Windows Server it is installed on.
    If you have 500 users, then Windows Server will need to be licensed for 500 users. The Windows Server licensing will also depend on if the users are internal or on the internet.
    SQL Server: If you are using SQL Express, then no license costs, but the number of SharePoint users may impact the licensing costs of your SQL Server.
    The choice between Foundation and Standard or Enterprise comes down to the list of features needed and scalability (how many users?).
    "JQuery, SPServices, SharePoint Designer workflows, STSADM, Visual Studio..."
    None of those will impact your licensing. Anything that you do to directly query or manipulate the SQL tables will impact your support. SharePoint is a platform and you are expected to configure and customize it.  
    Mike Smith TechTrainingNotes.blogspot.com
    Books:
    SharePoint 2007 2010 Customization for the Site Owner,
    SharePoint 2010 Security for the Site Owner

  • MSI Big Bang Marshal - Questions

    Hello all
    I am new here and I have been doing research on components for a new computer I am making a shopping list for. I have a few questions I have not been able to find any information about online. Here goes:
    1. Anything specific I should know about the motherboard other than what's in the specs? Problems, quirks, etc?
    2. Does the motherboard only support 2x Dual Video cards like 2x Nvidia GTX 590 or can I use 4x Nvidia GTX 480s?
    3. P67 motherboards with the nf200 on them support 24 pci-e lanes. if I use:
    PCI_E1 - 8x
    PCI_E3 - 8x
    PCI_E5 - 8x
    PCI_E7 - 8x
    Makes up 24
    But there are still:
    PCI_E2 - 1x
    PCI_E4 - 1x
    PCI_E6 - 1x
    PCI_E8 - 1x
    Left. Does that mean there are no lanes left and I couldn't even fill anything in the other 4 slots, like PCI-E sound, etc?
    4. If you do Dual, Tri, Quad-SLI and have a PhysX card, how many PCI-E lanes will your PhysX card consume? Is it set by the NVIDIA control panel, by the number of lanes of the chipset divided by the number of video cards, or by the card itself?
    5. When people say Dual, Tri, Quad-SLI +PhysX, does it mean (Dual -1 (PhysX), Tri -1 (PhysX), Quad -1 (PhysX)) or does it mean (Dual +1, Tri +1, Quad +1)?
    6. I know the scalability of increasing arrays of video cards is pretty poor, but could I do Quad SLI +PhysX using the MSI Big Bang Marshal?
    7. Which is most advantageous: Tri SLI +PhysX, Quad SLI, or Quad SLI +PhysX?
    8. Sorry, probably not related, but I wanted to do RAID with SSDs for very fast reads and writes yet have redundancy. I wanted to get some OCZ Vertex 3s and I was reading about how RAID 5 was interesting, but apparently RAID 5 increases the degradation of SSDs? Has anyone experienced this or does anyone have any suggestions on the best RAID configurations?
    Thank you for all of your help

    Quote
    P67 motherboards with the nf200 on them support 24 pci-e lanes. if I use:
    PCI_E1 - 8x
    PCI_E3 - 8x
    PCI_E5 - 8x
    PCI_E7 - 8x
    Makes up 24
    By my count, that makes 32 lanes. If you have not already done so, a lot of preparatory information can be gathered at the MSI Global Website. All the basic & more detailed information can be found there. The downloadable PDF manual is usually much more comprehensive than the manual that comes with the board as well, and can probably answer a lot of your questions. I don't believe the board can do Quad GTX 480 SLI.

  • Questions before an internal lab POC (on old underperforming hardware)

    Hello VDI users,
    NB: I initially asked this list of questions to my friends at the
    SunRay-Users mailing list, but then I found this forum as a
    more relevant place.
    It's a bit long and I understand that many questions may have
    already been answered in detail on the forum or VDI wiki. If it's
    not too much a burden - please just reply with a link in this case.
    I'd like this thread to become a reference of sorts to point to our
    management and customers.
    I'm growing an interest to try out a Sun VDI POC in our lab
    with VirtualBox and Sunrays, so I have a number of questions
    popping up. Not all of them are sunray-specific (in fact, most
    are VirtualBox-related), but I humbly hope you won't all flame
    me for that?
    I think I can get most of the answers by experiment, but if
    anyone feels like sharing their experience on these matters
    so I can expect something a priori - you're welcome to do so ;)
    Some questions involve best practices, however. I understand
    that all mileages vary, but perhaps you can warn me (and others)
    about some known-not-working configurations...
    1) VDI core involves a replicated database (such as MySQL)
    for redundant configuration of its working set storage...
    1.1) What is the typical load on this database? Should it
    just be "available", or should it also have a high mark
    in performance?
    For example, we have a number of old Sun Netra servers
    (UltraSPARC-II 450-650MHz) which even have a shared SCSI
    JBOD (Sun StorEdge S1 with up to 3 disks).
    Would these old horses plow the field well? (Some of them
    do run our SRSS/uttsc tasks okay)
    1.2) This idea seems crippled a bit anyway - if the database
    master node goes down, it seems (but I may be wrong) that
    one of the slave DB nodes should be promoted to a master
    status, and then when the master goes up, their statuses
    should be sorted out again.
    Or the master should be made HA in shared-storage cluster.
    (Wonder question) Why didn't they use Sun DSEE or OpenDS
    with built-in multimaster replication instead?
    2) The documentation I've seen refers to a specific version
    of VirtualBox - 2.0.8, as the supported platform for VDI3.
    It was implied that there were specific features in that
    build made for Sun VDI 3 to work with it. Or so I got it.
    2.1) A few versions rolled out since that one, 3.0 is out
    now. Will they work together okay, work but as unsupported
    config, not work at all?
    2.2) If specifically VirtualBox 2.0.8 is to be used, is there
    some secret build available, or the one from Old Versions
    download page will do?
    3) How much a bad idea is it to roll out a POC deployment
    (up to 10 virtual desktop machines) with VirtualBox VMs
    running on the same server which contains their data
    (such as Sun Fire X4500 with snv_114 or newer)?
    3.1) If this is possible at all, and if the VM data is a
    (cloned) ZFS dataset/volume, should a networked protocol
    (iSCSI) be used for VM data access anyway, or is it
    possible (better?) to use local disk access methods?
    3.2) Is it possible to do a POC deployment (forfeiting such
    features as failover, scalability, etc.) on a single
    machine altogether?
    3.3) Is it feasible to extend a single-machine deployment
    to a multiple-machine deployment in the future (that
    is, without reinstalling/reconfiguring from scratch)?
    4) Does VBox RDP server and its VDI interaction with SRSS
    have any specific benefits to native Windows RDP, such
    as responsiveness, bandwidth, features (say, microphone
    input)?
    Am I correct to say that VBox RDP server enables the
    touted 3D acceleration (OpenGL 2.0 and DX8/9), and lets
    connections over RDP to any VM BIOS and OSes, not just
    Windows ones?
    4.1) Does the presence of a graphics accelerator card on
    the VirtualBox server matter for remote use of the VM's
    (such as through a Sun Ray and VDI)?
    4.2) Concerning the microphone input, as the question often
    asked for SRSS+uttsc and replied by RDP protocol limits...
    Is it possible to pass the audio over some virtualized
    device for the virtual machine? Is it (not) implemented
    already? ;)
    5) Are there known DOs and DON'Ts for VM desktop workloads?
    For example, simple office productivity software users
    and software developers with Java IDEs or ongoing C/C++
    compilations should have different RAM/disk footprints.
    Graphics designers heavy on Adobe Photoshop are another
    breed (which we've seen to crawl miserably in Windows
    RDP regardless of win/mstsc or srss/uttsc clients).
    Can it be predicted that some class of desktops can
    virtualize well and others should remain "physical"?
    NB: I guess this is a double-question - on virtualization
    of remote desktop tasks over X11/RDP/ALP (graphics bound),
    as well as a question on virtualization of whole desktop
    machines (IO/RAM/CPU bound).
    6) Are there any rule-of-thumb values for virtualized
    HDD and networking filesystems (NFS, CIFS) throughput?
    (I've seen the sizing guides on VDI Wiki; anything else
    to consider?)
    For example, the particular users' data (their roaming
    profiles, etc.) should be provisioned off the networked
    storage server, temporary files (browser caches, etc.)
    should only exist in the virtual machine, and home dirs
    with working files may better be served off the network
    share altogether.
    I wonder how well this idea works in real life?
    In particular, how well does a virtualized networked
    or "local" homedir work for typical software compile
    tasks (r/w access to many small files)?
    7) I'm also interested in the scenario of VMs spawned
    from "golden image" and destroyed after logout and/or
    manually (i.e. after "golden image"'s update/patching).
    It would be interesting to enable the cloned machine
    to get an individual hostname, join the Windows domain
    (if applicable), promote the user's login to the VM's
    local Administrators group or assign RBAC profiles or
    sudoer permissions, perhaps download the user's domain
    roaming profile - all prior to the first login on this
    VM...
    Is there a way to pass some specific parameters to the
    VM cloning method (i.e. the user's login name, machine's
    hostname and VM's OS)?
    If not, perhaps there are some best-practice suggestions
    on similar provisioning of cloned hosts during first boot
    (this problem is not as new as VDI, anyways)?
    8) How great is the overhead (quantitative or subjective)
    of VM desktops overall (if more specific than values in
    sizing guide on Wiki)? I've already asked on HDD/networking
    above. Other aspects involve:
    How much more RAM does a VM-executing process typically
    use than is configured for the VM? In the JavaOne demo
    webinar screenshots I think I've seen a Windows 7 host
    with 512 MB RAM, and a VM process sized at about 575 MB.
    The Wiki suggests 1.2 times more. Is this a typical value?
    Are there "hidden costs" in other VBox processes?
    How efficiently is the CPU emulated/provided (if the VBox
    host has the relevant VT-x extensions), especially for
    such CPU-intensive tasks as compilation?
    *) Question from our bookkeeping team:
    Does creating such a POC lab and testing it in office's
    daily work (placing some employees or guests in front of
    virtual desktops instead of real computers or SR Solaris
    desktops) violate some licenses for Sun VDI, VirtualBox,
    Sun Rays, Sun SGD, Solaris, etc? (The SRSS and SSGD are
    licensed; Solaris is, I guess, licensed by the download
    form asking for how many hosts we have).
    Since all of the products involved (sans SGD) don't need
    a proof of license to install and run, and they can be
    downloaded somewhat freely (after quickly clicking thru
    the tomes of license agreements), it's hard for a mere
    admin to reply such questions ;)
    If there are some limits (# of users, connections, VMs,
    CPUs, days of use, whatever) which differentiate a legal
    deployment for demo (or even legal for day-to-day work)
    from a pirated abuse - please let me know.
    //Jim
    Edited by: JimKlimov on Jul 7, 2009 10:59 AM
    Added licensing question


  • Questions on Weblogic RMI

    Hi
    I've a few questions on WL RMI that I couldn't figure out from the documentation.
    Suppose I have a cluster containing Weblogic servers server1 and server2. Server1 hosts an RMI object o1 that is bound to the cluster-wide JNDI tree. O1 implements remote interface i1 that has a method m1 which takes as an argument an object o2 implementing remote interface i2. That is:
    public class o1 implements i1 {...}
    public interface i1 extends weblogic.rmi.Remote {
        public void m1 (i2 _o2);
    }
    public class o2 implements i2 {...}
    public interface i2 extends weblogic.rmi.Remote {...}
    Now if inside server2 I create o2 using the default constructor, look up a reference to o1 and call method m1, will o2 get passed by value or by reference? Is there any way to control this? What if I don't use the constructor to create the object, but the object is hosted in server2 and I get a reference to it using a JNDI lookup? In short, how does WL RMI decide when to pass an object by reference and when by value?
    I'm trying to increase the scalability of a system by distributing its modules as RMI objects to several machines running Weblogic server, and I need to know the details of how WL RMI works. The documentation seems to be rather inadequate...
    Thanks for reading this far :)
    - Juha

    Actually, O2 will always be passed by reference because it is an RMI object (i.e., it implements weblogic.rmi.Remote). If O2 were a non-RMI object, it would be passed by value if O1 is in a different process and by reference if O1 and O2 are in the same process.
    Edwin Marcial wrote:
    > My 2 cents on this:
    >
    > I believe in this case, since O2 is a remote object, it will get passed by reference. If it were not a remote object, it would be passed by value.
    >
    > Edwin
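    To illustrate the distinction in plain Java terms (the thread uses weblogic.rmi, but the pass-by-reference vs. pass-by-value behaviour described above mirrors standard java.rmi), here is a minimal, hypothetical sketch:

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Passed by reference across JVMs: the receiver gets a stub that calls
    // back into the JVM where the object actually lives.
    interface I2 extends Remote {
        String describe() throws RemoteException;
    }

    // Passed by value across JVMs: the receiver gets its own serialized copy.
    class PlainArgument implements Serializable {
        final String data;
        PlainArgument(String data) { this.data = data; }
    }

    interface I1 extends Remote {
        void m1(I2 byReference) throws RemoteException;         // a stub is marshalled
        void m1(PlainArgument byValue) throws RemoteException;  // the object is copied
    }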

  • A few questions about NetGroup neighbor limits

    I'm playing around with a P2P messenger type of client where I'm catching neighbors when the NetGroup.Neighbor.Connect event goes off, and then using addNeighbor by adding their peerID (not the neighborID; I'm not really quite sure how/why that would be used). Then I'm having message objects sent instantly on the neighbor connect success to update the others' array of "online users".
    So far it's working really well and I feel as if it's a pretty effective system. But I just wanted to ask how well it will scale and if it's worth continuing to develop, because how well would it work if it had between 100 and 1000+ people? What is the maximum number of neighbors? Will the object I have being sent off when a neighbor connects cause an issue when it gets to be a bunch of clients all sending each other their username info on the neighbor connect event?
    Another question: when 2 clients decide to connect and, say, video/text chat, I've been having them do a direct connection via peerIDs and close their NetGroup connection so the disconnect event can delete the object with their peerID/username in it. Should I be leaving them connected as neighbors and use neighborIDs somehow? Or should I be doing something where I have a publish and play stream going at all times, but just have them pick a similarly generated play/publish channel and then neighbor-send that to each other? I just thought that if I have neighbors constantly disconnecting and doing direct connects, it could help prevent neighbor overload in case there is a limit.
    It would be awesome if someone could help share some of their knowledge on these questions, thanks!

    There is no technical limit to the number of neighbors you can have. However, you can't force a client to always be connected to a specific peer. Using "NetGroup.addNeighbor()" for a peer that is currently a neighbor (that is, for which you just got a NetGroup.Neighbor.Connect event) doesn't do anything. And if you do add a new neighbor, the group topology manager may automatically choose to disconnect it later if that neighbor isn't strictly necessary to maintain the desired topology.
    Each member of a group will naturally have about O(log N) directly connected neighbors in a group of N peers. The actual number is approximately 2 * log2 N + 13. Groups have full transitive connectivity but are not necessarily fully meshed (where each member has a direct-neighbor connection to every other member). Groups will typically be naturally fully meshed below about 17 members.
    As you've supposed, a full mesh isn't scalable to 100-1000+ members, and there's no reliable way to maintain a full mesh with the existing ActionScript APIs anyway (you can't set a neighbor to be "permanent" in the ActionScript API). Note that in a group operating normally, if an average member had 100 neighbors in the steady state, you would expect the group size to be about 2^43 =~ 9,000,000,000,000 (9 trillion) members. Groups that large are unlikely.
    If you want to send a short message to every member of the group, use NetGroup.post(). That disseminates a message through a group efficiently to each member, but not instantaneously. Scalable distributed presence in a very large group is a complex problem. Unfortunately, a naive approach will not be scalable or efficient. I encourage you to search for "newscast" and other "graph diffusion" topics in the context of P2P to get a flavor for how you might approach this problem for very large groups (of like 1000+).
    For smaller groups (of up to like 100ish) you can probably get by with having each member periodically post a presence announcement to the group, and having each member keep track of the age of each announcement it hears, expiring them after a reasonable timeout.
    Personally, I wouldn't leave the "everybody" group to do a 1:1 chat.

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, and 1-4 dimensional arrays of these. The programming problem I'm trying to avoid is that I have multiple different functions in my DLLs that all take as input or return all these datatypes. Now I can create a polymorphic interface for all these functions, but I end up having about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to open the project and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.
    I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals of my memory block as a specific data type.
    So what I thought was the following: I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. Then all of my many functions would return this kind of handle. Let's call this a data handle. I can later convert this handle into a real datatype, either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to close to 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So I practically need functionality similar to what a variant provides. I cannot use variants, since I need to avoid making memory copies, and when I convert to and from a variant my memory consumption increases threefold. I handle arrays that consume almost all available memory and I cannot accept memory being consumed ineffectively.
    The questions are: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't correctly return it from my C code immediately but only later, at the next call to the C code? Can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I really have to consider moving our project from LabVIEW to some other development environment like C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referenced the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG 9 KB

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the role of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which has those as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
    he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is that they have a 2-3k base of XP systems which they are migrating to Windows 7 over this year. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate being checked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence, the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server and client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives a request from a client, it needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has a cached response for the same request. If it does, it can send that response to the client. If there is no cached response, the OCSP Responder then checks whether it has the CRL issued by the CA cached locally. If it does, it can check the revocation status locally and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP Responder does not have the CRL cached locally, it can retrieve the CRL from the CDP locations listed in the certificate, parse the CRL to determine the revocation status, and send the appropriate response to the client.
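    For what it's worth, the client-side cost described above can be illustrated with the standard JDK certificate APIs. The sketch below is illustrative only: the CDP URL is a placeholder, and real code would normally rely on the platform's validation stack rather than hand-rolling this. The first method shows the CRL download-and-parse step; the second shows a PKIX validation in which the revocation checker prefers OCSP with CRL fallback (the JDK default).

    import java.io.InputStream;
    import java.math.BigInteger;
    import java.net.URL;
    import java.security.cert.CertPath;
    import java.security.cert.CertPathValidator;
    import java.security.cert.CertificateFactory;
    import java.security.cert.PKIXParameters;
    import java.security.cert.PKIXRevocationChecker;
    import java.security.cert.X509CRL;
    import java.security.cert.X509Certificate;
    import java.util.EnumSet;

    public class RevocationCheckSketch {

        // The CRL path: download the full CRL from a CDP URL (placeholder) and
        // look up a single serial number. The whole list is fetched and parsed
        // even though only one entry is of interest - the cost described above.
        static boolean isRevokedViaCrl(X509Certificate cert, String cdpUrl) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            try (InputStream in = new URL(cdpUrl).openStream()) {
                X509CRL crl = (X509CRL) cf.generateCRL(in);
                BigInteger serial = cert.getSerialNumber();
                return crl.getRevokedCertificate(serial) != null;
            }
        }

        // The OCSP path: let the standard PKIX validator ask a responder for the
        // status of just the certificates in the chain.
        static void validateWithOcsp(CertPath path, PKIXParameters params) throws Exception {
            CertPathValidator cpv = CertPathValidator.getInstance("PKIX");
            PKIXRevocationChecker checker = (PKIXRevocationChecker) cpv.getRevocationChecker();
            // By default the PKIX checker tries OCSP first and falls back to CRLs;
            // SOFT_FAIL additionally treats an unreachable responder as non-fatal.
            checker.setOptions(EnumSet.of(PKIXRevocationChecker.Option.SOFT_FAIL));
            params.addCertPathChecker(checker);
            cpv.validate(path, params);
        }
    }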

  • To use SQL or to not use SQL ..... That is the question

    A couple of posts lately have brought something to my attention that I wanted to discuss with the folks who view this forum, because I believe it is important. I highly value the opinions of many of the members here, so I think getting your insight would benefit not only me but many other forum members as well.
    This discussion stems from two posts:
    {message:id=3786432} (Billy)
    ...The question that you need to ask yourself is why use such a technique? For rendering the data a specific way in the client? Well, rendering data is NOT a SQL function and in essence a result of ignorance of how to correctly use client-server. Rendering on the client is dealt with by the client itself. Using SQL to do it.. not only nasty (as many of these examples above are), but also far from optimal and efficient SQL. And in most cases, will not scale. Increase the data volume of the table queried and there will be a hefty performance knock as SQL is incorrectly used.
    ...>
    {message:id=3914362} (Sven W.)
    ...For the Pivot considerations: it is usually much better not to try to do this inside the database. If you think about it, the data itself can be fetched from the database very easily. To do a PIVOT is a kind of GUI/layout representation of this data. This should be done in the GUI layer.
    >
    I tried to respond to the thread Billy posted in, so I'll cut and paste my response here:
    Discussion
    Where do we as database developers draw the line between correct and incorrect use of SQL? Or between rendering on the client and just returning data?
    Now that LISTAGG, PIVOT and UNPIVOT are all available to us, would these be considered correct uses of SQL?
    Where does this leave the TO_CHAR function? Is this considered rendering?
    I'm fully expecting a fuzzy answer along the lines of "do the work where it makes the most sense" from an ease-of-development and maintainability perspective, but I just wanted to ask.
    Hopefully this is a valuable discussion.
    Thanks!
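    To make the question concrete, here is a small JDBC sketch (the ORDERS table and its columns are invented purely for illustration). The first statement pushes display work into SQL with TO_CHAR and LISTAGG; the second returns plain values and leaves aggregation and formatting to the client, which is exactly the split being debated.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class RenderingSplitSketch {

        // Display work done in SQL: date formatting and string aggregation.
        static final String FORMATTED_IN_SQL =
            "SELECT customer_id, " +
            "       TO_CHAR(MIN(order_date), 'DD-MON-YYYY') AS first_order, " +
            "       LISTAGG(product, ', ') WITHIN GROUP (ORDER BY product) AS products " +
            "  FROM orders GROUP BY customer_id";

        // Plain data out of SQL: the client decides how to present it.
        static final String PLAIN_DATA =
            "SELECT customer_id, order_date, product FROM orders ORDER BY customer_id, product";

        static Map<Long, List<String>> fetchPlain(Connection con) throws SQLException {
            Map<Long, List<String>> productsByCustomer = new LinkedHashMap<>();
            try (PreparedStatement ps = con.prepareStatement(PLAIN_DATA);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    productsByCustomer
                        .computeIfAbsent(rs.getLong("customer_id"), k -> new ArrayList<>())
                        .add(rs.getString("product"));
                }
            }
            return productsByCustomer;
        }

        // Presentation decisions (separator, ordering of the display string) live
        // in the client, so a layout change never touches the SQL.
        static String render(long customerId, List<String> products) {
            return customerId + ": " + String.join(", ", products);
        }
    }

    Whether the first form is "wrong" is precisely the question being asked - it is certainly convenient - but the second keeps the query reusable by any front end.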

    Let me give a simple example. You can store images in a table as a LOB. You can serve these images to a web browser client via mod_plsql.
    However, the data is static. Serving it requires I/O (and some hefty I/O for larger images). What is the biggest performance penalty we have in Oracle? I/O. What is affected by doing I/O to read these images? The buffer cache (which will age out other data in the cache).
    Where else can we store this data? The web server. At what cost to the performance of Oracle? None. Impact on web server? Heck, web servers are designed at their very core to do this!
    So where is the best place to store static images in this specific case? Not the database, but the web server.
    Now simply extend this concept to the client - where is the best place to render data?
    Should the data be formatted for rendering (e.g. converted into HTML) in the database layer, or should it rather be done in the presentation layer?
    Now I can already hear the argument that the former is exactly what we are doing using APEX. We create dynamic HTML pages on the Oracle server side and then dish that up to the rendering layer to display.
    Two issues need to be considered. Firstly, this is not done using SQL. It is done using a procedural language called PL/SQL - not native SQL. PL/SQL in this case is used exactly as Java or PHP or Perl or any other "+app layer+" language would be used. It just happens that PL/SQL resides in the database too. But do not mistake what it really is - the application layer.
    The second issue drives home the point that even in 3-tier client-server, the application layer is not the best place to do the formatting for the rendering layer. Consider Web 2.0, a.k.a. AJAX, where the app layer delivers a dynamic rendering engine (as JavaScript) to the rendering layer. After that, rendering and formatting are done solely inside that rendering layer, and the interaction between it and the app layer consists of requests for new/fresh data to be rendered.
    Why is AJAX becoming so popular? Because of key issues and concepts like performance, a rich client interface, and so on.
    This all points to the fundamental principle - let the rendering layer do its thing and let the SQL layer do its (separate and different) thing - still holding true.
    Yes, we may not always stick to this principle - as when we do the rendering (creating HTML) in PL/SQL using APEX, for example - but this is not because the principle is unsound. It is because of technology reasons (different browsers, different behaviour), lack of support for W3C standards (hello IE) and so on.
    It is only recently that these problem areas have been meaningfully addressed, which is why rendering frameworks like extJS are the (rendering layer) future of 3-tier client-server.
    If the concept of using SQL to perform rendering and formatting had any substance, then there would have been a lot of resistance to AJAX, for example. The reverse is true: we all want to use SQL to do SQL and want the rendering layer to do its thing, without us having to code SQL specifically to support rendering and formatting. It is clunky. It slows down the SQL (every formatting function is a tiny overhead that adds up). It does not bode well for maintenance and changes to the presentation layer. And all those tiny overheads can spell doom for scalability.
    I do not see any gray lines here, or a question of "+opinion+", or "+it depends+". The architecture is clear. The fundamentals are sound.
    The real issue is how we choose to apply these fundamentals. But an "+incorrect+" application of these fundamentals does not invalidate the fundamentals.
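    As a rough illustration of the AJAX-style split described above, the sketch below shows an app-layer endpoint that ships raw rows as JSON and leaves every formatting decision to the browser. The servlet, the connection details and the ORDERS table are placeholders, and the hand-built JSON skips escaping, so treat it as a sketch of the layering rather than production code.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // App layer in the AJAX style: serve data, not HTML. The JavaScript
    // rendering layer in the browser does the pivoting, date formatting and
    // markup generation with the rows it receives here.
    public class OrdersDataServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            StringBuilder json = new StringBuilder("[");
            try (Connection con = getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT customer_id, order_date, product FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                boolean first = true;
                while (rs.next()) {
                    if (!first) json.append(',');
                    first = false;
                    // No TO_CHAR, no HTML: dates go out in ISO form, untouched by SQL.
                    json.append("{\"customerId\":").append(rs.getLong("customer_id"))
                        .append(",\"orderDate\":\"").append(rs.getDate("order_date").toLocalDate())
                        .append("\",\"product\":\"").append(rs.getString("product"))
                        .append("\"}");
                }
            } catch (SQLException e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
                return;
            }
            json.append(']');
            resp.setContentType("application/json");
            PrintWriter out = resp.getWriter();
            out.print(json);
        }

        private Connection getConnection() throws SQLException {
            // Placeholder: a real deployment would use a pooled DataSource from JNDI.
            return DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/orcl", "app", "secret");
        }
    }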
