Schema Design for Worklist Application - best practice?

Hello,
We are designing the schema for a workflow application, and I'm wondering what kind of XML Schema would be best suited for the JSP generation of the Workflow Wizard.
So far I have found the following through some tests (please correct me if I'm wrong):
- Only elements are mapped to JSP fields, not attributes
- If an element has a single-letter name, the field label is omitted entirely in the JSP (a bug?!)
- For EVERY parent node, an HTML table is generated in the JSP containing all the simple nodes under that parent. If a parent node contains another parent node, both tables are generated at the same level.
And I haven't found any way to create a drop-down list or checkboxes/radio buttons from the XSD definition (e.g. an enumeration as the element type).
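For illustration, here is a minimal sketch of the element-only style of schema these observations suggest (all names are invented; per the table observation above, the nested Requester node should produce a second table at the same level):

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="WorkItem">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Title" type="xs:string"/>
        <xs:element name="Priority" type="xs:string"/>
        <!-- a nested parent node -->
        <xs:element name="Requester">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Name" type="xs:string"/>
              <xs:element name="Email" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>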
I would really appreciate it if someone could share some experience in this area, many thanks in advance!
regards
ZHU Jia

Similar Messages

  • Designing for Mobile Applications

    I have a basic 6-page web site for our organization. Since many of our members now use their smart phones for web browsing, I am thinking of adjusting my site to accommodate them. Since I have never designed for mobile applications, I just wanted to get some professional feedback so I know the proper path to take. The site is a simple three-column layout with header and footer, primarily text with scattered graphics.
    My first thought is to just redesign the site with a smaller overall size so it would be viewable on smaller devices, such as smart phones and iPods. Although it would require a total redesign in terms of layout and reduced use of graphics, it wouldn't take me long to create since the site is small to begin with.
    My second option is to use a mobile template via jQuery. However, I have never used jQuery and am wondering whether redesigning my site using jQuery would be the preferred route, or overkill given the small size of my site.
    I guess my question is this: at what point does it become preferable to use jQuery to design mobile web sites, as opposed to just re-creating a smaller physical version of my current web site? Or is using jQuery for mobile design the accepted common practice?
    I'd appreciate anyone's professional opinion on the subject. Thanks.

    Your application's main module structure can contain different sub-modules like common/base, BCC/versioning and others. As you mentioned, all of your common server-side functionality, common components and repository definitions can go in the common/base sub-module within your main module structure. There can also be a common-ui kind of sub-module which holds common front-end stuff like JS, common CSS, images etc. Then there can be one sub-module for the actual customer-facing application. These sub-modules can declare a dependency on the common/base sub-module.
    I would also go with having a single EAR for the customer-facing app sub-module. Within that EAR you can have different web-apps (WARs) with different context-roots as per your requirements. Having different web-apps will also allow you to bring device-specific changes into the application, e.g. adding device-specific filters if required.
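    As a rough sketch of that layout (module and context-root names are invented for illustration), the EAR's application.xml could declare the web-apps like this:

    <application xmlns="http://java.sun.com/xml/ns/javaee" version="5">
      <display-name>storefront</display-name>
      <module>
        <web>
          <web-uri>store-desktop.war</web-uri>
          <context-root>/store</context-root>
        </web>
      </module>
      <module>
        <web>
          <web-uri>store-mobile.war</web-uri>
          <context-root>/m</context-root>
        </web>
      </module>
    </application>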

  • Routing error in Application Best Practice App (TDG)?

    Hi all,
    I am trying the Application Best Practices app, but whenever I select a different product, the category and the vendor shown are always the same, even though they differ in the JSON files.
    Or should it not work in mock mode?
    regards
    Bernard

    Hi,
    I'm not talking about the add. If you select a product, the VendorAddress is always the same, but in the mock data JSON file it is different!
    cheers
    Bernard

  • Slow startup of Java application - best practices for fine-tuning the JVM?

    We are having problems with a Java application which takes a long time to start up.
    In order to understand our question, we had better start with some background info; you will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have Windows XP fat clients running Java 1.6.0_18 (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE, Standard Edition, installed (Java SE 6 update 10 or better). Pretty much the usual stuff so far.
    We have done a lot of profiling of the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far, we haven't found anything pointing to shaky network connections, endless loops in the Java client code, or similar.
    Still, things are not good. Starting the application takes a long time. Too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC things are really, really bad, and it sometimes takes up to 30-40 seconds to start the application (until we arrive at the start GUI of our app).
    If we run our application, close it down, and then restart without rebooting, things are a lot better. It then usually takes something like 15-20 seconds, which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be the disk cache, where vital jars are already in the disk cache on the warm start. Does that make any sense? Virus scanners presumably run in both cases.
    People still think that 15-20 seconds for the warm start is an awfully long time, even though there is a lot, a lot, of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm and cold start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far.
    Then there is the Java Quick Starter (JQS), which "improves initial startup time for most Java applets and applications".
    That might be helpful? People on the internet seem more interested in uninstalling the thing than actually installing it, though. And it seems very proprietary; we can't give our own jar files to it?
    We could obviously try to "hide" the problem in some way and make it "seem" quicker, since perceived performance can be just as good as actual performance. But that does seem like a bad solution. So for the cold start we will probably try reading the jar files up front, so they are already in the disk cache before our application starts, and see if that helps us.
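    For example, here is a minimal sketch of such a cache warmer (the class name and the lib directory are our own invention for illustration; the idea is to run it from a login script or a small launcher before the real client starts):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Sketch: read every jar once so the OS pulls the bytes into its disk
    // cache before the real application starts (cold-start mitigation only).
    public class JarPreloader {
        public static void main(String[] args) throws IOException {
            File libDir = new File(args.length > 0 ? args[0] : "lib"); // assumed jar folder
            File[] files = libDir.listFiles();
            if (files == null) return;
            byte[] buf = new byte[64 * 1024];
            for (File jar : files) {
                if (!jar.getName().endsWith(".jar")) continue;
                FileInputStream in = new FileInputStream(jar);
                try {
                    while (in.read(buf) != -1) {
                        // discard the bytes; the read itself warms the cache
                    }
                } finally {
                    in.close();
                }
            }
        }
    }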
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance:
    java.sun.com/javase/technologies/performance.jsp
    java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    and the use of JVM flags such as -Xms, -Xmx, etc.
    And here comes the question ... ta-da ...
    Concerning the various suggested reading material:
    it is very much appreciated, but we would like to ask people here if it is possible to get more specific pointers to where the gold might be buried.
    I.e. in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
    E.g. Java garbage collection is a huge subject, and JVM settings too. Sure, in the end we will probably have to do all of this very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours.
    Young generation, large memory pages, garbage collection threads etc. all sound interesting - but what would you start with?
    If you don't have enough info to decide: what kind of profiling would you run, and which JVM settings would you then adjust in your trials?
    In this pressed-for-time scenario, ignorance is not bliss: it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so - reading, experimenting etc. - in our project.
    So, all in all, what kind of performance improvement can we hope for? 5 out of 20 seconds on the warm start? Or is it 10% nitpicking? What's the ballpark figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
    Maybe someone out there has fine-tuned JVM parameters in a similar PC environment, with similar fat clients? "Fine-tuning so-and-so gave 5 seconds, so start your work with these one or two parameters."
    Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

    Thanks for the helpful answers from both you and kajbj.
    The app doesn't use shared network drives.
    > What are you doing between main starting to execute and the UI being displayed?
    Basically, calculating what to show in the UI. Not much server access; there are some reads from a cache, but the profiling doesn't indicate that this should be a problem. Sure, I could shift the startup work to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
    > Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    With profiling I basically find that ''everything'' takes a lot longer in the cold start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? Don't you guys think the Java Quick Start approach, with more jars in the disk cache, will give us something? And how should that be done; what do people do?
    For the class loader, I read something about:
    1. Bootstrap class loader
    2. Extensions class loader
    3. System class loader
    and I am wondering if this has something to do with the cold start problem.
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move our app classes to ext? Put them in one jar file? (We have many.) Any best practice about that?
    Otherwise it seems to me that it must be about fine-tuning the JVM?
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP, CHE22 writes:
    "You are right; -Xms1600M works well, but -Xms1700M bombs"
    Is that one best practice, or what?
    On garbage collection there are numerous posts, and much "masters of the Java black art", IMHO. And according to the profiling, GC is not really that much of a problem anyway. Still,
    based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me", or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs]
    So is setting -Xms, -Xmx, and -XX:+AggressiveHeap best practice? What kind of performance improvement should we expect?
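    To make that concrete, an experiment might start from a launch line like the one below (the jar name and heap numbers are pure guesses for illustration, to be measured against our own startup times; note that the -XX:+AggressiveHeap text quoted above targets large, long-running server jobs, so fixed -Xms/-Xmx is probably the safer experiment on a desktop client):
    java -Xms256m -Xmx256m -verbose:gc -jar fatclient.jar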
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes,
    for this specific issue, I greatly improved my performances by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, finding
    the sweet spots for compilation thresholds has been tricky, so we're
    still experimenting with the recompilation policy. Work on it
    continues.
    sounds like there is no such straightforward path; it all depends...
    OK, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
    I have the feeling that we might not be able to improve performance that way?
    Any comments?
    Otherwise I was wondering about
    -XX:CompileThreshold=50 and -Xmaxjitcodesize (large, but how large?)
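    If we do experiment there, I assume it is just more launcher flags, e.g. (the threshold value is simply copied from the quote above, not a recommendation):
    java -XX:CompileThreshold=50 -jar fatclient.jar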
    Somehow we still feel that someone out there must have experienced similar problems. But obviously there is no guarantee that that someone will surf by here!
    In C++ we used to just write everything ourselves. Here it does seem to be a question about the right use of other people's stuff?
    Where you are kind of hoping for a shortcut, so you don't have to read an endless number of documents but can find one short document that actually addresses your problem ... well.
    -Simon

  • Need to know if there is a UI designer for mobile applications in NWDS

    Hello experts,
    I am planning to create a mobile application using NWDS.
    Can I design a mobile application in NWDS -> MI Application using the UI tools?
    Can I use Web Dynpro to develop mobile applications for both handhelds and laptops?
    If we can, could someone please send any supporting documents for the same?
    I would also like to know the best way to design our own screens (for new functionality not existing in SAP) for mobile devices (PDAs and laptops).
    thanks in advance.
    Raju

    Take a look at this link, https://discussions.apple.com/thread/2401746?start=0&tstart=0

  • Looking for Some Examples / Best Practices on User Profile Customization in RDS 2012 R2

    We're currently running RDS on Windows Server 2008 R2, controlling users' Desktops largely with Group Policy and using Folder Redirection to configure their Start Menus as well.
    We've installed a Server 2012 R2 RDS box and all the applications that users will need. Should we follow the same customization steps for 2012 R2 that we used in 2008 R2? I would love to see some articles by someone who has customized a user profile/Desktop in 2012 R2, to see what's possible.
    Orange County District Attorney

    Hi Sandy,
    Here are some related articles below for you:
    Easier User Data Management with User Profile Disks in Windows Server 2012
    http://blogs.msdn.com/b/rds/archive/2012/11/13/easier-user-data-management-with-user-profile-disks-in-windows-server-2012.aspx
    User Profile Best Practices
    http://social.technet.microsoft.com/wiki/contents/articles/15871.user-profile-best-practices.aspx
    Since you want to customize user profile, here is another blog for you:
    Customizing Default users profile using CopyProfile
    http://blogs.technet.com/b/askcore/archive/2010/07/28/customizing-default-users-profile-using-copyprofile.aspx
    Best Regards,
    Amy

  • Search for ABAP Web Dynpro best practice and/or evaluation grid

    Hi Gurus,
    Managers and team leaders are facing the question of developing SAP applications for the web, and functional people propose web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing claims about Web Dynpro response times: the business wants a 3-second response time, and we have 20 or 25 seconds.
    I want to give functional people a kind of recommendation document explaining that in certain cases the usage of Web Dynpro will not be a benefit for the business.
    I know that the transfer of data, the complexity of the screen, and also the hardware are some of the keys, but I would appreciate some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot; I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the [first part|http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/]. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • SOA composite application best practice

    Hi All,
    We are running SOA Suite 11g. One of my colleagues said that we should always have a mediator in our composite applications instead of just exposing the BPEL process as a SOAP service. Is this a correct statement? If so, why is it good practice? I'm still trying to grasp the concepts and best practices for SOA, so any information is greatly appreciated.
    Thanks,
    S

    If you place a mediator in between them, you can change the BPEL interface without having to change your composite's SOAP interface.
    That's one thing which could be considered a best practice.

  • NSM Event Agent for AD location - Best practice

    Hello,
    We are currently designing our NSM 3.1 for AD implementation and would like some guidance with regard to installing the NSM Event Agent. We have come up with two options:
    The first option is to install the NSM Event Agent on a Domain Controller where new user accounts are provisioned.
    The second option is to install the NSM Event Agent on a server with the other NSM components.
    The argument for option 1 is that NSM will be notified as soon as an account is created.
    The argument for option 2 is that MS best practice is that no other software should be installed on a DC and that the NSM Event Agent will perform a network request to talk to the nearest domain controller to obtain a list of changes since it last connected.
    Is there any preferred option, or does it not matter?
    Regards,
    Jonathan

    On 10/28/2013 7:16 AM, JonathanCox wrote:
    > [question quoted in full - snipped]
    Jonathan,
    Unlike eDirectory event monitoring, Active Directory event monitoring is
    accomplished with a polling mechanism. Therefore putting your Event
    Monitor on the domain controller will not significantly increase
    performance. As long as the Event Monitor is in a site with a domain
    controller, it should pick up events as quickly as it can.
    For further reading on AD sites and domain/forest topology we recommend
    reviewing http://technet.microsoft.com/en-us/l.../cc755294.aspx.
    Remember that for AD, NSM requires only one Event Monitor per domain
    (and in fact you'll only be able to authorize one Event Monitor per
    domain through the NSM Admin client.) However, deploying a second Event
    Monitor as a backup may be helpful. When the AD Event Monitor is
    installed and configured for the first time, it first has to build a
    locally-cached replica of the domain it resides in. In a large domain
    this can take a long time, so having a second EM already running, which
    can be authorized immediately if the primary EM goes down, will ensure
    that you catch up with events in AD more quickly.
    -- NFMS Support Team

  • Looking for team development best practices

    We are new to Flex and have a team of five developers with a JEE background. My question is how best to organize a Flex project so it's efficient for everyone to work together. Coming from typical JEE web application development, it's quite straightforward to break features up into separate Java classes and JSP pages; it reduces the chance of multiple people working on the same file and the merging hassle. I am looking for best practices for breaking up Flex code, especially MXML, so it is easy for a team of developers to work on the project.


  • Hotfix Application Best Practices

    I have a twofer with regards to applying a hotfix. We deployed Config Manager 2012 RTM, upgraded to SP1, and then upgraded to R2. We have never applied a hotfix or CU before, so there is a bit of mysteriousness with regard to what the best practices are.
    We are applying hotfix 2910552 to address slow imaging speeds. These questions are pretty basic, but I wanted to get some informed opinions.
    What is the best rollback procedure in the event of problems? I consider the hotfix low risk, but there is some concern from others above me. We are planning on taking snapshots of the 3 site servers and the DB server in our hierarchy, but not the DPs. Does this seem sound, or is there a better technique?
    How essential is it to update the clients in our environment in a timely fashion, or at all? I am going to have the packages created, but I don't know whether I should deploy them immediately or not. Our server group has some concerns about applying the patch to the Config Manager clients on our servers during our patching windows.
    Any insight is appreciated. Thanks!
    Bryan

    There's more risk in taking snapshots, as they are completely and explicitly unsupported and almost certainly would cause issues, particularly since your DB is separate from your site server.
    Rollback is simply uninstalling the hotfix. That hotfix addresses a very niche issue that only manifests itself during OSD; thus, it's only important to roll it out to clients before you reimage them in a refresh scenario. An alternative rollback is simply reinstalling the site and restoring your DB. This sounds painful, and while it would take a bit of time, it's actually rather painless and works quite well.
    This all begs the question, though: you really should just do CU3. There are tons of other meaningful and impactful fixes in the CUs that will improve the overall stability and even functionality of the site and clients.
    Concerns about applying hotfixes should be addressed by performing the update in a lab first. There is no other way to comfort risk-averse folks except by showing them that it works. Additionally, you can put forth evidence from the community that CU application to ConfigMgr is almost always smooth and uneventful. Can something go wrong? Of course. I could get hit by lightning sitting in my chair, but that doesn't mean I stay in bed all day.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Bandwidth utilization (Avg or Max) for capacity planning - best practice

    Hello all - this is a conceptual, non-Cisco-product question. I hope you can help me find the best industry practice.
    I am doing capacity planning for WAN link bandwidth. Studying last month's bandwidth utilization in the MRTG graph, I see two values:
    Average
    Maximum.
    To measure how much bandwidth my remote location is using, which value should I use: average or maximum?
    The average is always low, e.g. 20% to 30%.
    The maximum is a continuous 100% for 3 hours in 3 different intervals in a day, and becomes 60% for the rest of the day.
    What is the best practice followed in the networking industry to derive the upgrade size of the bandwidth from the utilization graph?
    regards,
    SAIRAM

    Hello.
    It makes no sense to use the average over a whole day (or month), as you do capacity management to avoid business impact from link utilization, and the average does not help you catch whether the end users experience any performance issues.
    Typically your capacity-management algorithm/thresholds depend on traffic patterns, and these are really different cases if you run SAP+VoIP vs. YouTube+Outlook. If you have any business-critical traffic, you need to deploy QoS (unless you are allowed to increase link bandwidth infinitely).
    So, I would recommend using the 95th percentile of the maximum values at a 5-15 minute interval (your algorithm/thresholds will be really sensitive to the polling interval, so choose it carefully). After collecting a baseline (for a month or so), go and ask users about their experience and try to correlate poor experience with traffic bursts. This will help you define thresholds for link-upgrade triggers.
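    As a small illustration of that nearest-rank 95th-percentile computation (the sample values below are invented; feed it your real 5-15 minute utilization samples):

    import java.util.Arrays;

    // Sketch: nearest-rank 95th percentile of link-utilization samples (in %).
    public class Percentile95 {
        static double percentile95(double[] samples) {
            double[] sorted = samples.clone();
            Arrays.sort(sorted);
            int idx = (int) Math.ceil(0.95 * sorted.length) - 1; // nearest-rank index
            return sorted[Math.max(idx, 0)];
        }
        public static void main(String[] args) {
            double[] utilization = {20, 25, 100, 100, 30, 60, 100, 40, 22, 28};
            System.out.printf("95th percentile: %.1f%%%n", percentile95(utilization));
        }
    }
    The point of using the 95th percentile instead of the raw maximum is that a handful of short bursts no longer dominate the sizing decision.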
    PS: proactive capacity management includes link planning for new sites and their impact on existing links (in HQ and other spokes).
    PS2: I would also recommend separately tracking utilization during business hours (business traffic) and non-business hours (service or backup traffic).

  • Authorizations for tasks (R_UC_TASK) / Best Practice SEM-BCS authorization

    Dear Experts,
    I am quite new to authorizations and in particular to SEM-BCS authorization. So I would be happy if you could help me with the following requirement:
    We have to set up an authorization concept for SEM-BCS. Among other things we want to set up authorizations for consolidation tasks using authorization object R_UC_TASK. With this authorization object, certain tasks can be restricted to certain characteristic values - e.g. to a certain consolidation group or a certain consolidation unit. We have defined one role per set of consolidation tasks. These roles are not restricted to any characteristic value yet. We have, for instance, a role "regional controller" who is allowed to perform certain BCS tasks on a regional level (consolidation unit level). This would mean that we would have to create the role "regional controller" for every consolidation unit - see the example below:
    Role 1: Regional Controller - Cons. Unit 1000
    Role 2: Regional Controller - Cons. Unit 1100
    Role 3: Regional Controller - Cons. Unit 1200
    Role n: Regional Controller - Cons. Unit n
    We have more than 400 consolidation units, so this would require a lot of effort. Is there instead a possibility of creating one role based on authorization object R_UC_TASK which just defines which activities can be performed (without restricting access to a certain consolidation unit), and using a second role which defines the consolidation unit access? - see the example below:
    A
    Role: Regional Controller
    Role: Cons Unit 1000
    B
    Role: Regional Controller
    Role: Cons Unit 1100
    C
    Role: Regional Controller
    Role: Cons Unit 1200
    In this case we would only have to maintain one role "Regional Controller", and we would only have to assign the restriction for the consolidation unit. How could this be realized? Or do you have any other ideas to solve this requirement in a simple way?
    Moreover I would be happy if you could tell me where I could find best practice scenarios for SEM-BCS authorizations.
    Thanks a lot in advance!
    Best regards
    Marco

    Hello Marco,
    you can enter a master role in the description tab of a role. All fields populated via program PFCG_ORGFIELD_CREATE can be maintained in the role; all other fields will be taken from the master role. So you only need to populate the field for the unit with the program.
    Good luck
    Harry

  • Large ADF Applications - Best Practice

    Hi
    We have a single ADF project (one model, one view/controller) with the following model components:
    68 AMs
    387 VOs
    175 EOs
    This project is ever expanding, and we are suffering some well-known performance problems when opening JDeveloper or the view/controller project.
    Are there any best-practice guidelines on how to structure your ADF projects?
    e.g. what is the maximum recommended number of AMs/VOs/EOs in a single project?
    We have kept everything in a single project after some advice from Oracle to help us to re-use common modules easily.
    We use JDeveloper v10.1.3 and use JHeadstart v10.1.3 SU 1 to generate our view/controller layer, using multiple faces-config files.
    Thanks
    Denis

    Hi Denis,
    We have exactly the same problem in expanding our application, but we have a single AM and fewer EOs and VOs than you right now. There are some threads discussing this issue, but I haven't found a complete and standard solution yet. The only thing I know is that almost everything in ADF can be segmented: projects, application modules, faces-config.xml, ...
    Another thing which is very important is that segmenting an application is a tradeoff: it has some advantages, but brings problems in SCM, security, ...
    S/\EE|)

  • Setting Disks/Caches/Vault for multiple projects - Best Practices

    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents). It is also best practice to save the Project File to your primary drive.
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    Thanks!

    maxwell wrote:
    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    Yes
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents).
    Yes
    It is also best practice to save the Project File to your primary drive.
    I don't. And I don't think it matters. But you should back that file up to some other drive (like Time Machine).
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    I wouldn't worry about it.
    o| TOnyTOny |o
