ZFS and fragmentation

I do not see Oracle on ZFS often; in fact, I was called in to meet my first. The database was experiencing heavy IO problems: undersized IOPS capability, but also poor performance on backups, specifically the reading part. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN while reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small and varying wildly. I suspected fragmentation, so I set off to test.
I wrote a small C program that initializes a 10 GiB datafile on ZFS, and repeatedly does:
1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, and index fast full scans)
3 - goto 1
So it's a datafile that receives random writes and is fully scanned to see the impact of those random writes on multiblock read performance. Note that the datafile is not grown; all writes go over existing data.
Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Where EXT3 (on which I ran the same tests, on the exact same storage) showed stable read timings (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and climbed to 35 ms per 128*8 KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet; the service times are still increasing, so the test is taking very long. I do expect it to level off somewhere, as the file would eventually be completely fragmented and could not be fragmented any further.
I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you that all this means downtime.
On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files will take much longer than restoring from disk backup; the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans, or perhaps unlucky, because this issue would have become impossible to ignore much earlier.
I observed the fragmentation with all settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), which is bigger than the file itself.
The question is, of course: am I missing something here? Who else has seen this behavior?

Stephan,
"well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact, and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices.
"In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
Certainly, ZFS has some very nice features.
"Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
Unfortunately you have not provided enough information to confirm this."
Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes. This fact is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that's what ZFS is designed to do with random writes: it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
"How does the ZFS pool layout look like?"
Separate pools for datafiles, redo+control, archives, disk backups, and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device; fragmentation still occurs.
"Is the whole database in the same pool?"
As in all the datafiles: yes.
"At first you should separate the log and data files into different pools. ZFS works with "copy on write""
It's already configured like that.
"How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
Yes, I have read that. We never surpassed 55% pool usage.
Thanks!

Similar Messages

  • How can I read the bootstrap files and extract the fragment-URLs and fragment-numbers in plain text?

    How can I read the bootstrap files of any HDS Live stream and extract the fragment-URLs and fragment-numbers in plain text?
    Could it be that it is some kind of compressed format in the bootstrap? Can I uncompress it with f4fpackager.exe? I could not find any download for f4fpackager.exe. I would prefer to use less code for this. Is there something in Java or JavaScript that can extract the fragment numbers?
    Thank you!

    Doesn't sound too hard to me. Your class User (the convention says to capitalize class names) will have an ArrayList or Vector in it to represent the queue, and a method to store a Packet object into the List. An array or ArrayList or Vector will hold the 10 user objects. You will find the right user object from packet.user_id and call the method.
    Please try to write some code yourself. You won't learn anything from having someone else write it for you. Look at sample code using ArrayList and Vector, there's plenty out there. Post in the forum again if your code turns out not to behave.

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server using the EBS software that is running with ZFS file systems in both the global zone and sub-zones? (NB: the server's system file stores /, /usr, and /var are UFS for all zones.)
    Edited by: neilnewman on Apr 3, 2008 6:42 AM


  • Available objects and fragmentes depending upon template selection

    Hi
    We have a use case where we have several different templates. For each template, the user is to have access to only some objects and fragments. For example, for the car template the user has access to car_registration and number_of_doors, while for the house template the user has access to address, city, etc. Does anyone know if this is possible with LiveCycle Designer?
    Cheers
    Tore

    I've finally found a useful reference for doing this here:
    http://developers.sun.com/prodtech/javatools/jscreator/reference/techart/intro_buildingcomps.html

  • Aggregation and Fragmentation

    Hi Gurus,
    What is meant by aggregation and fragmentation in OBIEE? Do we use these only when we want to improve the performance of a report? If not, in which types of scenarios would we go for these? Please, can anyone help me with an exact scenario?
    Regards,
    Rafi

    Hi,
    As per the below link
    http://108obiee.blogspot.com/2009/01/fragmentation-in-obiee.html
    For fragmentation, after fragmenting the table in the database, reimporting the tables, and making the joins in the Physical layer, we have to drag the table into the BMM layer.
    When we generate a report on the Channels table with channel id greater than 5, how will the BI Server go to the particular table (i.e. Channels_Other)? How will the BI Server follow the link between the two tables (i.e. CHANNELS and CHANNELS_OTHER) when fetching the data for the report?
    In the logical table source, why do we have to enable the option "This source should be combined with other source at this level"? If we don't enable this option, what happens?
    Thanks,
    Edited by: Rafi.B on Aug 2, 2012 1:06 AM

  • Wireless Router RTS and Fragmentation

    If I'm streaming video, does anyone know if it's better to have a large Fragmentation and RTS, or a smaller one?
    My D-Link router recommends an RTS of 2347 and Fragmentation of 2346, but I would like the best performance for video streaming. Thanks!

    well on the router's setup page, click on the advanced wireless settings... try reducing the beacon interval to 50, fragmentation and RTS threshold to 2304...also N transmission rate to 270mbps...check whether it makes any difference or not.
    (Edited post for guideline compliance. Thanks!)
    Message Edited by JOHNDOE_06 on 09-22-2007 11:35 AM

  • Floating fields and fragment subforms in Outlook 2007 do not display correctly

    I'm using LCES Forms 8.0 Update 1 to render non-interactive HTML forms using dynamic content (customer info). The email appears OK in all clients except Outlook 2007.
    The floating text and text fields appear multiple times on the form (scattered around their placement point), and the subform fragments appear with borders surrounding them.
    I have heard that there are issues with Outlook 2007 not displaying HTML correctly. Is there a way to setup the Text Fields/Subforms in Designer so they display correctly in Outlook 2007?
    Thanks!

    Create a new profile as a test to check if your current profile is causing the problems.
    See "Basic Troubleshooting: Make a new profile":
    *https://support.mozilla.org/kb/Basic+Troubleshooting#w_8-make-a-new-profile
    There may be extensions and plugins installed by default in a new profile, so check that in "Tools > Add-ons > Extensions & Plugins" in case there are still problems.
    If the new profile works then you can transfer some files from the old profile to that new profile, but be careful not to copy corrupted files.
    See:
    *http://kb.mozillazine.org/Transferring_data_to_a_new_profile_-_Firefox

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration between the two is pretty much the same and the problem therefore seems generic for the setup.
    Within a non-global zone I'm running a tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens we experience that the application slows down and swaps out (as shown below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem, or at least didn't prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation of the database with "vacuum analyze" took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I'm trying different steps but running out of ideas. We've read that the database block size and file system block size should match: PostgreSQL uses 8 KB, while the ZFS default recordsize is 128 KB. I didn't find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations and ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help appreciated and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME                                           USED  AVAIL  REFER  MOUNTPOINT
    rpool                                         16.6G   257G   106K  /rpool
    rpool/ROOT                                    4.47G   257G    31K  legacy
    rpool/ROOT/s10s_u10wos_17b                    4.34G   257G  4.23G  /
    rpool/ROOT/s10s_u10wos_17b@patch20130408      3.12M      -  4.23G  -
    rpool/ROOT/s10s_u10wos_17b/var                 113M   257G   112M  /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408   864K      -   110M  -
    rpool/ROOT/patch20130408                       134M   257G  4.22G  /.alt.patch20130408
    rpool/ROOT/patch20130408/var                  26.0M   257G   118M  /.alt.patch20130408/var
    rpool/dump                                    1.55G   257G  1.50G  -
    rpool/export                                    63K   257G    32K  /export
    rpool/export/home                               31K   257G    31K  /export/home
    rpool/h                                       2.27G   257G  2.27G  /h
    rpool/security1                               28.4M   257G  28.4M  /security1
    rpool/swap                                    8.25G   257G  8.00G  -
    tank                                          12.9G   261G    31K  /tank
    tank/swap                                     8.25G   261G  8.00G  -
    tank/zones                                    4.69G   261G    36K  /zones
    tank/zones/DB                                 1.30G   261G  1.30G  /zones/DB
    tank/zones/DB@patch20130408                   1.75M      -  1.30G  -
    tank/zones/DB-patch20130408                   22.3M   261G  1.30G  /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP                                3.34G   261G  3.34G  /zones/APP
    tank/zones/APP@patch20130408                  2.39M      -  3.34G  -
    tank/zones/APP-patch20130408                  27.3M   261G  3.33G  /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question and people can pay attention to you and not confuse you with that other person.

  • ZFS and Windows 2003 Diskpart

    I was given some space on a Thumper that has ZFS drives. I connect from a Windows 2003 server using iSCSI. I was running out of space, and the admin gave me more space, which ended up with me losing the drive, but it came back (is that normal?). When I went to use diskpart to extend the drive to the additional space, it would not work. Can I not use diskpart to extend the drive size, or do I need to do something additional?
    Thanks for your help.

    Earl,
    I'm stuck with the 2003 itunes install problem, too. Can you post or email your solution?
    THANKS,
    Glenn

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do this and make it easy: global zone root on UFS, and another disk all ZFS, where all non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (x4200's, not utilizing an array)
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...)
    Dave

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement!:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems..
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then...)
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full root ZFS support was being targeted for update 4, not update 3.
    Darren

  • Zfs and encryption

    We are looking for a filesystem-level encryption technology. At this point most of our services are on ZFS. At one time I saw encryption on the roadmap for ZFS features. Where does this sit now?
    Are there test-bed versions of OpenSolaris where we can test this?
    Is it known if and when ZFS encryption will be in Solaris 10 or beyond?
    Thanks.

    I don't believe that the feature is ready yet, but you may find some more information about the project here: [http://hub.opensolaris.org/bin/view/Project+zfs-crypto/]
    You would probably also be better off asking for a status on the forum/mailing list for the project: [http://opensolaris.org/jive/forum.jspa?forumID=105]
    Edited by: Tenzer on May 11, 2010 9:31 AM

  • ZFS and grown disk space

    Hello,
    I installed Solaris 10 x86 10/09 using ZFS in vSphere, and the disk image was expanded from 15G to 18G.
    But Solaris still sees 15G.
    How can I convince it to take notice of the expanded disk image? How can I grow the rpool?
    I searched a lot, but all documents give answers about adding a disk, not about space that is additionally allocated on the same disk.
    -- Nick

    nikitelli wrote:
    if that is really true what you are saying, then this is really disappointing!
    Solaris can do so many tricks, and in this specific case it drops behind Linux, AIX and even Windows?
    Not even growfs can help?
    Growfs will expand a UFS filesystem so that it can address additional space in its container (slice, metadevice, volume, etc.). ZFS doesn't need that particular tool; it can expand itself based on the autoexpand pool property.
    The problem is that the OS does not make the LUN expansion visible so that other things (like the filesystems) can use that space. Years and years ago, "disks" were static things that you didn't expect to change size. That assumption is hard-coded into the Solaris disk label mechanics. I would guess that redoing things to remove that assumption isn't the easiest task.
    If you have an EFI label, it's easier (still not great), but fewer steps. But you can't boot from an EFI disk, so you have to solve the problem with a VTOC/SMI label if you want it to work for boot disks.
    Darren

  • Where can I find the latest research on Solaris 10, zfs and SANs?

    I know Arul and Christian Bilien have done a lot of writing about storage technologies as they relate to Oracle. Where are the latest findings? Obviously there are some exotic configurations that can be implemented to optimize performance, but is there a set of "best practices" that generally works for "most people"? Is there common advice for folks using Solaris 10 and ZFS on SAN hardware (i.e., EMC)? Does double-striping have to be configured with meticulous care, or does it work "pretty well" just by taking some rough guesses?
    Thanks much!

    Hello,
    I have a couple of links that I have used:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases
    These are not exactly new, so you may have encountered them already.
    List of ZFS blogs follows:
    http://www.opensolaris.org/os/community/zfs/blogs/
    Again, there does not seem to be huge activity on the blogs featured there.
    jason.
    http://jarneil.wordpress.com

  • Setting initial focus amidst multiple templates, task flows, and fragments.

    Hi Guys,
    Using JDev 11.1.1.4.
    I've got page fragments for which I'd like to set the initial focus. The issue is that oftentimes these .jsff pages are nested within page templates, train templates, dynamic regions, etc. As far as I know, the initial focus for a component is set on the document. The trick is finding out what all the prefixes are before the .jsff component.
    pt1:dynamicRegion:3:pt1:t1:0:it1
    Is there any easy way to figure out all the prefixes before this :it1, which can oftentimes be dramatically different?
    Would the easiest way be to set the initial focus to say "defaultFocus" and then have every .jsff have a component id called "defaultFocus"? Feels like cheating, but any other way I can think of seems way too complicated.
    Thanks,
    Will

    The method we use is mainly programmed by: Marianne Horsch.
    So again, 1 page:
    <af:document ...  initialFocusId="#{backingBeanScope.bolsysPageBean.initialFocus}">
    <af:form .... defaultCommand="#{backingBeanScope.bolsysPageBean.defaultCommand}">
    Within the body of this page a dynamic region is defined; this is all that is ever refreshed.
    Bean (not all logging etc. removed):
      private static final String DEFAULT_COMMAND_ATTRIBUTE = "defaultCommand";
      private static final String INITIAL_FOCUS_ATTRIBUTE = "initialFocus";
      private String defaultCommand;
      private String initialFocus;
      public BolsysPageBean() {
        super();
        initPage();
      }
      public final void initPage() {
        List<UIComponent> childrenList = getPageChildrenList();
        if (!childrenList.isEmpty()) {
          UIComponent defaultCommandComponent =
            UIComponentUtils.findComponentWithAttribute(childrenList, DEFAULT_COMMAND_ATTRIBUTE);
          if (defaultCommandComponent != null) {
            defaultCommand = defaultCommandComponent.getClientId(FacesContext.getCurrentInstance());
          }
          UIComponent initialFocusComponent =
            UIComponentUtils.findComponentWithAttribute(childrenList, INITIAL_FOCUS_ATTRIBUTE);
          if (initialFocusComponent != null) {
            initialFocus = initialFocusComponent.getClientId(FacesContext.getCurrentInstance());
          }
        }
      }
      private List<UIComponent> getPageChildrenList() {
        if (FacesContext.getCurrentInstance() != null && FacesContext.getCurrentInstance().getViewRoot() != null) {
          return UIComponentUtils.getAllChildComponents(FacesContext.getCurrentInstance().getViewRoot());
        }
        return Collections.<UIComponent>emptyList();
      }
      public String getDefaultCommand() {
        return defaultCommand;
      }
      public String getInitialFocus() {
        return initialFocus;
      }
    Util code:
      public static List<UIComponent> getAllChildComponents(UIComponent root) {
        List<UIComponent> list = new ArrayList<UIComponent>();
        if (root.getFacetCount() > 0) {
          Map<String, UIComponent> facetMap = root.getFacets();
          for (Map.Entry<String, UIComponent> entry : facetMap.entrySet()) {
            UIComponent facetComponent = entry.getValue();
            list.add(facetComponent);
            if (facetComponent.getChildCount() > 0 || facetComponent.getFacetCount() > 0) {
              list.addAll(getAllChildComponents(facetComponent));
            }
          }
        }
        list.addAll(getOwnChildren(root));
        return list;
      }
      private static List<UIComponent> getOwnChildren(UIComponent root) {
        List<UIComponent> list = new ArrayList<UIComponent>();
        if (root.getChildCount() > 0) {
          for (UIComponent child : root.getChildren()) {
            list.add(child);
            if (child.getChildCount() > 0 || child.getFacetCount() > 0) {
              list.addAll(getAllChildComponents(child));
            }
          }
        }
        return list;
      }
    The dynamic region is based on a backing bean as well.
    As I said before, when you want it right use beans (:
    -Anton
