Looking for opinions, experiences

Hello.
I'm evaluating Contribute for my company's needs, and while
I've gone through all the marketing collateral, I wanted to get
some feedback from the users in the field, as it were. What do you
love about Contribute? What do you think could be improved? Does it
really save you time by taking extraneous steps out of the web
editing workflow, or is it simply replacing a single step?
Is the drafting mode a WYSIWYG? How much control over
the HTML does Contribute allow?
I really just want to know what the experience is like
for the people actually using it.
Thanks!
-screen

Mr. or Mrs. Weasel (or shall I call you Screen?) <sorry,
too much coffee this morning>,
Yes, the drafting mode is (mostly) WYSIWYG, but Contribute 3
(C3) has some (limited) trouble rendering CSS, so YMMV. I run our
corporate intranet (with ~30 departmental users), and giving people
a (non-technical) solution to edit pages is well worth the hassle.
If you want the users to do anything complex within C3, forget it.
Basic text, images, and hyperlinks are easy enough to do within
Contribute.
Contribute gives me enough control over what users are doing
to be more of a net benefit than a hassle. At our company,
independent thinking is highly encouraged, which is good, but it is
bad for my goal of a uniform intranet web experience.
As far as control over the HTML, I have not found a way for
the users to see HTML code (I don't think they can). If you need
users to modify actual HTML code, then DW is the way to go. C3 uses
CSS that you set up, and you can hide the styles that you don't
want visible (by prefixing "mmhide" to your CSS elements) - I wish
DW had the same when creating pages from templates.
I think that C3 is good (and worth implementing), but C4 or
C5 will be the true product to watch (and implement with DW).
Hope this helps.
Mark

Similar Messages

  • Looking for opinions

    Hi All,
    I am looking for any opinions on the WebAssist Web Developer suite. Any comparisons to the interAkt MX_Kollection or the Adobe DDT would be great.
    Thanks.
    Chris

    I upgraded the validation extension last year only to find many of the server validations didn't work, along with a host of other issues. I have dropped them a line a few times requesting info as to when they might have another release, but WebAssist don't seem to inform their customers of such things until the day of release.
    kenny
    "Murray *ACE*" <[email protected]> wrote in message news:f8398r$l1a$[email protected]:
    > Buggy? I don't think so. Certainly not in my experience. But, as you
    > know, one person's bug is another's <shrug>.
    >
    > --
    > Murray --- ICQ 71997575
    > Adobe Community Expert
    > (If you *MUST* email me, don't LAUGH when you do so!)
    > ==================
    > http://www.dreamweavermx-templates.com - Template Triage!
    > http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    > http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    > http://www.macromedia.com/support/search/ - Macromedia (MM) Technotes
    > ==================
    >
    > "P@tty Ayers ~ACE" <[email protected]> wrote in message news:f837nh$j79$[email protected]:
    >>
    >> "Murray *ACE*" <[email protected]> wrote in message news:f82mir$rt9$[email protected]:
    >>
    >>> Funny - I know lots of people who use them, including myself (I use
    >>> them all the time), and have never heard such comments from them.
    >>
    >> I've heard more than a few comments along those lines, Murray,
    >> unfortunately.
    >>
    >> --
    >> Patty Ayers | Adobe Community Expert
    >> www.WebDevBiz.com
    >> Free Articles on the Business of Web Development
    >> Web Design Contract, Estimate Request Form, Estimate Worksheet

  • Looking for opinions: Is AppleCare worth paying for?

    Within the next three weeks we have to decide whether or not to invest around €1,000 ($1,200 USD) in three years of AppleCare support for our company's Xserve.
    So far I have had mostly bad experiences with the 90-day support Apple provides (I can only speak to the German support, not sure about others):
    Most questions were answered by referring me to knowledgebase articles not related in any way to my original issue. I was also often told that a problem is known but cannot be supported, since it requires command-line access that is not covered by the support terms, or that they would call me back (I had to call back every single time). One time I was even advised to visit this forum.
    During my last phone calls I also had the special pleasure of talking to a professional speaking some kind of "Gerglish" (German and English mixed). It was quite hard to get even an idea of what he was saying, so after explaining my issue three times, I gave up. By the way, he also promised to send me knowledgebase articles - I never received them.
    Only very few support calls were handled professionally, so I had a really hard time getting anything set up and configured and working around the bugs Apple ships with the servers. Many thanks to this forum - Apple should charge for using it and I would be the first to pay (hope they don't read this and take it seriously). The people here are much more helpful than all the German support representatives together.
    But back to the topic: I would like to hear some user experiences with the three-year extended support, especially from Germany-based people (hope there are some around). Good as well as bad ones.
    Hopefully that gives me a better overview of whether it is worth spending the money, or whether I am better off diving into OS X Server and educating myself.

    First, realize that the three-year maintenance agreement is only a hardware maintenance agreement. It is not for assisting you in setting up your server or working around Apple's bugs. Also realize that your experience in Germany may be different from my experience in the U.S.
    That said, for us it has been worth it. We have had four incidents with our Xserve G5, and, in each case, I have been impressed with the telephone support we have received from the Enterprise Server Support Group in Austin, Texas.
    The first case was an apparent manufacturing error: rather than the 250 GB ADM drive that had been ordered, invoiced, and paid for, the server arrived with an 80 GB drive. A replacement drive was overnighted and arrived the next day.
    In the second case, our Apple Hardware RAID card had its firmware settings become corrupted in an odd manner where it still functioned perfectly as two RAID 5 LUNs, but the Xserve would not boot from it. This happened during replacement of the BTO Apple SCSI card (really a rebranded LSI Logic 22320) in the adjacent slot, which never worked right with our backup software and was replaced by an ATTO UL4D that has never had problems. Because no firmware file was available for re-flashing the card (even though I believed, and still believe, that the card itself was OK), a replacement Apple Hardware RAID card was overnighted, received the next day, and solved the problem with the RAID card.
    The third instance was when the ribbon cable between the Apple Hardware RAID card and the drives was suspected of being intermittent; no questions asked, a replacement was sent.
    The fourth instance was when we got intermittent ECC errors. Austin support had us interchange two DIMMs to see whether the errors followed the DIMM (in which case they would send a replacement DIMM) or stayed put (in which case they would send a motherboard). It turned out to be the DIMM, and a replacement was sent.
    I still have an open case on the Apple Hardware RAID card that has been open for over a year, now. The card has a bug in that it does not fully flush its write cache on graceful power down. Only workaround is to turn the write caches off, which is what we have done. Because the Xserve G5 (and its Apple Hardware RAID card) is EOL, I doubt that this bug will ever be fixed. But the hardware is broken (unless you think it is normal for a disk controller card not to flush its write cache on graceful power down before disconnecting from the drives).
    My only complaint is with the local on-site subcontractor. There are not many Xserves in our city. When the Apple Hardware RAID card failed, we had to make numerous follow-ups to get the local service person to contact us following the initial incident report with Austin, even though the server would not boot from the Apple Hardware RAID (we kept the server up by booting to an external firewire drive). When the local on-site repair person finally contacted us, I was not impressed by his total lack of knowledge about the Xserve, and I do not believe that he had ever seen an Xserve, much less ever having been trained on the product. I told him that I did not want him touching our server, relayed that information to Apple's support group in Austin, and they just overnighted the replacement Apple Hardware RAID card to me for replacement by me.
    So, if you are expecting setup assistance or assistance working around bugs in the software, you aren't going to get it. It's strictly hardware repair. If you are looking for bug reporting or workarounds, the best route there is to get an Apple Developer membership (they are free if you get the entry level that doesn't provide prerelease software) and to file bug reports via RADAR.
    Hope this helps,
    Russ
    Xserve G5 2.0 GHz 2 GB RAM   Mac OS X (10.4.8)   Apple Hardware RAID, ATTO UL4D, Exabyte VXA-2 1x10 1u

  • Looking for HA experiences for SAP on IBM i

    Hi,
    We are very experienced in disasters. It seems impossible, but it is true: in the last three years we have suffered three major outages, with big downtimes (from 13 to 36 hours) in our main production system, our R/3, which is supposed to be a 24x7 system.
    We started our HA architecture several years ago, using MIMIX to do a logical replica of our production database. It worked, but it needed a lot of administration at the time. So we moved to a hardware replica, using DS8000 storage subsystems and moving SAP to an iASP; we also use the Rochester Copy Services toolkit to manage this landscape. At first we used MetroMirror, a synchronous protocol, because we still owned our machines and they were physically close. Recently we evolved our HA architecture, moving our systems to different sites of an outsourcing partner, and were forced to change the replication protocol to GlobalMirror, which is asynchronous, because of the distances.
    We know what a 'rare' hardware failure on a storage subsystem looks like: ours started writing zeroes to a disk with practically no detection, and lots of SAP tables were damaged. We have also suffered a human error that deleted an online disk and killed the whole iASP (the first real tape recovery of my life). And finally, we know what a power failure in our partner's technical room means. Imagine all these failures with a database that is 3 TB big. Do you know how much time is needed to restore from tape and run APYJRNCHGX or rebuild access paths? I know...
    As you can imagine, we have invested a lot of money trying to protect our data, and it has worked in the sense that we have never lost a single bit of information, but our recovery times are always far longer than the ones we need.
    I'm looking for experiences of how other SAP on IBM i customers are managing HA in their critical systems, and if possible to compare real experiences of similar outages. What are we doing wrong? We cannot be the only ones...
    Regards,
    Joan B. Altadill

    Hi Joan,
    We run MIMIX replication for our ERP system/partition and 4 other partitions with BW, Portal, PI, SRM, and Solution Mgr in them.  There is some administration but it has been worth it for us.  We have duplicate 570 hardware in an offsite DC 35 miles away for failover.  We also do our backups on the replicated systems.  We have been running MIMIX since going live with SAP in 1998.
    Several years ago we used MIMIX replication to migrate to new servers during lease replacement which cut our migration downtime from 8 hrs for backup/restore to about 1 hr while we shut down the system on old servers, started up ERP system on new servers and checked all the interface connections.
    But the real payoff came in March this year when our production server went down hard during a hot maintenance procedure.  We were able to MIMIX switch to our DR server in under 1 hr and the business ran on the DR server for two weeks, while we reverse-replicated, then we switched back.
    We have subsecond replication so we did not lose any data and there were no incomplete transactions on the DR side after the switch.   MIMIX paid for itself, including administration, in that one incident.
    Hope this helps,
    Margie Teppo
    Perrigo Co.

  • Looking for opinions on daemontools vs runit.

    I have used daemontools quite extensively, and for quite a while.
    I am curious if runit is worth switching over to, and how it compares to tried and true daemontools.
    I am considering it, due to daemontools not being very well supported in some cases.
    Does anyone have real world experience with both runit and daemontools, and could provide a comparison?
    Is it worth switching, or is daemontools still the best option out there?
    Last edited by cactus (2011-03-17 20:38:37)

    opinions and observations so far:
    runit's runsvdir only shows the stderr from a ./service/run script in the proctitle ps output. I found this an odd difference, as daemontools' readproctitle shows both stdout and stderr.
    Similarly, runsv redirects only stdout to a log service running via ./service/log/run, not stderr (which goes to readproctitle). This was also somewhat unexpected. Granted, it is very common when using a logging service to redirect stderr to stdout with an `exec 2>&1` at the start of a ./service/run script... but I still found it another oddity.
    runit has a cool hack for process dependency support.
    #!/bin/sh
    set -e
    sv -w7 check postgresql
    exec some/app/with/a/dep/on/postgresql
    runit's sv resolves a service name argument against SVDIR for you. So you can be anywhere and say `sv up serviceName` and `/etc/service/serviceName/run` is started. daemontools' svc requires the full path to the service directory, such as `svc -u /etc/service/serviceName`.
    runit's sv has a few more signals it can send a supervised process (SIGUSR1, SIGUSR2, SIGQUIT, etc.) as opposed to daemontools' svc.
    Instead of separate programs for setuidgid, envuidgid, etc. like daemontools has, runit has a single chpst command that accepts arguments to achieve the same goals.
    runit's sv status output is much nicer/far more informative.
    runit has man pages!
    Last edited by cactus (2011-05-13 04:01:21)

  • Looking for opinions & advice on Mac G5 -- television setup

    Hi everyone!
    I just moved into a new home, have gotten a great new surround system, and am just about to invest in a new, modern television as well.
    I have a Mac G5 dual 2 GHz that I would love to be able to hook up to the new TV set somehow. I realize that Front Row isn't supposed to work on my G5, and it's really not necessary per se, I guess - but if there is a way to do it, that would be my goal.
    I have a remote control for the G5 already that's working great in DVD player and iTunes.
    The questions:
    =========
    1) Can I connect somehow directly from the G5 to a modern television set using the second output from the built-in video card? (My main output goes to a Cinema Display now.) Or do I need a separate video card?
    2) What television set is recommended? (40" or bigger preferred)
    3) The G5 is pretty far away from the television area and the surround sound receiver. Is there a way to get a really long optical cable somewhere? I don't seem to be able to find optical cable longer than 12 feet - is there some kind of technical limit?
    4) Should I just forget the G5 altogether and get a Mac mini for all TV purposes?
    5) Whatever happened to that iTV product thing that Steve Jobs was talking about?
    All opinions/comments welcome! Thanks!

    Martnlindhe,
    I chose to purchase a MacMini Intel model to hookup to my 36 inch Sony TV.
    I chose this model for several reasons as I also have a Web Camera hooked up to it at the same time via Firewire. I use Front Row and iTunes to distribute music throughout the house wirelessly to other stereos and to view what is playing now via the TV.
    Here are some photos of the MacMini hooked up to the TV and using Front Row.
    http://users.sisqtel.net/jkriz/MacMini/OSX.jpg
    http://users.sisqtel.net/jkriz/MacMini/FrontRow.jpg
    http://users.sisqtel.net/jkriz/MacMini/frontrow_albumcovers.jpg
    The photos aren't that great but you get the idea. The TV screen looks great in person as opposed to the photos with the glare and the darkness.
    Some TVs don't look as good as mine, from what I have heard, so your TV may or may not look as good. I am about to upgrade to a 42 inch Panasonic HD Plasma TV, which should give an even better picture. At this time, my TV is connected to the MacMini via an S-Video cable. An HDMI video cable is a better choice if your TV has that capability.
    The new iTV from Apple is rumored to be released sometime this coming Spring.

  • Looking for opinions, flow logic versus MVC

    We are about to start our second major BSP project, so we are relatively new to BSP. Our first project was stateful, MVC-driven, and works great. The new application will have up to 50 simultaneous users doing very quick asynchronous tasks (data collection, essentially), but where I want to maintain a session, so I am making it stateless so that resources are not tied up, using a backing table in SAP to hold state information combined with client-side cookies.
    My question is: in a stateless environment, what advantage, if any, is there to using MVC? It seems like a lot of work to maintain the model for each session request. Is it normal to use MVC for both stateful and stateless, or is flow logic more standard for stateless? I looked around quite a bit in the wiki, blogs, etc., and don't really see a lot of examples using MVC that are stateless.
    Any advice here would be appreciated, before we chase ourselves down a hole.

    Hi David,
    I totally agree with you that MVC gives you nothing for stateless applications.
    When I build BSP applications I try to always...
      .... go stateless
      .... use page model - not MVC
      .... assign application class and do all backend processing there
    For persistence I typically use server-side cookies - these are managed by the CL_BSP_SERVER_SIDE_COOKIE class.
    In my application class I get the server-side cookie in the IF_BSP_APPLICATION_EVENTS~ON_REQUEST method and I save it in the IF_BSP_APPLICATION_EVENTS~ON_RESPONSE method.
    The appropriate checking of a sequence number on the server-side cookie can mitigate against users hitting refresh buttons, back buttons, etc.
    Cheers
    Graham Robbo

  • SCCM Design Looking for Opinions/Tips

    Hello out there I'm in the middle of designing our SCCM 2012 R2 layout and was wondering if I could get some Opinions/Tips on anything before we
    implement.
    Primary Site
    Site System Server 1 Roles:
    Asset Intell
    Endpoint
    Windows Intune
    Site Database (Local DB)
    Reporting Services
    Site System Server 2 Roles
    Management Point
    Source DP
    Enrollment Point
    Certificate RP
    Site System Server 3 Roles
    Management Point
    Pull DP
    SMP
    EP
    Site System Server 4 Roles
    Management Point
    Source DP
    SMP
    EP
    Site System Server 5 Roles
    Management Point
    Pull DP
    SMP
    EP
    Servers 1-3 would be located in one location and Servers 4-5 in another primary location; the bandwidth between the two sites will be a Gig connection, so there should not be a problem with low bandwidth. The other idea is to have a SQL replica in a HAG that the MPs will connect to, and have the SQL replica be the one connection back to the local Site Database. I also plan on putting the App Catalog/Web Point/SUP roles on our load balancer. The idea behind this design is high availability.
    If you have any ideas/opinions, I'm open to all criticism, and if you have any more questions I'll do my best to answer.

    What is your total count of clients that you want to support using ConfigMgr? One primary Management Point can support up to 25K clients. Make the design as simple as possible, which you can with ConfigMgr 2012, to reduce the hierarchy structure.
    Eswar Koneti | Configmgr blog: www.eskonr.com | Linkedin: Eswar Koneti | Twitter: Eskonr
    Sorry, I guess I left that out. We currently have about 500 workstations and 200+ servers, and the number of tablets is growing; they are expecting to have a tablet per user, which will be another 500. The new site will have approximately 300-400 additional workstations/tablets.
    I do understand what you guys are saying, but they want to treat this as a system critical to the business. I can always take the additional server out of the picture for a simpler hierarchy, but some performance tips I read said to offload from the Site DB as much as possible.

  • Looking for Opinions: custom folders vs. workbk sheet as a subquery

    Hi Everyone,
    Hope all is well....
    I have in the past few months asked the Discoverer administrator to create custom folders for complex SQL situations.
    Another colleague feels that a better approach, which would avoid maintenance by IT, is to use the Discoverer feature that allows you to subquery another sheet in the Discoverer workbook.
    What is your opinion?
    - which approach has better performance
    - what happens if you need many sheets for many subqueries
    - if you have to join the tables anyway to get the actual field info, would performance be slow?
    tx for your ideas and opinions....
    sandra

    Hi Sandra,
    First I must say that personally I do not use the subquery approach, due to our implementation: we do not use Discoverer Desktop, we use Plus, and Plus does not have this functionality.
    > which approach has better performance
    With a subquery you will not be able to tune the query, while with a custom folder you can examine the SQL and tune it to perform better.
    > what happens if you need many sheets for many subqueries
    A large number of sheets can cause performance issues.
    You are right that it is easier to work without depending on IT, so my best answer is: if the results are satisfying, use the subquery method, and where you have lots of sheets to depend on, or a very complex query, have IT create the custom folder.
    Tamir

  • Looking for Opinions on a base Mac Pro system for running FCP.

    I am being asked to make a recommendation for acquiring a Mac Pro to use for video production, and I need advice. I am not sure what I should recommend. I suppose almost any Mac Pro, even the lowest model, would suffice for my needs right now. But I could sure use some input based on what we need to do now, and where we need to go.
    Right now, I want a system for Web video production. Not sure what camera will be used, but let's assume high def for now. The delivery format will likely be Flash video. And none of the pieces would be over 3 minutes in length. I'd think that was well within the capabilities of even the most basic Mac Pro, but I am not sure.
    Now, let's say I want to eventually expand out to make 30 second TV spots in high definition. Assuming I have access to a good deal of storage space and I am not looking to do 30 layers of video each with 10 effects (more like 3 max with just a few effects), can I still get away with this system? Would I need a beefier video card? Or some type of external RAID storage?
    Money is and +is not+ an object in this case. I'd rather spend a little more cash now to accommodate my future needs (say, by getting a more robust video card and more memory), but I don't need to fully max out the system either for what seems to be rather modest expectations.
    If I can provide any other explanations or define any further constraints, please ask. Thanks for any help you can lend.

    You should repost in the [Final Cut Pro forum|http://discussions.apple.com/forum.jspa?forumID=939]. It's much busier than here, so you will get far more help.

  • Looking for opinions on Apple's software RAID in Disk Utility

    I'm thinking about making a striped RAID (RAID 0) with Apple's software RAID in Disk Utility. How reliable is it, and will I see a huge speed improvement over just having two independent drives? This RAID will also be my boot drive. Is that a good idea, or should I have a separate drive for that function?

    It depends. Seeing as you don't have a lot of exposure to RAID: it depends on how you use it, it depends on the drives to some extent, and you need regular backups even more than before. RAIDs are great for some things, and the sustained reads and writes look good in some simple benchmarks, but most users are better off with a really good, fast 10K Raptor or similar as a dedicated OS/apps boot drive.
    http://www.barefeats.com/quad07.html
    They did some earlier tests on RAID for boot drive, which you might want to look into.

  • Looking for opinions on tablets

    I am looking to buy a tablet and would like some opinions on which one to buy. Price range is between 200 and 350.

    Some thoughts....
    iPads are out of your budget, unless you are buying used.
    bigger screens offer better readability
    smaller screens offer better portability
    fast processors get things done quicker
    slower processors offer better battery life
    do you want stereo speakers to better watch your videos?
    I like my Blackberry Playbook

  • Looking for opinions on best new laptops for design work using Adobe Creative Cloud

    Any thoughts on purchasing this device for design work with Adobe Creative Cloud?...HP 15-g077nr 15.6" Notebook with AMD A6-6310 Processor & Windows 8.1

    erinf48698046 wrote:
    4 GB DDR3 SDRAM system memory
    Gives you the power to handle most power-hungry applications and tons of multimedia work
    No. That's wrong. Whatever website you're looking at is out of date. I wouldn't suggest people run AE on less than 16 GB of RAM...
    Anyway, you're not asking about After Effects!
    For Photoshop, 8 GB is recommended. It will run on 4, but...8 is recommended. See this page: System requirements | Photoshop
    Same with InDesign: System requirements | InDesign
    Illustrator, on the other hand, runs just fine with 4 GB: System requirements | Illustrator
    Any other system requirements can be found here: System requirements | Creative Cloud

  • Looking for opinions - anyone using the iPhone in NYC?

    I am NOT an iPhone owner yet. But, have been considering buying one. I live/work in NYC and am using a Treo 650 with T-Mobile. The service is so poor that I no longer have a signal in my office or my apt. I am seriously considering the iPhone and wanted to know how other NYers are managing with the iPhone and AT&T service. Would love to hear your opinions.
    Thanks!

    I live in Manhattan on the east side, near Gramercy.
    I was previously on T-Mobile. I had almost no signal in my apt. with T-Mo. With AT&T, I have full bars. However, in the ad agency I freelance in on occasion, my signal with AT&T is worse.
    But that may mean nothing to you. What it comes down to is where you live/work and your proximity to an AT&T tower in those locations. Unless you can get testimonials from another user(s) in your exact living/working location(s), no amount of anecdotal evidence from other people anywhere else around the city will guarantee you a good or bad signal. This is especially true of the EDGE network, which varies from block to block. You can turn a corner and drop/gain 100kbps.
    My recommendation would be to find a neighbor or co-worker on AT&T, take the sim card out of their phone, put it in yours and note the signal quality. Be sure to do this as close as possible to where you live/work. Using your phone instead of theirs will provide you with a test control. Of course, if you could get your hands on a friend's iPhone, even better.

  • Filtering TreeModels, Looking for opinions

    Hi all,
    I have a rather strange requirement for a JTree/TreeModel that I was hoping that the java developer connection could help me with.
    The problem goes like this.
    We have a TreeModel, which is needed in its completeness for our internal usage within the UI, but some of the JTrees need the ability to "remove" or "filter" out certain levels of treenodes. For example:
    A
    -B
    --C
    ---D
    ---E
    might become
    A
    -B
    --D
    --E
    in the "filtered" TreeModel view. I was wondering if anyone else had this problem, and if so, how did they solve it? (I was thinking of applying a kind of "filter" pattern, but this would require multiple tree models, I think. Perhaps there is a better way)..
    Any help you can provide is appreciated.

    Just thought you'd like to know, this approach appears to be working now, after some tinkering and debugging..
    The hardest part of the operation was "re-directing" the events to remove parts of the path that were not appropriate, thus ensuring the tree is updated correctly..
    Also, found a small bug in your method. ;) It should really be:
    public int getChildCount( Object parent ) {
        int count = delegate.getChildCount( parent );
        int result = count;
        for ( int i = 0; i < count; i++ ) {
            Object child = delegate.getChild( parent, i );
            if ( isNodeFiltered( child )) {
                // a filtered child is replaced in place by its own children
                result += getChildCount( child ) - 1;
            }
        }
        return result;
    }
    since we can't use the growing result as the loop bound while iterating without getting an ArrayIndexOutOfBoundsException.
    I'm probably not allowed to post the entire source code since it was developed for work, but the hardest method (the re-direction of the events) goes like this:
    o Obtain the new list of children from the old list of children in the TreeModelEvent. This is determined by running the filter on each one to see if they are "in" or "out".
    o If there are no children left in the new list, throw the event away, it is inconsequential.
    o Otherwise, obtain the "new" parent from one of the new children, since the parent might be filtered out as well. This involves writing a method which returns the first non-filtered parent of the current node.
    o For each of these new children, obtain their indices respective to their new parent. This can be accomplished by use of a "getChildren" method very similar to your getChildCount method (which returns an ArrayList), and then a search through the list to find the matching child..
    o Place the new indices in a new event, along with the new children and new parent
    o include the new source of the event ("this" in this case).
    ... voila, instant new event, which can be re-fired. The code is the same for all of the tree nodes changed, inserted, removed, or structure changed.
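    For anyone wanting to experiment with the same filter-pattern approach, here is a minimal, self-contained sketch. The class and method names (FilteredTree, isNodeFiltered) are invented for illustration, not taken from the poster's code; a real version would implement TreeModel and re-fire events as described in the steps above. It uses the same counting logic as the getChildCount method discussed in this thread, plus a matching getChild that splices a hidden node's children into its parent:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.DefaultTreeModel;
import javax.swing.tree.TreeModel;

// Sketch of a filtering wrapper around a delegate TreeModel. A node in
// the "filtered" set is hidden: its children appear as direct children
// of its parent instead.
class FilteredTree {
    private final TreeModel delegate;
    private final Set<Object> filtered;

    FilteredTree(TreeModel delegate, Set<Object> filtered) {
        this.delegate = delegate;
        this.filtered = filtered;
    }

    boolean isNodeFiltered(Object node) {
        return filtered.contains(node);
    }

    // A filtered child contributes its own (filtered) child count
    // instead of counting as one child itself.
    int getChildCount(Object parent) {
        int count = delegate.getChildCount(parent);
        int result = count;
        for (int i = 0; i < count; i++) {
            Object child = delegate.getChild(parent, i);
            if (isNodeFiltered(child)) {
                result += getChildCount(child) - 1;
            }
        }
        return result;
    }

    // Child lookup that skips filtered nodes, descending into them.
    Object getChild(Object parent, int index) {
        int count = delegate.getChildCount(parent);
        for (int i = 0; i < count; i++) {
            Object child = delegate.getChild(parent, i);
            int span = isNodeFiltered(child) ? getChildCount(child) : 1;
            if (index < span) {
                return isNodeFiltered(child) ? getChild(child, index) : child;
            }
            index -= span;
        }
        throw new IndexOutOfBoundsException("index " + index);
    }
}

public class FilteredTreeDemo {
    public static void main(String[] args) {
        // Build the A / B / C / (D, E) tree from the question and hide C.
        DefaultMutableTreeNode a = new DefaultMutableTreeNode("A");
        DefaultMutableTreeNode b = new DefaultMutableTreeNode("B");
        DefaultMutableTreeNode c = new DefaultMutableTreeNode("C");
        DefaultMutableTreeNode d = new DefaultMutableTreeNode("D");
        DefaultMutableTreeNode e = new DefaultMutableTreeNode("E");
        a.add(b); b.add(c); c.add(d); c.add(e);
        TreeModel model = new DefaultTreeModel(a);
        FilteredTree ft = new FilteredTree(model,
                new HashSet<Object>(Arrays.asList((Object) c)));
        // B now appears to have D and E as direct children.
        System.out.println(ft.getChildCount(b));   // prints 2
        System.out.println(ft.getChild(b, 0));     // prints D
        System.out.println(ft.getChild(b, 1));     // prints E
    }
}
```

    As noted in the thread, the read-only half is the easy part; translating TreeModelEvents from delegate coordinates into filtered coordinates is where most of the work lies.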
