Monitoring Clusters

          I have an issue: I have created two server instances, clst and clst1, along
          with an administration server named srv. I have clustered clst and clst1 under
          the cluster name clust. After clustering them I deployed a .war to the entire
          cluster and saw that the two servers listen on two different addresses, such as
          http://ipaddress:7005 and http://ipaddress:7006. My query is: can I have a single
          port for both? If yes, how? If not, how can I monitor the work flow between the
          two ports, i.e. if I shut one down, how can I be sure the other is still working,
          since both have different ports? So how can I guarantee that clustering is working
          fine in this scenario? Can you please help me out in this matter? Waiting for your
          early response.
          

Prakash,
          "Prakash D Patil" <[email protected]> wrote in message
          news:40c44bf8$1@mktnews1...
          >
          > I have an issue: I have created two server instances, clst and clst1,
          > along with an administration server named srv. I have clustered clst and
          > clst1 under the cluster name clust. After clustering them I deployed a
          > .war to the entire cluster and saw that the two servers listen on two
          > different addresses, such as http://ipaddress:7005 and
          > http://ipaddress:7006. My query is: can I have a single port for both?
          Only if the IP addresses are different.
          >
          > If yes, how? If not, how can I monitor the work flow between the two
          > ports, i.e. if I shut one down, how can I be sure the other is still
          > working, since both have different ports?
          I'm not sure I understand your use of the term "work flow" above. If you
          mean the servers' handling of requests, then this can be accomplished by
          monitoring the servers from the admin console or by using MBeans.
          Monitoring is done regardless of port assignment.
          > So how can I guarantee that clustering is working fine in this scenario?
          Monitor the cluster using the admin console, or the ClusterRuntime, Server,
          and ServerDebug MBeans.
          > Can you please help me out in this matter? Waiting for your early response.
          http://e-docs.bea.com/wls/docs81/cluster/index.html
          HTH
          ~Ryan Upton
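
          If you simply want a sanity check that each managed server is still answering on
          its own port after you shut the other one down, a small script that probes both
          URLs works as well. This is only a rough sketch; the hostname, ports, and root
          path are taken from the question above, so adjust them to whatever your .war
          actually serves:

            # Probe each cluster member directly; an HTTP 200 means that instance is serving the .war.
            # Hostname, ports, and path are assumptions based on the original post.
            $members = @("http://ipaddress:7005/", "http://ipaddress:7006/")

            foreach ($url in $members) {
                try {
                    $resp = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 5
                    Write-Host "$url is up (HTTP $($resp.StatusCode))"
                }
                catch {
                    Write-Host "$url is NOT responding ($($_.Exception.Message))"
                }
            }

          As for a single client-facing port for the whole cluster: that is normally done by
          putting a load balancer or the WebLogic HttpClusterServlet proxy in front of the
          managed servers, which the clustering documentation linked above describes.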
          

Similar Messages

  • OWSM Policy Monitor - clustering

    One of the Oracle documents says that in an OWSM 10.1.x production deployment, the gateways and the PolicyManager can be load balanced (active/active) but the Monitor cannot (active/passive). Why is the Monitor not load balanced? How does the Monitor scale in a cluster environment?

    You may refer below blog for configuration -
    http://niallcblogs.blogspot.com/2010/07/osb-11g-and-wsm.html
    Regards,
    Anuj

  • SQL Server Agent Windows Service

    I have created an override setting the parameter "Alert only if service startup type is automatic" to false on the "SQL Server Agent Windows Service" monitor, as my SQL admin informs me that this service's startup type is Manual. Because the SQL servers are clustered, the even SQL Agent
    service runs on the even node and the odd SQL Agent service runs on the odd node. Should this resolve the issue my SQL admins are complaining about, namely that they were not receiving notifications when the SQL Agent stops? Or will it create noise, since I should then expect alerts from the odd
    SQL Agent service that is not running on the even node, and vice versa?
    From SQL Server end,
    Thanks, Harry :-)

    Hi,
    The below article should be helpful:
    Monitoring Clusters by Using Operations Manager
    http://technet.microsoft.com/en-us/library/hh212773.aspx
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.
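
    If you want to double-check outside of SCOM which node is actually running its SQL Agent (and with what startup type), a quick sketch like the one below can help. The node names and the service name SQLSERVERAGENT are placeholders (named instances use SQLAgent$InstanceName):

        # Query each cluster node for its SQL Server Agent service state.
        # Node and service names are assumptions; adjust for your cluster and instance names.
        $nodes = @("SQLNODE1", "SQLNODE2")

        foreach ($node in $nodes) {
            $svc = Get-Service -ComputerName $node -Name "SQLSERVERAGENT" -ErrorAction SilentlyContinue
            if ($svc) {
                Write-Host ("{0}: {1} (StartType: {2})" -f $node, $svc.Status, $svc.StartType)
            }
            else {
                Write-Host "${node}: SQLSERVERAGENT service not found"
            }
        }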

  • DeltaSynchronization Error 29181 InvalidCastException

    I started getting errors on the DeltaSynchronization but have not been able to determine the cause or how to fix it.
    I have already increased the timeout values in ConfigService.config on all the management servers, although I didn't think this would resolve the issue.
    Any help or suggestions would be appreciated.
    Below is the full error message:
    OpsMgr Management Configuration Service failed to execute 'DeltaSynchronization' engine work item due to the following exception
    Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.DataAccessException: Data access operation failed
    at Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.DataAccessOperation.ExecuteSynchronously(Int32 timeoutSeconds, WaitHandle stopWaitHandle)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.CmdbOperations.CmdbDataProvider.GetConfigurationDelta(String watermark)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.Engine.TracingConfigurationDataProvider.GetConfigurationDelta(String watermark)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.Engine.DeltaSynchronizationWorkItem.TransferData(String watermark)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.Engine.DeltaSynchronizationWorkItem.ExecuteSharedWorkItem()
    at Microsoft.EnterpriseManagement.ManagementConfiguration.Interop.SharedWorkItem.ExecuteWorkItem()
    at Microsoft.EnterpriseManagement.ManagementConfiguration.Interop.ConfigServiceEngineWorkItem.Execute()
    System.InvalidCastException: Specified cast is not valid.
    at Microsoft.EnterpriseManagement.ManagementConfiguration.CmdbOperations.EntityChangeDeltaReadOperation.ReadManagedEntitiesProperties(SqlDataReader reader)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.CmdbOperations.EntityChangeDeltaReadOperation.ReadData(SqlDataReader reader)
    at Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.ReaderSqlCommandOperation.SqlCommandCompleted(IAsyncResult asyncResult)

    Figured it out for my issue. A clustered server creates additional Windows computer objects with IsVirtualNode set to True, representing the cluster. It's an annoyance, but you need to know how to weed these out if your discovery runs against Microsoft.Windows.Server.Computer
    and your SCOM box is monitoring clustered servers.
    My discovery script tries to push the discovered object down to the agent, which is impossible for these weird virtual computer objects.
        $global:discoveryData.AddInstance($instance)
        # force the seed down to the agent
        # To force the RMS to re-assign the local agent as the managing agent for the discovered physical server object ($instance)
        # we have to get a reference to the local health service class and then create a SPECIAL SECRET relationship :)
        $oHealthServiceInstance = $global:discoveryData.CreateClassInstance("$MPElement[Name='SC!Microsoft.SystemCenter.HealthService']$")
        $oHealthServiceInstance.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", $PrincipalName)
        $global:discoveryData.AddInstance($oHealthServiceInstance)
        $oHsCnRel = $global:discoveryData.CreateRelationshipInstance("$MPElement[Name='SC!Microsoft.SystemCenter.HealthServiceShouldManageEntity']$")
        $oHsCnRel.Source = $oHealthServiceInstance
        $oHsCnRel.Target = $instance
        $global:discoveryData.AddInstance($oHsCnRel)
    It works fine if all your servers are unclustered, but for these virtual computer objects you can't push the object down to the agent; there isn't one. This seems to be the culprit behind the call stack described at the beginning of this discussion.
    So... to fix it (ignore the offending objects) ...
    If you're doing a filtered registry discovery, add this to your filter expression...
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <Value Type="String">IsVirtualNode:$Target/Property[Type="Windows!Microsoft.Windows.Server.Computer"]/IsVirtualNode$</Value>
              </ValueExpression>
              <Operator>NotEqual</Operator>
              <ValueExpression>
                <Value Type="String">IsVirtualNode:True</Value>
              </ValueExpression>
            </SimpleExpression>
          </Expression>
    The SCOM object returns NULL when the property is not True, so the best way to check is to prepend the $Target reference with some token text like 'IsVirtualNode:'. The resulting generated text will be 'IsVirtualNode:True' for the target objects
    we want to ignore, and just 'IsVirtualNode:' for the target objects we want to process (these are the non-virtual, real physical computers that you thought you were getting all along). If you do not prepend the $Target reference with 'IsVirtualNode:', I've
    seen SCOM not even evaluate the expression, presumably because it's trying to compare a null instead of a generated string (which is what you get with my hack).
    Ok, enough of that... If you're trying to do discovery with a script, you can do something like this, where you return early with an empty discovery payload and otherwise proceed with normal discovery:
        # SCOM substitutes the $Target/...$ token with the property value before the script runs,
        # so $isvirtualnode ends up as either "IsVirtualNode:True" or just "IsVirtualNode:".
        $isvirtualnode = "IsVirtualNode:$Target/Property[Type="Windows!Microsoft.Windows.Server.Computer"]/IsVirtualNode$"
        if ($isvirtualnode -eq "IsVirtualNode:True")
        {
            #Write-ErrorInfo "Tried to discover Active/Idle for $PrincipalName (IsVirtualNode: $isvirtualnode)"
            # return the empty payload back to SCOM and stop
            $discoveryData
            return
        }
        else
        {
            #Write-Info "Tried to discover Active/Idle for $PrincipalName (IsVirtualNode: $isvirtualnode)"
            # proceed with normal discovery
        }
    Hope this helps someone out there. I spent a lot of time learning about how SCOM deals with clusters a few years back. It was quite a painful experience trying to develop a cluster-aware SCOM pack using the minimal white papers that were available. 
    This piece of info that I'm sharing took some digging as it wasn't documented well in the white papers.

  • Server Monitoring with clustered instances

    Is anyone using the server monitor or multiserver monitor with
    clustered instances of ColdFusion? In CF 8.0.1 on Solaris, enabling
    monitoring produces a vast number of repeated errors of the form
    included below. This occurs on both clustered instances, as the
    instances are set up to replicate session data using J2EE session
    variables. The monitoring appears to work, but the frequency of the
    errors produced in the output log of *BOTH* cluster instances
    is extensive. These errors do not occur when monitoring the
    "cfusion" admin instance. Is this a product issue or a
    configuration issue?
    MM/DD HH:MM:SS error Setup of session replication failed.
    [2]java.io.StreamCorruptedException: unexpected end of block data
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1945)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1869)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
    at java.util.Hashtable.readObject(Hashtable.java:859)
    ...

    Dear Jan,
    I have already added the plugin, but while adding the target I am getting the error below. Can you please give me some idea on this?
    Test Connection failed: [_WinAuthDLLToLoadDynamicProp;em_error=DLL file 'D:\12c_agent\plugins\oracle.em.smss.agent.plugin_12.1.0.2.0\scripts\emx\microsoft_sqlserver_database..\..\..\..\dependencies\oracle.em.smss\jdbcdriver\sqljdbc_auth.dll' is found missing or not was never copied manually. Please copy amd64 version of sqljdbc_auth.dll at the above location and re-try, MSSQL_NumClusterNodes;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), WbemRemote_Determination_DynamicProperty;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), MSSQLInstance_TestMetric_DynamicProperty;Can't resolve a non-optional query descriptor property [dllFile] (dllFile), OSType_TargetHost_DynamicProperty;Can't resolve a non-optional query descriptor property [STDINWBEM_HOST] (ms_sqlserver_host), MSSQL_NumClusterNodes;Can't resolve a non-optional query descriptor property [dllFile] (dllFile)]

  • Monitoring a Clustered Resource

    Can you let me know if it's possible to monitor a service like SMTP on a
    clustered resource. I'd ideally like to know if SMTP has failed, even if the
    server is still up and running, but this resource could be on one of two
    servers, even though its IP would always be the same.
    I'm running a clustered environment and I'd like to check whether the GWIA is
    running. I can monitor SMTP with ZEN for Servers and tell it to let me know
    if it stops working. The problem is that the GWIA is not on the physical IP
    address; it's a virtual resource mapped to a secondary IP. While I can use
    the DB editor to add in the secondary IP, it's asking me for a MAC as well,
    and the MAC of course will change if the resource is failed over onto another
    node.
    Any ideas??

    Tony,
    It appears that in the past few days you have not received a response to your posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Do a search of our knowledgebase at http://support.novell.com/search/kb_index.jsp
    - Check all of the other support tools and options available at http://support.novell.com in both the "free product support" and "paid product support" drop down boxes.
    - You could also try posting your message again. Make sure it is posted in the correct newsgroup. (http://support.novell.com/forums)
    If this is a reply to a duplicate posting, please ignore and accept our apologies and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Multiple Compressor/Qmaster problems...clusters and batch monitor launch

    Hi All,
    I am continuing to have problems with Compressor and Qmaster. My original problem was that I was trying to create clusters to speed up my workflow. My computer is the only computer in the network. The issue came up when I created a cluster and it would work fine if I dragged in a QT from outside FCP to process in compressor, but if I tried to export a QT from FCP to compressor it would fail. Yesterday as I was about to leave work I was trying to export stuff through compressor and the batch monitor wouldn't launch. Sometimes I could get it to, but the QT it was exporting would disappear once it was processed.
    I've deleted the compressor/FCP prefs and I also tried reinstalling compressor/qmaster. I trashed all the files I was supposed to and it couldn't get rid of all of them because it said they were still in use. AHHH!!!! I'm getting a little frustrated. I called apple and in so many words, they said, "Well, that's what happens with Studio 2. Try reinstalling it."
    Help! I'm on FCP studio 2.

    As mentioned in other threads, virtual clusters are very tricky to set up properly and, unless you're doing a lot of H.264 encoding, there's almost no benefit in doing it.
    I highly suggest that you (and anyone struggling with VC's) pick up a copy of "Compressor 3 Quick Reference Guide, Brian Gary" and get a solid understanding of the environment VC's create - and what they're really good for.

  • Unable to monitor the webcache clustering using EM console

    Hi..
    Is it possible to monitor webcache clustering through the Enterprise Manager console?
    Help pls.....
    Regards
    Gayathri j

    Hi GPR,
    I assume there are no compilation errors and you are able to deploy the process successfully. Do you see any errors in the log file while deploying the process? You may want to check the $SOA_HOME/opmn/logs/default~oc4j_soa~xxx.log file.
    Do you see the process in the BPEL Console, and are you able to unit-test it?
    Regards
    Rohit

  • SCOM 2012 SP1 SQL Server 2008 R2 Clustering Monitoring?

    We have a new print management system that was set up using SQL Server 2008 R2 clustering.  Right now we have the SQL MP (monitoring and discovery) 6.3.173.1.  They had a failover occur a week ago and SCOM didn't throw any alerts.  As a temporary
    fix, I found some correlated events and set up a monitor for those events on the servers.  However, the system owner wants to know if there is an MP that will monitor for a SQL cluster failover event.  I have looked and looked and can't seem to find
    anything that gives a whole lot of detail on this.
    Thanks
    EFD
    Warm Fuzzies!

    Yes, you can monitor a SQL Server 2008 cluster using SCOM 2012 SP1, but you need to install the SQL MP and the Windows Cluster MP and enable the proxy agent on the physical nodes.
    You can refer to the link below (it is the same for SCOM 2012):
    http://blogs.technet.com/b/birojitn/archive/2010/04/14/sql-server-2008-cluster-monitoring.aspx
    Please remember, if you see a post that helped you please click "Vote As Helpful" and if it answered your question, please click "Mark As Answer"
    Mai Ali | My blog: Technical | Twitter:
    Mai Ali
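
    As a rough sketch of the "enable the proxy agent" step, this can also be done from the Operations Manager Shell instead of the console. The node names below are placeholders, and this assumes the OperationsManager PowerShell module is available:

        # Enable agent proxying on the SQL cluster's physical nodes so the cluster
        # (virtual) objects they discover can be inserted into SCOM.
        Import-Module OperationsManager
        Get-SCOMAgent -DNSHostName "sqlnode1.contoso.com", "sqlnode2.contoso.com" | Enable-SCOMAgentProxy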

  • Thread Monitoring in a clustered BPEL environment

    Hi BPEL community,
    does anybody know how I can monitor the "Pending Requests" and "Thread Allocation Activity" (BPEL Console - Threads) over all cluster-nodes? Inside the BPEL Console I only see the data of the cluster-node I'm logged in.
    I was not able to see an over-all cluster-nodes view of the load on the bpel-engine.
    Regards, Harald

    I am not familiar with anything called Quartz, but I think this issue should be handled by the task scheduler itself.
    In the place I work, the task scheduler we use (an in-house developed one) has the following approach:
    Once the task is posted it is in the "posted" state, and once a batch server (that's what we call the service that executes it) picks a task up, it changes the state to "executing". Once the execution is complete it changes the state to "ready". If an exception occurs it will abort the operation and set the state to "error".
    A batch server can pick up only the tasks in the "posted" state, so two services will not pick up the same task.
    By the way, tasks in the error state can be reset to the posted state by the user.
    You probably need a solution like this. Either you have to develop one or find one which accounts for the existence of multiple execution services.
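
    To make the "only one batch server picks up a posted task" rule concrete: the claim is usually a conditional update followed by a check of the affected row count, so two workers can never claim the same task. A minimal sketch, assuming a SQL Server table named dbo.Tasks and the SqlServer PowerShell module (both are illustrative, not part of the scheduler described above):

        # Try to claim task 42: the UPDATE only succeeds if the row is still in the 'Posted' state.
        Import-Module SqlServer
        $claim = "UPDATE dbo.Tasks SET State = 'Executing', Worker = 'batch01' " +
                 "WHERE TaskId = 42 AND State = 'Posted'; SELECT @@ROWCOUNT AS Claimed;"
        $result = Invoke-Sqlcmd -ServerInstance "SCHEDDB01" -Database "Scheduler" -Query $claim
        if ($result.Claimed -eq 1) { Write-Host "Task 42 claimed, executing..." }
        else { Write-Host "Task 42 was already taken by another batch server." }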

  • HP Managment Pack for Monitoring Gen8 Server DMS Service can be clustered or not.

    Hello ALL,
    I am trying to configure the HP Management Pack for monitoring Gen8 servers and I have encountered a very small problem, but it is now a big problem for me. Currently we have deployed one server to monitor the HP Gen8 server
    ESX environment. But we want to create a DR site where we add the same devices, which will let us fail over when the existing server is down or the HP DMC service is down. If anyone has tried this solution, in a DR or cluster setup, please let us know.
    Omkar umarani SCOM STUDENT

    Hi,
    If you are looking for a SCOM-side cluster approach to monitoring the Gen8 servers, then we can build multiple management servers within the management group, or create multiple management groups to monitor the Gen8 servers.
    More details:
    http://technet.microsoft.com/en-us/library/hh298610.aspx
    And if you are looking for ways to fail over the HP DMC service, then I would suggest you post in the HP forum.
    Regards,
    Yan Li

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!
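
    For the "replicate the one-off worker-bee VMs to the alternate host" idea mentioned above, Hyper-V Replica covers exactly that scenario without a failover cluster. A minimal sketch; the host names, VM name, and replica path are placeholders:

        # On the secondary host: allow it to receive replicas over Kerberos/HTTP (port 80).
        Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
            -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"

        # On the primary host: enable replication for a worker-bee VM and start the initial copy.
        Enable-VMReplication -VMName "MonitoringVM" -ReplicaServerName "HYPERV02" `
            -ReplicaServerPort 80 -AuthenticationType Kerberos
        Start-VMInitialReplication -VMName "MonitoringVM"

    Failed-over replicas have to be started manually (or by your own tooling), which fits the stated preference that nothing should happen automatically.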

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached on the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and if you're fine with your storage being a single
    point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least does cache I/O on both the client and server sides, and you can also use Storage Spaces as a back end for it (non-clustered), so read "write-back flash cache
    for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching
    Updated
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real benefit from MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was
    usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without
    any Microsoft iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    - Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
    - Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V Virtual Fibre Channel Overview.
    - iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
    role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
    the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other guys doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
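
    If you do go the "at least use an SMB share" route for VM storage, exposing a folder to the Hyper-V hosts is only a couple of commands. A rough sketch; the share path, domain, and host names are placeholders, and both the share ACL and the NTFS ACL must grant the hosts' computer accounts access:

        # Create an SMB share for Hyper-V storage and grant the two hosts' computer accounts full access.
        New-Item -Path "D:\VMStore" -ItemType Directory -Force | Out-Null
        New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
            -FullAccess 'CONTOSO\HYPERV01$', 'CONTOSO\HYPERV02$', 'CONTOSO\Hyper-V Admins'

        # NTFS permissions must also allow the computer accounts (the share ACL alone is not enough).
        icacls "D:\VMStore" /grant 'CONTOSO\HYPERV01$:(OI)(CI)F' 'CONTOSO\HYPERV02$:(OI)(CI)F'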

  • QMASTER hints 4 usual trouble (QM NOT running/CLUSTEREd nodes/Networks etc

    All, I just posted this with some hints & workaround with very common issues people have on this forum and keep asking concerning the use of APPLE QMASTER with FCP, SHAKE, COMPRESSOR and MOTION. I've had many over the last 2 years and see them coming up frequently.
    Perhaps these symptoms are fixed in FCS2 at MAY 2007 (now). However if not here's some ROTS that i used for FCP to compressor via QMASTER cluster for example. NO special order but might help someone get around the stuff with QMASTER V2.3, FCP V5.1.4, compressor.app V2.3
    I saw the latest QMASTER UI and usage at NAB2007 and it looked a little more solid with some "EASY SETUP" stuff. I hope it has been reworked underneath.. I guess I will know soon if it has.
    For most FCP/COMPRESSOR, SHAKE. MOTION and COMPRESSOR:
    • provide access from ALL nodes to ALL the source and target objects (files) on their VOLUMES. Simply MOUNT those volumes through the Apple file system (via NFS) using cmd+K or Finder/Go/Connect to Server, OR use an SSAFS such as XSAN™ where the file systems are all shared over FC, not the network. You will notice the CPUs going very busy for a short while. This is the Apple file system task... I guess it's doing "Spotlight stuff". It goes away after a few minutes.
    • set the COMPRESSOR preferences for "CLUSTER OPTIONS" to "Never copy source to Cluster". This means that all nodes can access your source and target objects (files) over NFS (as above). Failure to do this means LENGTHY times to COPY material back and forth, in some cases undermining the pleasure gained from initially using clustering (reduced job times).
    • DON'T mix the PHYSICAL or LOGICAL networks in your local cluster. I don't know why, but I could never get this to work. Physical means stick with either ETHERNET or FIREWIRE or your other option (AirPort etc., which will generally be way too slow and useless). Logical means keeping all nodes on the SAME subnet. You can do this simply by setting it up in System Preferences/QMASTER/Advanced tab under "Use Network Interfaces". In my current Quad I set this to use BUILT-IN ETHERNET 1, and in the MPBDC's I set this to their BUILT-IN ETHERNET.
    • LOGICAL NETWORKS (subnet): simply HARDCODE an IP address on the ETHERNET (for example) for your cluster nodes and the service controller. For example 3.1.1.x.... it will all connect fine.
    • PHYSICAL NETWORKS: As above, (1) DON'T MIX FireWire (IPoFW) and Ethernet (IPoE). (2) If you have more than one extra service node, USE A HUB or SWITCH. I went and bought a 10-port GbE HUB for about HK$400 (€40) and it worked fine. I was NEVER able to get a stable QMASTER system mixing FW and ETHERNET. (3) FWIW, using IP over FW caused me a LOAD of DISK errors and timeouts (I/O errors) on those DISKs that were FW400 (all gone now), but it showed this was not stable overall.
    • for the cluster controller node, MAKE SURE you set the CLUSTER STORAGE (System Preferences/QMASTER/shared cluster storage) for the CLUSTER CONTROLLER NODE so that it IS ON A SHARED volume (see above). This seems essential for SHAKE to work (if not, check the Qmaster errors in Console.app [see below]). If you have an SSAFS like XSAN™ then just add this cluster storage on a shared file path. Note that QMASTER does not permit the cluster storage to be on a NETWORK NODE for some reason. So in short, just MOUNT the volume where the SHARED CLUSTER file is maintained for the CLUSTER controller.
    • FCP - avoid EXPORT to COMPRESSOR from the TIMELINE - it never seems to work properly (see later). Instead EXPORT FROM SEQUENCE in the BROWSER - consistent results
    • FCP - "media missing " messages on EXPORT to COMPRESSOR.. seems a defect in FCP 5.1 when you EXPORT using a sequence that is NOT in the "root" or primary trry in the FCP PROJECT BROWSER. Simply if you have browser/bin A contains(Bin B (contains Bin C (contains sequence X))) this will FAIL (wont work) for "EXPORT TO COMPRESSOR" if you use EXPORT to COMPRESSOR in a FCP browser PANE that is separately OPEN. To get around this, simply OPEN/EXPOSE the triangles/trees in the BROWSER PANE for the PROJECT and select the SEQUENCE you want and "EXPORT to COMPRESSOR" from there. This has been documented in a few places in this forum I think.
    • FCP -> COMPRESSOR -> .M2V (for DVDSP3): some things here. EXPORTING from an FCP SEQUENCE with CHAPTER MARKERS to an MPEG2 .M2V encoding USING A CLUSTER causes errors in the placement of the chapter makers when it is imported to DVDSP3. In fact CONSISTENTLY, ALL the chapter markers are all PLACED AT THE END of the TRACK in DVD SP# - somewhat useless. This seems to happen ALSO when the source is an FCP reference movie, although inconsistent. A simple work around if you have the machines is TRUN OF SEGMENTING in the COMPRESSOR ENCODER inspector. let each .M2V transcode run on the same service node. FOr the jobs at hand just set up a CLUSTER and controller for each machine and then SELECT the cluster (myclusterA, hisclusterb, herclusterc) for each transcode job.. anyway for me.. the time spent resolving all this I could have TRANSCODED all this on my QUAD and it would all have ben done by sooner! (LOL)
    • CONSOLE logs: IF QMASTER fails, I would suggest your fist port of diagnosis should be /Library/Logs/Qmaster in there you will see (on the controller node) compressor.log, jobcontroller.com.apple.qmaster.cluster.admin.log, and lots of others including service controller.com.apple.qmaster.executorX.log (for each cpu/core and node) andd qmasterca.log. All these are worth a look and for me helped me solve 90% of my qmaster errors and failures.
    • MOTION 3 - fwiw.. EXPORT USING COMPRESSOR to a CLUSTER seems to fail EVERY TIME.. seems MOTION is writing stuff out to a /var/spool/qmaster
    TROUBLESHOOTING QMASTER: IF QMASTER seems buggered up (hosed), then follow these steps PRIOR to restarting you machines.
    go read the TROUBLE SHOOTING in the published APPLE docs for COMPRESSOR, SHAKE and "SET UP FOR DISTRIBUTED PROCESSING" and serach these forums CAREFULLY.. the answer is usually there somewhere.
    ELSE THEN,, try these steps....
    You'll feel that QMASTER is in trouble when you
    • see that the QMASTER ICON at the top of the screen says 'NO SERVICES" even though that node is started and
    • that the APPLE QMASTER ADMINISTRATOR is VERY SLOW after an 'APPLY' (like minutes with a SPINNING BEACHBALL), or it WON'T LET YOU DELETE a cluster, or you see 'undefined' nodes in your cluster (meaning that one was shut down or had a network failure)..... all this means it's going to get worse and worse. So DON'T submit any more work to QMASTER... best to count your gains and follow this list next.
    (a) in COMPRESSOR.app / RESET BACKGROUND PROCESSES (its under the COMPRESSOR name list box) see if things get kick started but you will lose all the work that has been done up to that point for COMPRESSOR.app
    (b) if that doesn't work, then on EACH node in that cluster, STOP QMASTER (System Preferences/QMASTER/Setup [set 0 minutes in the prompt and OK]). Then, when STOPPED, RESET the shared services by OPTION+CLICKING on the "START" button to reveal "RESET SERVICES". Then click "START" on each node to start the services. This has the effect of REMOVING, or in the case where the CLUSTER CONTROLLER node is "RESET", terminating the cluster that's under its control. If so, simply go to APPLE QMASTER ADMINISTRATOR and REDEFINE it. Then restart your cluster.
    (c) if step (b) is no help, consult the QMASTER logs in /Library/Logs/Qmaster (using Console.app) for any FILE MISSING or FILE NOT FOUND or FILE ERROR. Look carefully for the NODENAME (the machine_name.local) where the error may have occurred. Sometimes it's very chatty, other times it is not. Also look in the BATCH MONITOR OUTPUT for error messages. Often these are NEVER written (or I can't find them) in /var/logs... try to resolve any issues you can see (mostly VOLUME or FILE path issues, from my experience).
    (d) if still no joy, then try removing all the 'dead' cluster files from /var/tmp/qmaster, /var/spool/qmaster, and also the file directory that you specified above for the controller to share the clustering. For Shake issues, do the same (note also where the Shake shared cluster file path is - it can also be specified in the RENDER FILEOUT node's prompt).
    (e) if all this WON'T help you, it's time to get the BIG hammer out. Simply STOP all nodes if not stopped (if status/mode is "STOPPING" then QMASTER is truly buggered), DISMOUNT the network volumes you had mounted, and RESTART ALL YOUR NODES. This has the effect of RESTARTING all the qmasterd tasks. Yes, sure, you can go in and SUDO restart them, but it is dodgy at best because they never seem to terminate cleanly (kill -9 etc.), or FORCE QUIT is what one ends up doing and then STILL having to restart.
    f) after restart perform steps from (B) again and it will be usually (but not always) right after that
    LAstly - here's some posts I have made that may help others for QMASTER 2.3 .. and not for the NEW QMASTER as at MAy 2007...
    Topic "qmasterd not running" - how this happened and what we did to fix it. - http://discussions.apple.com/message.jspa?messageID=4168064#4168064
    Topic: IP over Firewire AND Ethernet connected cluster? http://discussions.apple.com/message.jspa?messageID=4171772#4171772
    LAstly spend some DEDICATED time to using OBJECTIVE keywords to search the FINAL CUT PRO, SHAKE, COMPRESSOR , MOTION and QMASTER forums
    hope thats helps.
    G5 QUAD 8GB ram w/3.5TB + 2 x 15in MBPCore   Mac OS X (10.4.9)   FCS1, SHAKE 4.1

    Warwick,
    Thanks for joining the forum and for doing all this work and posting your results for our benefit.
    As FCP2 arrives in our shop, we will try once again to make sense of it and to see if we can boost our efficiencies in rendering big projects and getting Compressor to embrace five or six idle Macs.
    Nonetheless, I am still in "Major Disbelief Mode" that Apple has done so little to make this software actually useful.
    bogiesan

  • References from nested clusters

    Hi,
    Currently our station can test only one product at the time, but we modified the wiring so now we can attach 2 units to the same station. A new application must be written to handle the new scenario. The test has to be executed several times on both the units. The execution is sequential so unit1 first then unit2.
    I have created a CONTROL cluster with the following elements
    - bool: boolean button (means unit enabled/disabled)
    - PARAMS cluster: various text rings. This cluster is disabled and greyed out once the user enabled the starter.
    - MEASUREMENT graph
    Rules:
    - the unit cannot be enabled if any of the text rings is unconfigured.
    - the test must be interrupted immediately for the given unit if the enabled button is pressed during the test (when the user disables the unit at runtime). So a reference to this button must be used and continuously monitored.
    - the test must be interrupted immediately for both units if the stop button is pressed during the test.
    - after a test is completed the results must be evaluated, and the unit must be disabled if the measured values are outside of the limits.
    Now... this would be a very easy task if I had only one unit. I would just create the necessary control references, wire them to the measurement VI, and there we go.
    But it's getting inconveniently complex when I have to control 2 units. I cannot treat the control elements as an array (like an array with 2 CONTROL clusters) because then I cannot disable the PARAMS clusters independently.
    I don't see an easy way to add 2 of the CONTROL clusters to a new cluster (to treat them as one cluster); I am not sure how to get the references that way. (If I combine them into one cluster it's pretty easy to get property node/value for any of the elements, but I need control refs.)
    So I handle both clusters as independent controls on the front panel, which means I have to add a lot of duplication to handle both units in the same way. I find this very inconvenient and error prone, plus it complicates the block diagram.
    I am wondering what would be the right approach to handle this type of problem.
    (I have tried to create reentrant VIs but I gave up because I had to communicate too much between my main VI and the reentrant VIs. That made the code hard to follow.)
    I use LV2012, but the attachment is in LV8 so hopefully everybody can open it.
    Thanks
    Attachments:
    Cluster.vi ‏16 KB

    Well... if I create a reference to the main cluster then I can use the Controls[] property, which gives me back 3 references in an array: first the button, second the params cluster, third the graph (maybe the order is different; it doesn't matter for now). But when I wire the params cluster reference to another property node it does not offer me a Controls[] property, so I cannot access the contents of the cluster itself. I might be able to use some sort of cast function, but it's really counter-intuitive.
    I always have to know the order of the elements in any given cluster, and if I change the order my code will break instantly. And why should I refer to my objects as control[][0], control[][1], etc. instead of by a real name?
    Not sure if this can be resolved in the current LabVIEW environment...
    The workaround I made is that I created a cluster in which each element is a reference. I wire the button, graph, and params cluster references into it, and as I have two units to control I made an array of this cluster.
    Not sure if you agree, but this overcomplicates the code, and I had to create an extra cluster just to access the references of my original clusters. Pain in the back.
    Let me know your thoughts!
    thx.

  • Announcement: Super 4.00 - a suite of EJB/J2EE monitoring/admin tools.

              Announcement: Super 4.00 - a suite of EJB/J2EE monitoring/admin tools.
              Acelet is the leader in the J2EE tools area. If you google "j2ee tools",
              "j2ee logging", "j2ee scheduler" or the like, you will find Acelet
              at the top of the results.
              Super 4.00 comes with:
              SuperEnvironment
              SuperLogging
              SuperPeekPoke
              SuperReport
              SuperScheduler
              SuperStress
              and SuperPatrol, as a schedule job.
              The evaluation edition can be anonymously downloaded from:
              http://www.ACElet.com.
              Super is a component based monitor and administration tool
              for EJB/J2ee. It provides built-in functionality as well as
              extensions, as SuperComponents. Users can install
              SuperComponents onto it, or uninstall them from it.
              Super has the following functions:
              * A J2EE monitor.
              * A gateway to J2EE/EJB servers from different vendors.
              * A framework holding user defined SuperComponents.
              * A full-featured J2EE logging and J2EE tracing tool for centralized,
              chronological logging.
              * An EJB tool for Peeking and Poking attributes from EJBs.
              * An EJB Stress test tool.
              * A J2EE global environment tool.
              * A J2EE report tool.
              * A J2EE Scheduler tool.
              * A J2EE Business patrol tool.
              It is written entirely in the Java(TM) programming language.
              The current version support:
              * JOnAS 2.4 and 2.6
              * SunONE 7.0
              * Universal servers.
              * Weblogic 6.1, 7.0 and 8.1
              * Websphere 4.0 and 5.0.2
              * jBoss 3.0 and 3.2
              ********** What is new:
              Version 4.00 November, 2003
              Enhancement:
              1. Support for both native protocol (RMI-IIOP) mode and HTTP/HTTPS
              (with/without proxy) protocol mode for SuperEnvironment,
              SuperLogging, SuperReport and SuperScheduler.
              2. SuperLogging 4.00: tracing can work on both live database and retired database.
              3. SuperReport 3.00: works for both live database and retired database.
              4. SuperScheduler 3.00: add URL job type (for Servlet/JSP). Add DoerTalker Table
              Panel.
              Bug fix:
              1. SuperScheduler 3.00: Interval change did not take effect until restart Super.
              Version 3.00 July, 2003
              Enhancement:
              1. SuperLoggingLibrary 3.00: New implementation for change scope adding "Smart"
              scope,
              with enhancements and bug fixes.
              2. SuperLoggingLibrary 3.00: Support mail server which requires user name and
              password.
              Add MenuTreePanel.
              3. Improved GUI and document.
              4. Add support to WebLogic 8.1.
              Bug fix:
              1. SuperScheduler 2.0: Fix a bug in FutureView for Hourly and Minutely.
              2. SuperScheduler 2.0: Startup should never be reported as missed.
              3. SuperScheduler 2.0: Could not reset job for existing task in some situation.
              Version 2.20 Jan. 2003
              Enhancement:
              1. Add desktop and start menu shortcuts for MS-Windows.
              2. Add support for SunONE 7, JOnAS 2.6 and jBoss 3.0.
              3. SuperLogging 2.40: Add new sendAlarmEmail() method.
              4. SuperScheduler 1.40: Add SuperSchedulerEJB for managing when
              direct database is not practical; Allow user to choose
              favorite logging software; Add Last day as Monthly
              repeating attribute.
              Change:
              1. Change Unusual to PatrolAlarm. The name "Unusual" was misleading.
              Bug fix:
              1. SuperEnvironment 1.31: Bug fix: if database is broken, could not
              open Environment Manager.
              2. SuperLogging client 1.52: Annoying exception thrown when you use
              JDK 1.4 (the program runs okay).
              3. SuperPeekPoke 1.61: Fix bug where input object contains
              java.lang.Double and alike.
              4. SuperScheduler 1.40: Bug fixes in: Memory leak; Reporting
              PatrolAlarm for SuperPatrol; Composite task with members;
              Non-scheduled run on other host; Around edges of last
              days in Monthly with holiday policy.
              Version 2.10 July 2002
              Enhancement:
              1. SuperScheduler 1.3: Add Future View to check future schedule in
              both text and Gantt-chart mode.
              2. SuperScheduler 1.3: Add graphic Gantt view for monitoring task's
              activities.
              3. SuperEnvironment 1.3: uses new graphic package adding print and
              preference facilities.
              4. SuperPeekPoke 1.6: uses new graphic package adding print and
              preference facilities.
              5. SuperStress 1.21: uses new graphic package.
              Bug fix:
              1. SuperStress 1.21: fixed graphic related bugs.
              Version 2.01 June 2002
              Enhancement:
              1. Add options for Look & Feel.
              2. Preference is persistent now.
              Bug fix:
              1. Installation for WebLogic 7.0: extEnv may not be installed on the
              right place, so SuperLibrar on the server side was not loaded and
              causes other problems.
              Version 2.00 June 2002
              Enhancement:
              1. SuperScheduler 1.2: All copies of SuperScheduler refresh themselves
              when any Doer causes things to change.
              2. SuperScheduler 1.2: Support default HTML browser for reading HTML document.
              3. SuperReport 1.2: Support default HTML browser for reading HTML document.
              4. Support WebLogic 7.0.
              5. SuperEnvironment 1.21: Database Panel appears when it is necessary.
              6. SuperEnvironment 1.21: New SuperEnvironment tour.
              Bug fix:
              1. WebSphere Envoy did not always list all JNDI names.
              Version 1.90 May 2002
              Enhancement:
              1. Rewritten SuperLogging engine. Add Alarm Email on SuperLogging.
              2.Rewritten SuperScheduler allowing multiple Doers. Add support to holiday policy,
              effective period. Add Patrol job type as SuperPatrol.
              3. Add support for both JOnAS and jBoss.
              4. Add more elements on Report criteria.
              Change:
              1. Now, both left and right mouse clicks are the same on Table Panel: toggle ascend
              and descend.
              2. New log database.
              Bug fix:
              1. Alert email should be sent once in the interval, regarding number of servers
              in the clustering.
              2. Minor bug fixes to make errors handled better on SuperLogging.
              3. If withFileInfo or withTimestamp are changed alone, Style Panel did not save
              them.
              4. Rewritten SuperLogging and SuperScheduler with many bug fixes.
              Version 1.80 March 2002
              Enhancement:
              1. Add new component: SuperScheduler
              Bug fix:
              1. SuperLogging: Verbose should ignore class registration.
              2. SuperLogging-tracing: an exception was thrown if the java class without package
              name.
              Version 1.70 January 2002
              Enhancement:
              1. SuperLogging: Scope can dynamically change both for upgrade to downgrade (for
              weblogic 6.1, need download an application).
              2. Add alias names for log threshold as new Java suggests.
              3. New component: SuperReport.
              Change:
              1. SuperLogging: Log database parameters are specified in a properties file, instead
              of EJB's deployment descriptor. It is more convenient and it avoids some potential
              problems. No change for development, easier for administration.
              Bug fix:
              1. Add Source Path Panel now accepts both directory and jar file.
              2. Bug in SuperEnvironment example (for version 1.60 only).
              Version 1.60 December 2001
              Enhancement:
              1. SuperPeekPoke and SuperStress can use user defined dynamic argument list.
              2. Add timeout parameter to logging access.
              3. New installation program with A). Easy install. B). Remote command line install.
              4. Support EJB 2.0 for Weblogic 6.1.
              5. Support SuperPeekPoke, SuperEnvironment and SuperStress for Websphere 4.0 (SuperLogging
              was supported since version 1.5).
              Change:
              1. Poke: argument list is set at define time, not invoke time.
              2. Default log database change to server mode from web server mode, booting performance
              to 10-20 times.
              Bug fix:
              1. If the returned object is null, Peek did not handle it correctly.
              2. If the value was too big, TimeSeries chart did not handle it correctly. Now
              it can handle up to 1.0E300.
              3. Help message was difficult to access in installation program.
              4. Source code panel now both highlights and marks the line in question (before
              it was only highlight using JDK 1.2, not JDK 1.3).
              5. Delete an item on PeekPoke and add a new one generated an error.
              Version 1.50 August, 2001
              Enhancement:
              1. Source code level tracing supports EJB, JSP, java helper and other
              programs which are written in native languages (as long as you
              write correct log messages in your application).
              2. Redress supports JSP now.
              3. New installation with full help document: hope it will be easier.
              4. Support WebSphere 4.0
              Version 1.40 June, 2001
              Enhancement:
              1. Add SuperEnvironment which is a Kaleidoscope with TableView, TimeSeriesView
              and PieView for GlobalProperties.
              GlobalProperties is an open source program from Acelet.
              2. SuperPeekPoke adds Kaleidoscope with TableView, TimeSeriesView and PieView.
              Changes:
              1. The structure of log database changed. You need delete old installation and
              install everything new.
              2. The format of time stamp of SuperLogging changed. It is not locale dependent:
              better for report utilities.
              3. Time stamp of SuperLogging added machine name: better for clustering environment.
              Bug fix:
              1. Under JDK 1.3, when you close Trace Panel, the timer may not be stopped and
              Style Panel may not show up.
              Version 1.30 May, 2001
              Enhancement:
              1. Add ConnectionPlugin support.
              2. Add support for Borland AppServer.
              Version 1.20 April, 2001
              Enhancement:
              1. Redress with option to save a backup file
              2. More data validation on Dump Panel.
              3. Add uninstall for Super itself.
              4. Add Log Database Panel for changing the log database parameters.
              5. Register Class: you can type in name or browse on file system.
              6. New tour with new examples.
              Bug fix:
              1. Redress: save file may fail.
              2. Install Bean: some may fail due to missing manifest file. Now, it is treated
              as foreign beans.
              3. Installation: Both installServerSideLibrary and installLogDatabase can be worked
              on the original file, do not need copy to a temporary directory anymore.
              4. PeekPoke: if there is no stub available, the JNDI list would be empty for Weblogic5-6.
              Now it picks up all available ones and gives warning messages.
              5. Stress: Launch>Save>Cancel generated a null pointer exception.
              Changes:
              1. installLogDatabase has been changed from .zip file to .jar file.
              2. SuperLogging: If the log database is broken, the log methods will not try to
              access the log database. It is consistent with the document now.
              3. SuperLogging will not read system properties now. You can put log database
              parameters in SuperLoggingEJB's deployment descriptor.
              Version 1.10 Feb., 2001
              Enhancement:
              1. Re-written PeekPoke with Save/Restore functions.
              2. New SuperComponent: SuperStress for stress test.
              3. Set a mark at the highlighted line on the Source Code
              Panel (as a work-a-round for JDK 1.3).
              4. Add support for WebLogic 6.0
              Bug fix:
              1. Uninstall bean does physically delete the jar file now.
              2. WebLogic51 Envoy may not always list all JNDI names. This is fixed.
              Version 1.00 Oct., 2000
              Enhancement:
              1. Support Universal server (virtual all EJB servers).
              2. Add Lost and Found for JNDI names, in case you need it.
              3. JNDI ComboBox is editable now, so you can PeekPoke not listed JNDI name (mainly
              for Envoys which do not support JNDI list).
              Version 0.90: Sept, 2000
              Enhancement:
              1. PeekPoke supports arbitrary objects (except for Vector, Hashtable
              and alike) as input values.
              2. Reworked help documents.
              Bug fix:
              1. Clicking Cancel button on Pace Panel set 0 to pace. It causes
              further time-out.
              2. MDI related bugs under JDK 1.3.
              Version 0.80: Aug, 2000
              Enhancement:
              1. With full-featured SuperLogging.
              Version 0.72: July, 2000
              Bug fix:
              1. Ignore unknown objects, so Weblogic5.1 can show JNDI list.
              Version 0.71: July, 2000
              Enhancement:
              1. Re-worked peek algorithm, doing better for concurrent use.
              2. Add cancellable Wait dialog, showing Super is busy.
              3. Add Stop button on Peek Panel.
              4. Add undeploy example button.
              Bug fix:
              1. Deletion on Peek Panel may cause error under JDK 1.3. Now it works for both
              1.2 and 1.3
              Version 0.70: July, 2000
              Enhancement:
              1. PeekPoke EJBs without programming.
              Bug fix:
              1. Did not show many windows under JDK 1.3. Now it works for both 1.2 and 1.3
              Changes:
              1. All changes are backward compatible, but you may need to recompile monitor
              windows defined by you.
              Version 0.61: June, 2000
              Bug fix:
              1. First time if you choose BUFFER as logging device, message will not show.
              2. Fixed LoggingPanel related bugs.
              Version 0.60: May, 2000
              Enhancement:
              1. Add DATABASE as a logging device for persistent logging message.
              2. Made alertInterval configurable.
              3. Made pace for tracing configurable.
              Bug fix:
              1. Fixed many bugs.
              Version 0.51, 0.52 and 0.53: April, 2000
              Enhancement:
              1. Add support to Weblogic 5.1 (support for Logging/Tracing and
              user defined GUI window, not support for regular monitoring).
              Bug fix:
              1. Context sensitive help is available for most of windows: press F1.
              2. Fix installation related problems.
              Version 0.50: April, 2000
              Enhancement:
              1. Use JavaHelp for help system.
              2. Add shutdown functionality for J2EE.
              3. Add support to Weblogic 4.5 (support for Logging/Tracing and
              user defined GUI window, not support for regular monitoring).
              Bug fix:
              1. Better exception handling for null Application.
              Version 0.40: March, 2000
              Enhancement:
              1.New installation program, solves installation related problems.
              2. Installation deploys AceletSuperApp application.
              3. Add deploy/undeploy facilities.
              4. Add EJB and application lists.
              Change:
              1.SimpleMonitorInterface: now more simple.
              Version 0.30: January, 2000
              Enhancement:
              1. Add realm support to J2EE
              2. Come with installation program: you just install what you want
              the first time you run Super.
              Version 0.20: January, 2000
              Enhancement:
              Add support to J2EE Sun-RI.
              Change:
              1. Replace logging device "file" with "buffer" to be
              compliant to EJB 1.1. Your code do not need to change.
              Version 0.10: December, 1999
              Enhancement:
              1. provide SimpleMonitorInterface, so GUI experience is
              not necessary for developing most monitoring applications.
              2. Sortable table for table based windows by mouse
              click (left or right).
              Version 0.01 November., 1999:
              1. Bug fix: An exception thrown when log file is large.
              2. Enhancement: Add tour section in Help information.
              Version 0.00: October, 1999
              Thanks.
              
