VoIP monitoring tools - real-world application.

We're evaluating several products for an enterprise-level VoIP monitoring application, including, but not limited to, NetScout and NetQoS.
The company is undergoing a major infrastructure upgrade, moving to Cisco's UCM 7.x platform, and we'd like to get something in place that does call performance monitoring well (delay, jitter, and MOS scores, as well as infrastructure monitoring), with relatively easy-to-use and robust reporting tools.  Reporting is a big bonus, as it would be used for troubleshooting as well as management reporting and (hopefully) capacity planning.
Anyone using NetQoS or NetScout specifically for VoIP monitoring and reporting?  Pros and cons?  Looking for a company with a good market footprint and active development.
NetQoS seems to have a good handle on this specific need - they can do endpoint baselining and real-time monitoring as well as traceroutes.  Their reporting is pretty robust.  Comments or concerns about NetQoS?  Other tools that can do the same things, specifically?
Importantly, is anyone using something successfully to monitor Cisco UCM 7.x, including cluster infrastructure and voice gateways?  Both products can leverage SNMP really well, so we're looking at that.
Has anyone thought of or managed to use any sort of data capture platform of this kind to also generate meaningful CDR reporting?  If so, what platform and how?
(Please, nobody just cut and paste talking points from some website - I see a lot of that happening here and there in the forums.  I have all the available data I need from vendor websites, their admin guides, their marketing materials... )
thanks!

Cisco's Unified Operations Manager and SolarWinds both do an *ok* job at monitoring these things. Cisco has really dropped the ball on SIP monitoring at the gateway level. They are beginning to develop SNMP MIBs for call-activity monitoring but are pushing people to IOS 15.0 to see them. So... I am not sure what you are running at the gateway level; if it's SIP, let me know what you find.
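
Since both posts above come back to SNMP, here is a minimal sketch of polling a gateway directly, using Python and the third-party pysnmp library (the library choice is my assumption; any SNMP toolkit works). It reads standard MIB-II objects that any SNMP-capable device answers; once you confirm which Cisco voice/call-activity MIBs your IOS version supports, you would substitute those OIDs. The host name and community string are placeholders.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    def snmp_get(host, community, oid):
        # Fetch one OID over SNMP v2c; raise on any agent or transport error.
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),            # mpModel=1 -> v2c
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity(oid)),
        ))
        if error_indication:
            raise RuntimeError(str(error_indication))
        if error_status:
            raise RuntimeError(error_status.prettyPrint())
        return var_binds[0][1]

    # Standard MIB-II objects; swap in the voice/call-activity OIDs
    # for your IOS version once you know the gateway supports them.
    SYS_DESCR  = "1.3.6.1.2.1.1.1.0"
    SYS_UPTIME = "1.3.6.1.2.1.1.3.0"

    if __name__ == "__main__":
        print(snmp_get("gw1.example.com", "public", SYS_DESCR))

And on the CDR question: UCM can deliver CDRs as flat files, and a small script is often enough to turn those into the "meaningful reporting" asked about above. A rough sketch, assuming a cleaned CSV export with a single header row (raw CUCM CDR files also carry a second header line of field types, which you would skip); callingPartyNumber and duration are standard CUCM CDR field names, but verify the schema against the CDR documentation for your release.

    import csv
    from collections import Counter

    CDR_FILE = "cdr_export.csv"        # hypothetical path to an exported file

    calls, total_secs = Counter(), Counter()
    with open(CDR_FILE, newline="") as f:
        for row in csv.DictReader(f):
            caller = row["callingPartyNumber"]
            calls[caller] += 1
            total_secs[caller] += int(row["duration"] or 0)

    # Simple "top talkers" report: call count and total minutes per caller.
    for ext, n in calls.most_common(10):
        print(f"{ext}: {n} calls, {total_secs[ext] / 60:.1f} minutes")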

Similar Messages

  • Where can I check in the Real-Time Monitoring Tool if there are active calls in progress?


    Hi
    Select CallManager --> Call Process --> Call Activity
    HTH
    Regards
    Carlo

  • VoIP Monitoring and Management Tools

    Does anyone have suggestion on VoIP monitoring and management tools for large enterprise voip infrastructure?
    We are in the final phase of an 8,000-phone cutover across two locations (two clusters) and need some advice on choosing the right monitoring system.
    I saw that Qovia has Cisco-approved monitoring tools.
    Has anyone ever used it? Any suggestions or advice?
    Thanks in advance
    Bonaldi Kresnanto

    Bonaldi,
    Monitoring voice quality is just one aspect of ongoing operations. It is equally (if not more) important to have the ability to remotely troubleshoot end-user problems, manage configuration changes, and perform active testing on a regular basis before users are impacted.
    In addition, whenever you make changes to your IPT system (e.g. CallManager upgrades/patches, IOS upgrades, power outages) you'll definitely want the ability to verify the system is still operating as designed. Using the active tests you can do this easily and in an automated fashion.
    ClarusIPC Operations is the only product that addresses these needs. It is also just as valuable during deployment as in ongoing operations, helping you make changes and deploy new phones with the same product you use day-to-day.
    I'd like to talk to you more about the product and how it can be used in your environment.
    David Roberts
    [email protected]
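
    Product pitches aside, the simplest "active test" you can script yourself is a SIP OPTIONS ping against a gateway or proxy: an OPTIONS request that any compliant SIP stack answers without setting up a call. Below is a minimal sketch in Python using a raw UDP socket; the addresses are placeholders, and it assumes the target listens for SIP on UDP 5060.

        import socket, uuid

        GW_HOST, GW_PORT = "10.0.0.1", 5060     # hypothetical gateway
        LOCAL_IP = "10.0.0.50"                  # an address the gateway can reach

        def sip_options_ping(timeout=2.0):
            # Send one SIP OPTIONS request; return the response status line or None.
            branch = "z9hG4bK" + uuid.uuid4().hex[:16]
            msg = (
                f"OPTIONS sip:{GW_HOST} SIP/2.0\r\n"
                # rport asks the far end to reply to our actual source port
                f"Via: SIP/2.0/UDP {LOCAL_IP}:5060;rport;branch={branch}\r\n"
                f"Max-Forwards: 70\r\n"
                f"From: <sip:probe@{LOCAL_IP}>;tag={uuid.uuid4().hex[:8]}\r\n"
                f"To: <sip:{GW_HOST}>\r\n"
                f"Call-ID: {uuid.uuid4().hex}\r\n"
                f"CSeq: 1 OPTIONS\r\n"
                f"Content-Length: 0\r\n\r\n"
            )
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            try:
                sock.sendto(msg.encode(), (GW_HOST, GW_PORT))
                data, _ = sock.recvfrom(4096)
                return data.decode(errors="replace").splitlines()[0]  # e.g. "SIP/2.0 200 OK"
            except socket.timeout:
                return None
            finally:
                sock.close()

        print(sip_options_ping() or "no response: down, or SIP not listening")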

  • Flex and large real world b2b applications

    Hi all.
    I am new to Flex. I am acting in an architect capacity to
    review the potential in Flex to become the client presentation
    layer for a classic ASP/SQL application. I am seeking a
    cross-browser, cross-platform, zero-installation,
    just-in-time-delivery, rich user experience web application client.
    I think I'm in the right place.
    My aim in creating this post is to solicit feedback on
    approaches and techniques on how to plan and execute a major-system
    re-write into Flex, what is in-scope and what is not, etc. With the
    Flex team putting the final touches into release 1.0, this might be
    considered a bit too soon to ask these questions, but worthy of
    response if Flex is to be taken as anything more than
    ‘something to do with that flash games thing that my kids
    play on’.
    One of the key issues for my client company, which I believe
    will be typical for many in the same situation, is to retain
    current investment in the system by re-use of the business logic
    and DB access layers. Basically the back-end is not broken and is
    well prepared for generation of XML instead of HTML. What is
    considered weak, by nature of the web browser's poor user-interface
    abilities, is the client side of the system.
    The company has a small, loyal and technically able workforce
    who are very familiar with the current system, which is written
    using classic ASP and SQL Server, HTML and JavaScript. The company
    is risk- and runaway-cost-averse. It has held back from jumping into
    .Net for fear of getting into a costly 5 year revision cycle as
    .Net matures. The AJAX approach is another potentially fruitful
    client-only route but is likely to be a painful way forward as
    there is no major technology vendor leading the charge on
    standards. A Java approach would meet the user interface
    improvement needs but would require replacing or retraining the
    current workforce and a paradigm shift in the technology losing all
    of the middle-tier database logic.
    The ideal is a stable zero-installation web-client that can
    communicate with a server back-end consisting of the same, or
    slightly modified or wrapped business logic and db configuration as
    the current system and that could then run side-by-side during
    switchover.
    If this is possible then risk-averse organisations have a
    way forward.
    The problem is that, from several docs and articles on
    Adobe's web site, there seems to be some careful but vague
    positioning of the capability of Flex in terms of application
    complexity and depth. Also, the demos that are available seem to
    be quite lightweight compared to real-world needs. These apps
    ‘seem’ to work in a mode where the entire application
    is downloaded in one-hit at user initiation. The assumption is that
    the user will be prepared to pay some wait time for a better UX,
    but there must be a limit.
    Question: How does one go about crafting in Flex what would
    have been a 300-page website when produced in HTML? Is this
    practical? To create a download containing the drawing instructions
    and page-logic for 300 pages would probably cause such a delay at
    user-initiation that it would not be practical.
    There are many further questions that span from here, but
    let's see what we get back from the post so far.
    Looking forward to reading responses.
    J.

    You're absolutely in the right place (IMO)...
    Cynergy Systems can help you get started. We are a Flex
    Alliance partner,
    and have developed some extremely complex RIAs with Flex.
    Contact our VP of Consulting - Dave Wolf for more info:
    [email protected]
    Paul Horan
    Cynergy Systems, Inc.
    Macromedia Flex Alliance Partner
    http://www.cynergysystems.com
    Office: 866-CYNERGY
    "TKJames" <[email protected]> wrote in
    message
    news:[email protected]...
    > Hi all.
    >
    > I am new to Flex. I am acting in an architect capacity
    to review the
    > potential
    > in Flex to become the client presentation layer for a
    classic ASP?SQL
    > application. I am seeking a cross-browser,
    cross-platform,
    > zero-installation,
    > just-in-time-delivery, rich user experience web
    application client. I
    > think I'm
    > in the right place.
    >
    > My aim in creating this post is to solicit feedback on
    approaches and
    > techniques on how to plan and execute a major-system
    re-write into Flex,
    > what
    > is in-scope and what is not, etc. With the Flex team
    putting the final
    > touches
    > into release 1.0, this might be considered a bit too
    soon to ask these
    > questions, but worthy of response if Flex is to be take
    as anything more
    > than
    > ?something to do with that flash games thing that my
    kids play on?.
    >
    > One of the key issues for my client company, which I
    believe will be
    > typical
    > for many in the same situation, is to retain current
    investment in the
    > system
    > by re-use of the business logic and DB access layers.
    Basically the
    > back-end is
    > not broken and is well prepared for generation of XML
    instead of HTML.
    > What is
    > considered weak, by nature of the web browsers poor user
    interface
    > abilities,
    > is the client side of the system.
    >
    > The company has a small, loyal and technically able
    workforce who are very
    > familiar with the current system, which is written using
    classic ASP and
    > SQL
    > Server , HTML and JavaScript. The company is risk and
    runaway cost
    > averse. It
    > has held back from jumping into .Net for fear of getting
    into a costly 5
    > year
    > revision cycle as .Net matures. The AJAX approach is
    another potentially
    > fruitful client-only route but is likely to be a painful
    way forward as
    > there
    > is no major technology vendor leading the charge on
    standards. A Java
    > approach
    > would meet the user interface improvement needs but
    would require
    > replacing or
    > retraining the current workforce and a paradigm shift in
    the technology
    > losing
    > all of the middle-tier database logic.
    >
    > The ideal is a stable zero-installation web-client that
    can communicate
    > with a
    > server back-end consisting of the same, or slightly
    modified or wrapped
    > business logic and db configuration as the current
    system and that could
    > then
    > run side-by-side during switchover.
    >
    > If this is possible then risk adverse organisations have
    a way forward.
    >
    > The problem is, that from several docs and articles on
    Adobe's web site,
    > there
    > seems to be some careful but vague positioning of the
    capability of Flex
    > in
    > terms of application complexity and depth. Also, the
    demo's that are
    > available
    > seem to be quite lightweight compared to real-world
    needs. These apps
    > ?seem? to
    > work in a mode where the entire application is
    downloaded in one-hit at
    > user
    > initiation. The assumption is that the user will be
    prepared to pay some
    > wait
    > time for a better UX, but there must be a limit.
    >
    >
    Question:: How does one go about crafting in Flex what would
    have
    > been
    > a 300-page website when produced in HTML? Is this
    practical? To create a
    > download containing the drawing instructions and
    page-logic for 300 pages
    > would
    > probably cause such a delay at user-initiation that it
    would not be
    > practical.
    >
    > There are many further questions that span from here,
    but lets see what we
    > get
    > back from the post so far.
    >
    > Looking forward to reading responses.
    >
    > J.
    >
    >

  • Any real-world e-commerce application using HTMLDB?

    Hi,
    Any real-world e-commerce application using HTMLDB?
    If yes, can you please provide the web links?
    Thanks!
    Steve

    That's why I said "depends on your definition".
    According to Wikipedia, the definition of e-commerce is: "Electronic commerce, e-commerce or ecommerce consists primarily of the distributing, buying, selling, marketing, and servicing of products or services over electronic systems such as the Internet and other computer networks." So nothing is mentioned there about the size/number of transactions ;)
    So, is your question "Has anybody built a site in HTMLDB that handles the number of transactions that Amazon handles?"
    I'd be surprised if the answer were Yes, however HTMLDB (depending on your architecture) is capable of scaling to a huge number of users.
    Do you have a particular application/project in mind, or is it just a hypothetical question?

  • How much real world difference would there be between the 1600MHz memory of a 4,1 Mac Pro and the 800MHz memory of a 3,1 Mac Pro? My main app is multitrack audio with Pro Tools. Thanks so much.

    How much real world performance difference would there be between the 1600MHz memory of a 4,1 Mac Pro and the 800MHz memory of a 3,1 Mac Pro? My main app is multitrack audio with Pro Tools. Thanks so much. The CPU speed of either one would be between 2.8GHz and 3.0GHz.

    What are the differences... firmware and build; there were tweaks to the PCIe bus itself. As a result, third-party cards and booting are better.
    Support in 5,1 firmware for more 56xx and W35/36xx processors. Also memory timing.
    The 4,1 was "64-bit boot mode optional" and 5,1 was default. I don't know if there are changes but I assume so, even if it is not reflected elsewhere or in version number.
    I don't know what a 2009 goes for today, when the 2010 is $1800.
    The 2008 of course was test bed for 64-bit UEFI and it sure seems even Lion and then ML are not as well engineered - outside of Linc who would be the least likely to have a problem.
    I would assume 2010 has better support for 8GB and even 16GB DIMMs as well as for 1333MHz.
    Nehalem family had only come out in fall 2008 and a lot of work went into making improvements well past 2009.
    If you remember, there were serious heat problems with those under 10.5.7 up through 10.6.2, even with iTunes, audio, and hyperthreading, with cores hitting and staying in the 80°C range. That I assume was both poor code (sleep does not mean poke and ask constantly) as well as changes in SMC and kernel improvements to work around it. Microcode can be patched in firmware, kernel, by drivers and by code, but it is best when the chips and core elements don't need to be.
    If someone is stretched, and can get 2009 for $1200 it might be a fine fit. That year offered the OEM GT120 which isn't really as nice and matched for today both OS and apps that rely on a GPU. And for odd reasons two such 120's don't work well in Lion+ but that is probably minor. Having the 5770 is just "nicer" though.
    There are some articles about trouble booting with PCIe SATA/SAS/SSD and less trouble with 2010. Also support for graphic card and audio I think was one of those "minor" 5770 related support issues. But shows some small changes were made there too.
    I wish someone would pre-announce DDR4 + SATA3 along with PCIe 3.x (for bandwidth and more power per rail) and, say, Ivy Bridge-E socket processors for this summer's three-year anniversary, to replace the 2010-design motherboard. But that is what is simmering on Intel's and others' drawing boards.

  • Unix Application monitor tool : Help needed

    Hi All,
    I am planning to develop an Application monitor tool for internal use.
    In a nutshell, this tool will monitor a set of services (user applications) running on an HP-UX box, from a Windows terminal, just to know whether they are running OK or not (red/green/amber!). In other words: executing a small command on Unix and sending the result back to the client.
    I am wondering whether there are any open-source tools of this kind from which I can take some help.
    Thanks in advance

    A few simple Google searches will probably pull up what you are looking for on HP Unix. For starters
    http://www.google.com/search?sourceid=navclient&ie=UTF-8&rls=GGLD,GGLD:2004-08,GGLD:en&q=open+source+monitoring+tools+HP+unix
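
    For the red/green/amber loop described above, here is a minimal sketch in Python using the third-party paramiko SSH library (my assumption; any remote-execution mechanism works): run a small probe command on the HP-UX box and map its exit status to a colour. Host names, credentials, and the probe commands are placeholders.

        import paramiko

        HOSTS = ["hpux-box1.example.com"]                         # placeholder hosts
        CHECKS = {"oracle_listener": "ps -ef | grep [t]nslsnr"}   # service -> probe

        def probe(host, user, password, command):
            # Green if the probe exits 0, red if non-zero, amber if unreachable.
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            try:
                client.connect(host, username=user, password=password, timeout=10)
                _, stdout, _ = client.exec_command(command)
                return "green" if stdout.channel.recv_exit_status() == 0 else "red"
            except Exception:
                return "amber"
            finally:
                client.close()

        for host in HOSTS:
            for service, cmd in CHECKS.items():
                print(host, service, probe(host, "monitor", "secret", cmd))

    The [t]nslsnr trick keeps the grep from matching its own process, and "command exits 0 means healthy" is the same contract most open-source monitors (Nagios plugins, for example) use.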

  • Is there any monitoring tool for web server and application server ?

    experts,
    I just want to know whether there is any monitoring utility for watching web server activity: threads, web console, session tracking, and so on.
    I am using JBoss as my application server. If you can suggest a monitoring tool for JBoss, it would be helpful to me.

    You may use jConsole, which ships with the JDK and connects to the JVM over JMX, letting you watch threads, memory, and the server's MBeans.

  • Application Monitoring Tool for PowerBuilder

    We are looking for an application monitoring tool that works with PowerBuilder Classic.  We have an application that is experiencing an issue at one client site and they are insisting there is an application issue.  I need a strong application monitoring tool that works with PowerBuilder so we can begin to monitor the performance at their site.  So far all the companies we have found to contact turn us away as soon as they hear our code is in PowerBuilder.
    I would greatly appreciate any input as to companies we can contact or recommendations/experiences anyone has had.
    Regards-
    Barbara Jean Dupuis

    I have an app that logs the execution time of all DataWindow retrieves in a table. I can then run a report that tells me which ones are slow. The base DataWindow object captures the start time in the RetrieveStart event, and the RetrieveEnd event inserts a row into a table with the DataWindow name, datetime, and elapsed time. I use the QueryPerformanceCounter Win API function to get sub-second timing.
    Something like this:
    External Functions:
    // Win32 high-resolution timer, for sub-second precision
    Function boolean QueryPerformanceFrequency(Ref Double lpFrequency) Library "kernel32.dll"
    Function boolean QueryPerformanceCounter(Ref Double lpPerformanceCount) Library "kernel32.dll"
    Instance variables:
    Double idbl_frequency   // timer ticks per second
    Double idbl_start       // tick count when the retrieve began
    Constructor event:
    QueryPerformanceFrequency(idbl_frequency)
    RetrieveStart event:
    QueryPerformanceCounter(idbl_start)
    RetrieveEnd event:
    Double ldbl_stop
    Dec{6} ldec_elapsed
    QueryPerformanceCounter(ldbl_stop)
    // elapsed seconds = tick delta divided by ticks per second
    ldec_elapsed = (ldbl_stop - idbl_start) / idbl_frequency
    INSERT INTO RETRIEVAL_ACTIVITY_LOG
               ( DATAWINDOW_NAME, RETRIEVE_DATETIME, ROW_COUNT, ELAPSED_TIME )
        VALUES ( :this.DataObject, getdate(), :rowcount, :ldec_elapsed );

  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: Financial Services Company Partners with Verizon for Software as a Service Colocation and Integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards is paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives them a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (both in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because NPS stores personal, identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility both located in Sydney. This satisfies the requirement for three copies of critical data and allows them to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.
    Quick Links
    Tech OnTap Community
    Archive
    PDF

    Dave:
    "David Scarani" <[email protected]> wrote in message
    news:3ecfc046$[email protected]...
    > I was looking for some real world "Best Practices" for deploying J2EE applications into a production WebLogic environment.
    > We are new at deploying applications to J2EE application servers and are currently debating 2 methods.
    > 1) Store all configuration (application as well as domain configuration) in properties files and use Ant to rebuild the domain every time the application is deployed.
    > 2) Have a production domain built one time, configured as required and always up and available, then use Ant to deploy only the J2EE application into the existing, running production domain.
    > I would be interested in hearing how people are doing this in their production environments, and any pros and cons of one way over the other.
    > Thanks.
    > Dave Scarani
    I am just a WLS engineer, not a customer, so my opinions have in some regards little relative weight. However, I think you'll get more mileage out of creating your config.xml once, then checking it into source control and versioning it. I would imagine that application changes are more frequent than server/domain configuration, so it seems a little heavyweight to regenerate the entire configuration every time an application is deployed/redeployed. Either way, you should check out the wlconfig Ant task.
    Cheers
    mbg

  • RAID test on 8-core with real world tasks gives 9% gain?

    Here are my results from testing the software RAID set up on my new (July 2009) Mac Pro. As you will see, although my 8-core (Octo) tested twice as fast as my new (March 2009) MacBook 2.4 GHz, the software RAID set up only gave me a 9% increase at best.
    Specs:
    Mac Pro 2x 2.26 GHz Quad-core Intel Xeon, 8 GB 1066 MHz DDR3, 4x 1TB 7200 Apple Drives.
    MacBook 2.4 GHz Intel Core 2 Duo, 4 GB 1067 MHz DDR3
    Both running OS X 10.5.7
    Canon Vixia HG20 HD video camera shooting in 1440 x 1080 resolution at “XP+” AVCHD format, 16:9 (wonderful camera)
    The tests. (These are close to my real world “work flow” jobs that I would have to wait on when using my G5.)
    Test A: import 5:00 of video into iMovie at 960x540 with thumbnails
    Test B: render and export with Sepia applied to MPEG-4 at 960x540 (a 140 MB file) in iMovie
    Test C: in QuickTime resize this MPEG-4 file to iPod size .m4v at 640x360 resolution
    Results:
    Control: MacBook as shipped
    Test A: 4:16 (four minutes, sixteen seconds)
    Test B: 13:28
    Test C: 4:21
    Control: Mac Pro as shipped (no RAID)
    Test A: 1:50
    Test B: 7:14
    Test C: 2:22
    Mac Pro config 1
    RAID 0 (no RAID on the boot drive, three 1TB drives striped)
    Test A: 1:44
    Test B: 7:02
    Test C: 2:23
    Mac Pro config 2
    RAID 10 (drives 1 and 2 mirrored, drives 3 and 4 mirrored, then both mirrors striped)
    Test A: 1:40
    Test B: 7:09
    Test C: 2:23
    My question: Why am I not seeing an increase in speed on these tasks? Any ideas?
    David
    Notes:
    I took this to the Apple store and they were expecting 30 to 50 per cent increase with the software RAID. They don’t know why I didn’t see it on my tests.
    I am using iMovie and QuickTime because I just got the Adobe CS4 and ran out of cash. And it is fine for my live music videos. Soon I will get Final Cut Studio.
    I set up the RAID with Disk Utility without trouble. (It crashed once but reopened and set up just fine.) If I check back it shows the RAID set up working.
    Activity Monitor reported “disk activity” peaks at about 8 MB/sec on both QuickTime and iMovie tasks. The CPU number (percent?) on QT was 470 (5 cores involved?) and iMovie was 294 (3 cores involved?).
    Console reported the same error for iMovie and QT:
    7/27/09 11:05:35 AM iMovie[1715] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio, 262): Symbol not found: _keymgr_get_per_threaddata
    Referenced from: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio
    Expected in: /usr/lib/libSystem.B.dylib

    The memory controllers, one for each CPU, mean that you need at least 2 x 2GB on each bank. If that is how Apple set it up, that is minimal, and the only thing I would do now with RAM is add another 2 x 2GB. That's all. And that gets you into triple-channel bandwidth.
    It could be the make and model of your hard drives. If they are Seagate, then more info would help. And not all drives are equal when it comes to RAID.
    Are you new to RAID, or is this something you've been doing? Seems you had enough to build 0+1 and do some testing. Though I'm not pleased, even if it works now, that it didn't take the first time.
    Drives - and RAIDs - improve over the first week or two, which - before committing good data to them - is the best time to torture them: run them ragged, use SpeedTools to break them in, loosen up the heads, scan for media errors, and run ZoneBench (and with 1TB, partition each drive into 1/4ths).
    If Drive A is not identical to B, then they may deal with an array even worse. And no two drives are purely identical; some vary more than others, and some are best used in hardware RAID controller environments.
    Memory: buying in groups of three is okay. But then adding 4 x 4GB? So bank A with 4 x 2GB and bank B with twice as much memory? On a Mac Pro, with 4 DIMMs on a bank you get 70% bandwidth; it drops down from tri-channel to dual-channel mode.
    I studied how to build or put together a PC for over six months, but then learned more in the month (or two) after I bought all the parts, found what didn't work, learned my own shortcomings, and ended up building TWO - one for testing, the other for a backup system. And three motherboards (the best 'rated' one also had more trouble with BIOS and fans, the cheap one was great, and the Intel board that reviewers didn't seem to "grok" has actually been the best and easiest to use and update). Hands-on wins 3:1 versus trying to learn by reading, for me; hands-on is how I need to learn. Or take the car or sailboat out for a drive or a spin and see how it fares in rough weather.
    I buy an Apple system bare bones, stock, or less, then do all the upgrades on my own, when I can afford to, gradually over months or a year.
    Each CPU needs to be fed. So they each need at least 3 x 1GB RAM. And they need raw data fed to RAM and CPU from the disk drives. And your mix of programs will each behave differently, which is why you see Barefeats test with Pro Apps, CINEBENCH, and other apps or tools.
    What did you read or do in the past that led you to think you needed a RAID setup, and about how it would affect performance? For what it's worth, your own Activity Monitor numbers hint at the answer: disk peaks of about 8 MB/sec are far below what even a single drive can sustain, so these iMovie/QuickTime jobs are CPU-bound and faster storage won't shorten them.
    Photoshop Guides to Performance:
    http://homepage.mac.com/boots911/.Public/PhotoshopAccelerationBasics2.4W.pdf
    http://kb2.adobe.com/cps/401/kb401089.html
    http://www.macgurus.com/guides/storageaccelguide.php
    4-core vs 8-core
    http://www.barefeats.com/nehal08.html
    http://www.barefeats.com/nehal03.html
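
    One way to test whether storage is even the bottleneck for a given job is to compare a crude sequential-throughput number for the RAID volume against what Activity Monitor reports while the job runs. A minimal sketch, assuming Python is available and the array is mounted at the placeholder path shown:

        import os, time

        PATH = "/Volumes/RAID/throughput_test.bin"   # hypothetical mount point
        CHUNK = b"\0" * (8 * 1024 * 1024)            # 8 MB per write
        COUNT = 128                                  # 128 x 8 MB = 1 GB total

        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(COUNT):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())       # force data to disk, not just the cache
        elapsed = time.time() - start

        print(f"{COUNT * 8 / elapsed:.0f} MB/sec sequential write")
        os.remove(PATH)

    If that number is an order of magnitude above the 8 MB/sec peaks seen during the iMovie and QuickTime tasks, the drives are idling and the CPU is the constraint.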

  • New Enterprise Manager Book on Advanced EM Techniques for the Real World

    Dear Friends,
    I am pleased to say my first EM book can be ordered now.
    Oracle Enterprise Manager Grid Control: Advanced Techniques for the Real World
    [http://www.rampant-books.com/book_1001_advanced_techniques_oem_grid_control.htm]
    Please let your colleagues and friends and clients know – it is the first book in the world to include EM 11g Grid Control. It is a great way for people to understand the capabilities of EM.
    Oracle’s Enterprise Manager Grid Control is recognized as the IT Industry’s leading Oracle database administration and management tool. It is unrivalled in its ability to monitor, manage, maintain and report on entire enterprise grids that comprise hundreds (if not thousands) of Oracle databases and servers following an approach that is consistent and repeatable.
    However, Enterprise Manager Grid Control may seem daunting even to the most advanced Oracle Administrator. The problem is you know about the power of Enterprise Manager but how do you unleash that power amongst what initially appears to be a maze of GUI-based screens that feature a myriad of links to reports and management tasks that in turn lead you to even more reports and management tasks?
    This book shows you how to unleash that power.
    Based on the Author's considerable and practical Oracle database and Enterprise Manager Grid Control experience you will learn through illustrated examples how to create and schedule RMAN backups, generate Data Guard Standbys, clone databases and Oracle Homes and patch databases across hundreds and thousands of databases. You will learn how you can unlock the power of the Enterprise Manager Grid Control Packs, PlugIns and Connectors to simplify your database administration across your company's database network, as well as the management and monitoring of important Service Level Agreements (SLAs), and the nuances of all-important real-time change control using Enterprise Manager.
    There are other books on the market that describe how to install and configure Enterprise Manager but until now they haven’t explained using a simple and illustrated approach how to get the most out of your Enterprise Manager. This book does just that.
    Covers the NEW Enterprise Manager Grid Control 11g.
    Regards,
    Porus.

    Abuse reported.

  • RMI Use in the real world

    I present an RMI module in the Java course I teach. I know enough about RMI to be able to talk about it, and write a simple classroom example, but I have never done RMI in the real world. Can anyone offer an example of what kind of applications are being developed that use RMI?
    Thanks,
    J.D.

    I can tell you about two sites.
    1. A system which allocates and dispatches crews, trucks, backpack hoses, spare socks, etc to bushfires (wildfires to you). It operates between two Government departments here in Australia. Each of those dispatchable items is a remote object and there have been up to 50,000 active in the system at a time during the hot summer months. This is a large and life-critical system.
    2. A monitoring system for cable TV channels. A piece of hardware produces a data stream representing things like channel utilization, error rates, delay, etc and this is multiplexed via RMI to a large number of operator consoles. Again this is a major and business-critical system.
    And of course every J2EE system in existence uses RMI internally, albeit almost entirely RMI/IIOP.

  • Looking for real world effects site

    Hi everybody
    This is the first time ever using this forum, so please don't let me down.
    I've been using FCS3 and doing some small jobs for a while now. Recently I was asked to film a music video, and the guy wants some professional effects; I've been scared ever since.
    Here comes my request: I need a website or ... that shows me, then explains, some real-world effects that I can use in the music video.
    I'm really tired of looking and searching.
    Thanks very much
    Salah

    Oho... so you want to go from dabbling in video with some small jobs to doing high-end effects like you see in THE MATRIX? Quite a leap. And you want to learn how to do this via online tutorials and the like?
    Sigh.
    like how to freeze frame while sweeping the camera about 180 degree around the objects (people).
    Rent THE MATRIX and watch the EXTRA called BULLET TIME. See how they use 150 still cameras linked together to go off mere milliseconds apart. High-end visual effects guys with years of training figured that out.
    i need someone to say this is a new tool in motion and you can use it to create this kind of effects ( real world, with people )
    Not gonna happen, because it doesn't exist. You want some application or plugin to somehow freeze someone and spin them around, somehow showing the OTHER SIDE of them, even though the one camera is on one side. Or you can use 5-10 cameras all shooting that person, and use something like After Effects or another 3D program that stitches that footage together to do just that. Not simple... not by a long shot.
    that's what I'm searching for. new professional stuff.
    Professionals don't usually put their trade secrets online. Otherwise any old person could do what they did, and they wouldn't get paid the big bucks to figure it out, and do it on music videos and movies. That's why you don't find anything online.
    Sorry to be snarky here...but I still wish you luck in figuring this out.
    Shane

  • First of many Server 2003 to 2012 R2 Migration - Real World Advice

    Hey everyone,
    Performing my first legacy server migration and just looking for some good "real-world" pointers on how to approach it. Since I am planning on seeing a lot of these projects in the next year or so, I was wondering who is willing to share some approaches, tools, experiences, etc.
    I would especially like to know how everyone handles application compatibility!
    Please remember to click "Mark as Answer" on the post that helps you, and to click "Unmark as Answer" if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

    Hello,
    please provide more details about the machines to migrate.
    Is this about a domain or workgroup?
    Which client OS run in the network?
    What about trusts to other domains?
    Applications: you have to test each one on the new OS version, and also contact the vendors BEFOREHAND to confirm it is supported.
    If old hardware is reused, drivers must be checked to ensure they exist for the new versions.
    Licensing is a question; for example, Remote Desktop licensing has changed with RD CALs, which must be bought new.
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://msmvps.com/blogs/mweber/
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.
