Routing a real-world IP to a computer on an internal network

I have an OS X 10.4 Server machine providing DHCP and NAT to a private subnet (192.168.2.x), and I was wondering if anyone has experience routing a real-world IP to one of the computers on that network. I have several additional real-world IPs from my DSL service provider, and I want to route traffic from one of them to one of the machines. Is this even possible?

Well, I am familiar with port forwarding, but the situation is this:
ISP gives me IP addresses A, B, and C.
The Xserve has IP A and provides NAT and DHCP service to a network of Macs.
I want to be able to assign IP B to one Mac on that network.
Is that possible, or am I going to have to run a cable from the DSL modem to the Mac that I want to give IP B to, bypassing the server?
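It should be possible without a second cable: OS X 10.4 Server's NAT service is FreeBSD's natd under the hood, and natd supports static (1:1) address mapping. A rough sketch, assuming en0 is the WAN interface, <IP-B> stands in for the spare public address, and 192.168.2.10 is the target Mac; note that Server Admin manages its own natd instance, so hand-run flags like these may conflict with (or be overwritten by) the GUI-managed service:

```shell
# Sketch only: 10.4 Server's NAT is FreeBSD natd under the hood.
# Assumes en0 is the WAN interface, <IP-B> is the spare public address,
# and 192.168.2.10 is the Mac that should receive it.

# Make the Xserve answer for the second public IP on the WAN side.
sudo ifconfig en0 alias <IP-B> netmask 255.255.255.255

# Make sure the box is actually routing between interfaces.
sudo sysctl -w net.inet.ip.forwarding=1

# Static (1:1) NAT: map everything arriving for <IP-B> to the internal Mac.
sudo natd -interface en0 -redirect_address 192.168.2.10 <IP-B>

# natd only sees packets that ipfw diverts to it.
sudo ipfw add divert natd all from any to any via en0
```

The internal Mac keeps its default gateway pointed at the Xserve's private address as usual, and any ipfw rules on the Xserve must still permit the inbound services you expect on IP B.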

Similar Messages

  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: BlackLine Partners with Verizon for Software-as-a-Service Colocation and Integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards are paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives them a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because NPS stores personally identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility, both located in Sydney. This satisfies the requirement for three copies of critical data and allows the company to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.

    Dave:
    "David Scarani" <[email protected]> wrote in message
    news:3ecfc046$[email protected]...
    > I was looking for some real-world "Best Practices" for deploying J2EE
    > applications into a production WebLogic environment.
    > We are new at deploying applications to J2EE application servers and are
    > currently debating 2 methods.
    > 1) Store all configuration (application as well as domain configuration)
    > in properties files and use Ant to rebuild the domain every time the
    > application is deployed.
    > 2) Have a production domain built one time, configured as required and
    > always up and available, then use Ant to deploy only the J2EE application
    > into the existing, running production domain.
    > I would be interested in hearing how people are doing this in their
    > production environments, and any pros and cons of one way over the other.
    > Thanks.
    > Dave Scarani
    I am just a WLS engineer, not a customer, so my opinions have in some
    regards little relative weight. However, I think you'll get more mileage
    out of building the domain once: create your config.xml, check it into
    source control, and version it. I would imagine that application changes
    are more frequent than server/domain configuration, so it seems a little
    heavyweight to regenerate the entire configuration every time an
    application is deployed/redeployed. Either way, you should check out the
    wlconfig Ant task.
    Cheers
    mbg
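The advice above (build the domain once, version config.xml, deploy only the application) amounts to a workflow like the following sketch. The paths and commit messages are hypothetical, and the `deploy` target is assumed to wrap WebLogic's wldeploy/wlconfig Ant tasks:

```shell
# One-time: build the production domain, then baseline its configuration.
cd /opt/bea/user_projects/domains/proddomain   # hypothetical domain directory
cvs add config.xml
cvs commit -m "Baseline production domain configuration"

# Per release: deploy only the application into the already-running domain,
# leaving the domain configuration untouched.
ant -f build.xml deploy

# When the domain config does change, commit it so that change is versioned too.
cvs commit -m "Domain config change" config.xml
```

The point of the design is that application releases and domain-configuration changes move on independent tracks, each with its own history in source control.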

  • Character Styles in the Real World

    Rick:
    Thanks for your efforts, and let me add my Amen to both
    subjects (on file locations and on Character styles).
    My real-world use of Character styles is a combination usage
    of Paragraph and Character styles for Notes: I have a Paragraph
    style called Note, which simply adds margins of .15in Left, 10pt
    Top, and 8pt Bottom. Within this paragraph style, multiple labels
    announce the type of Note with the use of Character styles
    NoteLabel (Navy), RecommendLabel (Teal), CAUTIONLabel (Purple), and
    WARNINGLabel (Red).
    This way, you can change the color of one or more labels
    without worrying about the paragraph settings (or vice versa).
    Also, when placing a Note inside a table cell (which might
    have limited horizontal space, especially with three or four
    columns), we still use the "Label" character styles but
    without the Notes paragraph style. This still sets off the
    text visually, without adding unnecessary extra vertical space.
    Thanks again, Rick!
    Leon


  • How to convert pixel measurement values to real-world coordinate systems (2D & 3D)

    Hello All,
    I am very new to image processing, and I am stuck at the point where I have to convert pixel measurement values into real-world X, Y, Z coordinates, so that those values can then be sent to the robot arm.
    For clarity, I have attached an image of my VI as well as the LabVIEW VI itself. You can also suggest anything that is missing from what I already did.
    I need help solving this problem. As I mentioned, I am very new to image processing, so an answer with a simple explanation would be appreciated.
    Thanks in advance
    Attachments:
    Object_detection_Pixel.vi ‏48 KB

    Hello,
    I would like to help you with this topic.
    I have one question: have you tried to run the VI in highlight mode or with probes to see what happens step by step?
    Concerning the broken wire, as far as I can see, it's just the dimensions of the output array that do not fit.
    The data is a 2-D array, but you connected a 1-D array.
    So LabVIEW will not run the VI. I posted a screenshot with the new array in the block diagram (BD). As you can see, the wire now has two lines instead of one.
    You can always add suitable FP indicators by right-clicking -> create -> indicator.
    Please let me know if this is useful for you.
    I am looking forward to hearing from you!
    Have a nice day,
    Christopher W.
    Intern Application Engineering | NI Certified LabVIEW Associate Developer (CLAD) | NI Germany
    Attachments:
    array-wire.png ‏24 KB
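For the 2-D part of the question above, once the camera is calibrated the conversion itself is just the pinhole model inverted at a known working distance. A minimal sketch in plain code (the intrinsics FX, FY, CX, CY and the plane distance are hypothetical numbers; in practice you would get them from IMAQ's calibration VIs or a checkerboard calibration, and the full 3-D case needs stereo or a depth sensor):

```java
// Back-project a pixel (u, v) to world X/Y on a plane at a known distance z
// from a calibrated pinhole camera. All numeric values are placeholders.
public class PixelToWorld {
    // Hypothetical intrinsics: focal lengths and principal point, in pixels.
    static final double FX = 800.0, FY = 800.0, CX = 320.0, CY = 240.0;

    /** Returns {X, Y, Z} in the same units as z (e.g. metres). */
    static double[] pixelToWorld(double u, double v, double z) {
        double x = (u - CX) * z / FX; // horizontal offset scaled by depth
        double y = (v - CY) * z / FY; // vertical offset scaled by depth
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        // Pixel (400, 300) measured on a work plane 0.5 m from the camera.
        double[] p = pixelToWorld(400, 300, 0.5);
        System.out.println("X=" + p[0] + " Y=" + p[1] + " Z=" + p[2]);
    }
}
```

The same two lines of arithmetic can be wired up directly in LabVIEW once the calibration constants are known; the robot-arm side then only needs the X/Y/Z triple in its own frame of reference.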

  • Java ... and the real world

    Good day,
    I'm new to the Java language; I'm basically a C++ programmer, but I wanted to get inside the Java world and find out what it is all about. I have the motivation to make it my first language.
    But there is a question: is it used as widely as the other old and new languages? What makes it a better choice to develop with; what are the advantages, I mean?
    Is it ready for a really huge application from the real world?
    Give me an example.
    Thanks in advance

    Don't hold your breath ... by that I mean it will be better in some ways and the same in most ways.
    My point is that the future of computing, its application and its growth, lies in network and inter-network applications, not in desktop and PC stand-alone applications.
    I don't know if I agree (that desktop apps aren't going to be 'the area' in the future)... just about everybody uses computers nowadays, with the vast majority being home desktop users. The desktop app market is set up in such a way that it cannot reach saturation, as 'everyone' (i.e. the majority of users) always wants the upgrades... Norton Antivirus 2004, anyone?
    Basically, I agree with you that Java is not a player in this market, and systems integration is (and will continue to be) a huge growth area. Java should play a central role in this, due in part to its platform independence. The only problem is, if in the future the only programming jobs are bolting together components and existing systems, there will be even fewer coder jobs than today. So you could say that while the future of computing is network and inter-network apps, therein lies our doom...

  • The real world battery life of a 2013 MacbookAir ?

    Here is my situation. I use my Air a lot in the field (photography, and a few other things). It is the model before the 2013.
    My question is, is the battery life of the 2013 model as good as it is said to be? I read reviews which say 7 hours or more of battery life, in real world conditions. This is a lot of time, a game changer.
    I am definitely considering buying the new model, but only if the battery life is significantly better (my current Air is only a year old). Can anyone who has used one of these fine machines give me their thoughts? Is the battery as good as claimed, and is its life span OK (I have read that the new batteries have a shorter life span)?
    Any advice appreciated.
    Ian

    Ian999
    is the battery life of the 2013 model as good as it is said to be?
    I get 12+ hours out of mine; what you are running is the main factor.
    Compared to my 2012 model, which I sold, the improvement is a bit stunning.
    Ian999
    I read reviews which say 7 hours or more of battery life
    Again, that totally depends on WHAT you are running; ...videos, a good bit less.
    I've owned 3 Airs; the battery life on the new Haswell is incredible, far better than the last 2012 model, as the apple.com chart will indicate to you.
    You're not going to find another superslim notebook/laptop with this power that does it all with THIS kind of battery life.
    Keep it plugged in when near a socket so you keep the charging cycles down on your LiPo (lithium polymer) cells/battery, but not plugged in all the time. When it is not being used for several hours, turn it off.
    And the best "tip" is: if it's near a socket, plug it in as long as you can (especially at home), since cycle count on the battery is the "miles that wear out the tires (battery)"; however, again, don't leave it plugged in all or most of the time.
    http://www.apple.com/batteries/notebooks.html
    "Apple does not recommend leaving your portable plugged in all the time."
    While cycle count is commonly seen as the "miles" on the lithium-ion cells in your MacBook, which it is, this distinction is not a fine line at all, and it is a big misconception to simply "count charge cycles".
    *A person who has, for example, 300 charge cycles on their battery and recharges at, say, 50-60% remaining of a full charge has better battery usage and care than another person who has 300 charge cycles and recharges at, say, 15% remaining.
    DoD (depth of discharge) matters far more to the wear and tear on your MacBook battery than any mere charge cycle count. *There is no set "mileage" or wear from a charge cycle, in general or in specific. As such, contrary to popular conception, counting cycles is not conclusive whatsoever; what matters is the amount of deep DoD averaged over the battery's use and charging conditions.
    (as a very rough analogy would be 20,000 hard miles put on a car vs. 80,000 good miles being something similar)
    *Contrary to some myths out there, there is protection circuitry in your MacBook, and therefore you cannot overcharge it when it is plugged in and already fully charged.
    *However, if you don't plan on using it for a few hours, turn it OFF (plugged in or otherwise). *You don't want your MacBook both always plugged in AND in sleep mode. (When portable devices are charging and in the on or sleep position, the current drawn through the device is called the parasitic load, and it alters the dynamics of the charge cycle. Battery manufacturers advise against parasitic loading because it induces mini-cycles.)
    Keeping batteries connected to a charger ensures that periodic "top-ups" do very minor but continuous damage to individual cells, hence Apple's recommendation above: "Apple does not recommend leaving your portable plugged in all the time." This is because Li-ion degrades fastest at a high state of charge. It is also the same reason new Apple notebooks are shipped with a 50% charge, not 100%.
    LiPo (lithium polymer, same as in your Macbook) batteries do not need conditioning. However...
    A lot of battery experts call the use of Lithium cells the "80% Rule" ...meaning use 80% of the charge or so, then recharge them for longer overall life.
    Never let your MacBook go into shutdown and safe mode from loss of power; you can corrupt files that way, and the batteries do not like it.
    The only quantifiable abuse of lithium cells is seen in instances where the cells are often drained very low... the key word being "often".
    The good news is that your MacBook has a safety circuit in place to ensure the battery doesn't get too low before your MacBook auto powers off. The bad news: if you let your MacBook's protection circuitry shut down your notebook at its bottom, and you refrain from charging it for a couple of days, the battery will SELF-DRAIN to zero (depending on climate and humidity)... and nothing is worse for a lithium battery than being low-discharged and then self-draining down to, and sitting at, 0.
    Contrary to what some might say, lithium batteries have an "ideal" break-in period. For the first ten cycles or so, don't discharge past 40% of the battery's capacity, the same way you don't take a new car out and speed and rev the engine hard for the first 100 or so miles.
    Proper treatment is still important. Just because LiPo batteries don't need conditioning in general does NOT mean they don't have an ideal use/recharge environment. Anything can be abused, even if it doesn't need conditioning.
    From Apple on batteries:
    http://support.apple.com/kb/HT1446
    Storing your MacBook
    If you are going to store your MacBook away for an extended period of time, keep it in a cool location (room temperature roughly 22° C or about 72° F). Make certain you have at least a 50% charge on the internal battery of your Macbook if you plan on storing it away for a few months; recharge your battery to 50% or so every six months roughly if being stored away. If you live in a humid environment, keep your Macbook stored in its zippered case to prevent infiltration of humidity on the internals of your Macbook which could lead to corrosion.

  • RMI Use in the real world

    I present an RMI module in the Java course I teach. I know enough about RMI to be able to talk about it, and write a simple classroom example, but I have never done RMI in the real world. Can anyone offer an example of what kind of applications are being developed that use RMI?
    Thanks,
    J.D.

    I can tell you about two sites.
    1. A system which allocates and dispatches crews, trucks, backpack hoses, spare socks, etc to bushfires (wildfires to you). It operates between two Government departments here in Australia. Each of those despatchable items is a remote object and there have been up to 50,000 active in the system at a time during the hot summer months. This is a large and life-critical system.
    2. A monitoring system for cable TV channels. A piece of hardware produces a data stream representing things like channel utilization, error rates, delay, etc and this is multiplexed via RMI to a large number of operator consoles. Again this is a major and business-critical system.
    And of course every J2EE system in existence uses RMI internally, albeit almost entirely RMI/IIOP.
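To make the remote-object pattern described above concrete, here is a self-contained sketch: one remote interface, one exported object, and a client-side lookup and call, all in a single JVM. The names, the Truck class, and the in-process registry are illustrative only, not taken from either system:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Each dispatchable resource (truck, crew, hose, ...) would be a remote
// object, as in the bushfire system described above.
interface Dispatchable extends Remote {
    String dispatch(String incident) throws RemoteException;
}

public class RmiSketch {
    static class Truck implements Dispatchable {
        private final String id;
        Truck(String id) { this.id = id; }
        public String dispatch(String incident) {
            return "Truck " + id + " dispatched to " + incident;
        }
    }

    public static void main(String[] args) throws Exception {
        // In-process registry on an anonymous port for the demo; a real
        // deployment runs it on a well-known host:port that both
        // departments can reach.
        Registry registry = LocateRegistry.createRegistry(0);
        Truck truck = new Truck("T-42");
        Dispatchable stub = (Dispatchable) UnicastRemoteObject.exportObject(truck, 0);
        registry.rebind("dispatch/truck/T-42", stub);

        // A client looks the object up and invokes it over RMI.
        Dispatchable remote = (Dispatchable) registry.lookup("dispatch/truck/T-42");
        System.out.println(remote.dispatch("fire-7"));

        // Unexport both so the JVM can exit cleanly.
        UnicastRemoteObject.unexportObject(truck, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```

Scaling this to tens of thousands of live remote objects, as in the bushfire system, is mostly a matter of registry design and lifecycle management; the calling pattern stays exactly this simple.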

  • Flex and large real world b2b applications

    Hi all.
    I am new to Flex. I am acting in an architect capacity to
    review the potential in Flex to become the client presentation
    layer for a classic ASP/SQL application. I am seeking a
    cross-browser, cross-platform, zero-installation,
    just-in-time-delivery, rich-user-experience web application client.
    I think I'm in the right place.
    My aim in creating this post is to solicit feedback on
    approaches and techniques on how to plan and execute a major-system
    re-write into Flex, what is in-scope and what is not, etc. With the
    Flex team putting the final touches into release 1.0, this might be
    considered a bit too soon to ask these questions, but worthy of
    response if Flex is to be taken as anything more than
    ‘something to do with that flash games thing that my kids
    play on’.
    One of the key issues for my client company, which I believe
    will be typical for many in the same situation, is to retain
    current investment in the system by re-use of the business logic
    and DB access layers. Basically the back-end is not broken and is
    well prepared for generation of XML instead of HTML. What is
    considered weak, by nature of the web browser's poor user-interface
    abilities, is the client side of the system.
    The company has a small, loyal and technically able workforce
    who are very familiar with the current system, which is written
    using classic ASP and SQL Server , HTML and JavaScript. The company
    is risk and runaway cost averse. It has held back from jumping into
    .Net for fear of getting into a costly 5 year revision cycle as
    .Net matures. The AJAX approach is another potentially fruitful
    client-only route but is likely to be a painful way forward as
    there is no major technology vendor leading the charge on
    standards. A Java approach would meet the user interface
    improvement needs but would require replacing or retraining the
    current workforce and a paradigm shift in the technology losing all
    of the middle-tier database logic.
    The ideal is a stable zero-installation web-client that can
    communicate with a server back-end consisting of the same, or
    slightly modified or wrapped business logic and db configuration as
    the current system and that could then run side-by-side during
    switchover.
    If this is possible, then risk-averse organisations have a
    way forward.
    The problem is, that from several docs and articles on
    Adobe's web site, there seems to be some careful but vague
    positioning of the capability of Flex in terms of application
    complexity and depth. Also, the demos that are available seem to
    be quite lightweight compared to real-world needs. These apps
    ‘seem’ to work in a mode where the entire application
    is downloaded in one-hit at user initiation. The assumption is that
    the user will be prepared to pay some wait time for a better UX,
    but there must be a limit.
    Question: How does one go about crafting in Flex what would
    have been a 300-page website when produced in HTML? Is this
    practical? To create a download containing the drawing instructions
    and page-logic for 300 pages would probably cause such a delay at
    user-initiation that it would not be practical.
    There are many further questions that span from here, but
    let's see what we get back from the post so far.
    Looking forward to reading responses.
    J.

    You're absolutely in the right place (IMO)...
    Cynergy Systems can help you get started. We are a Flex
    Alliance partner,
    and have developed some extremely complex RIAs with Flex.
    Contact our VP of Consulting - Dave Wolf for more info:
    [email protected]
    Paul Horan
    Cynergy Systems, Inc.
    Macromedia Flex Alliance Partner
    http://www.cynergysystems.com
    Office: 866-CYNERGY
    "TKJames" <[email protected]> wrote in
    message
    news:[email protected]...
    > Hi all.
    >
    > I am new to Flex. I am acting in an architect capacity
    to review the
    > potential
    > in Flex to become the client presentation layer for a
    > classic ASP/SQL
    > application. I am seeking a cross-browser,
    cross-platform,
    > zero-installation,
    > just-in-time-delivery, rich user experience web
    application client. I
    > think I'm
    > in the right place.
    >
    > My aim in creating this post is to solicit feedback on
    approaches and
    > techniques on how to plan and execute a major-system
    re-write into Flex,
    > what
    > is in-scope and what is not, etc. With the Flex team
    putting the final
    > touches
    > into release 1.0, this might be considered a bit too
    soon to ask these
    > questions, but worthy of response if Flex is to be taken
    > as anything more
    > than
    > "something to do with that flash games thing that my
    > kids play on".
    >
    > One of the key issues for my client company, which I
    believe will be
    > typical
    > for many in the same situation, is to retain current
    investment in the
    > system
    > by re-use of the business logic and DB access layers.
    Basically the
    > back-end is
    > not broken and is well prepared for generation of XML
    instead of HTML.
    > What is
    > considered weak, by nature of the web browsers poor user
    interface
    > abilities,
    > is the client side of the system.
    >
    > The company has a small, loyal and technically able
    workforce who are very
    > familiar with the current system, which is written using
    classic ASP and
    > SQL
    > Server, HTML and JavaScript. The company is risk- and
    > runaway-cost-averse. It
    > has held back from jumping into .Net for fear of getting
    into a costly 5
    > year
    > revision cycle as .Net matures. The AJAX approach is
    another potentially
    > fruitful client-only route but is likely to be a painful
    way forward as
    > there
    > is no major technology vendor leading the charge on
    standards. A Java
    > approach
    > would meet the user interface improvement needs but
    would require
    > replacing or
    > retraining the current workforce and a paradigm shift in
    the technology
    > losing
    > all of the middle-tier database logic.
    >
    > The ideal is a stable zero-installation web-client that
    can communicate
    > with a
    > server back-end consisting of the same, or slightly
    modified or wrapped
    > business logic and db configuration as the current
    system and that could
    > then
    > run side-by-side during switchover.
    >
    > If this is possible, then risk-averse organisations have
    > a way forward.
    >
    > The problem is, that from several docs and articles on
    Adobe's web site,
    > there
    > seems to be some careful but vague positioning of the
    capability of Flex
    > in
    > terms of application complexity and depth. Also, the
    demo's that are
    > available
    > seem to be quite lightweight compared to real-world
    needs. These apps
    > "seem" to
    > work in a mode where the entire application is
    downloaded in one-hit at
    > user
    > initiation. The assumption is that the user will be
    prepared to pay some
    > wait
    > time for a better UX, but there must be a limit.
    >
    >
    > Question: How does one go about crafting in Flex what would
    > have
    > been
    > a 300-page website when produced in HTML? Is this
    practical? To create a
    > download containing the drawing instructions and
    page-logic for 300 pages
    > would
    > probably cause such a delay at user-initiation that it
    would not be
    > practical.
    >
    > There are many further questions that span from here,
    > but let's see what we
    > get
    > back from the post so far.
    >
    > Looking forward to reading responses.
    >
    > J.
    >
    >

  • Any real-world e-commerce application using HTMLDB?

    Hi,
    Any real-world e-commerce application using HTMLDB?
    If yes, can you please provide the web links?
    Thanks!
    Steve

    That's why I said "depends on your definition"
    According to Wikipedia, the definition of e-commerce is:
    "Electronic commerce, e-commerce or ecommerce consists primarily of the distributing, buying, selling, marketing, and servicing of products or services over electronic systems such as the Internet and other computer networks."
    So nothing mentioned there about the size/number of transactions ;)
    So, is your question "Has anybody built a site in HTMLDB that handles the number of transactions that Amazon handles?"
    I'd be surprised if the answer were Yes, however HTMLDB (depending on your architecture) is capable of scaling to a huge number of users.
    Do you have a particular application/project in mind, or is it just a hypothetical question?

  • ESATA 6g, USB 3 on PCI, real world

    I am looking to buy a
    CalDigit FASTA-6GU3
    2 Port USB 3.0 & eSATA 6Gb/s Host Adapter
    to put in my 2008 Mac Pro PCI slot to run an external Blu-Ray writer and hard drive enclosure off the eSATA ports.
    Looking for info on what real-world Gbps/MBps the Mac Pro PCI slot can actually handle -- it is great if the card is 6Gbps eSATA & 5Gbps USB 3.0 -- but the bottleneck is what the PCI bus can move (also limited by what the hard drive and writer can actually read/write).
    I read in the reviews that the Mac Pro PCI bus can only manage 500Mbps tops (slightly above FireWire's 400Mbps).
    I also read the actual PCI card performance was limited to 250MBps due to its PLX chip design.
    As USB 2.0 is rated at 480Mbps -- I have to wonder what all these numbers mean.
    Can anyone put this into layman's terms -- why put a 6Gbps eSATA and USB 3.0 card in a PCI slot that is limited to 250MBps?

    Bytes will come to byte you if you think they are bits.
    Kilobytes had the same issue; so do gigabits and gigabytes.
    Roughly 10 bits to a byte on the wire (8b/10b line encoding); it is 8 bits to a byte once the encoding overhead is stripped.
    Wikipedia is your friend, more so than Google. And there are dictionaries of tech terms online.
    Some people come here talking about their "MacPro" without knowing it is a 65-lb tower; my guess is a dozen of them are really notebook users asking.
    The Sonnet E4P ~$250 w/ 4 x eSATA ports is nice for adding direct connect to SATA devices, more than enough bandwidth.
    Your Mac has those four internal SATA2 drive bays. They are all on a single shared controller bus, so while in theory you can get 275MB/sec max from each, the bus does NOT deliver a full 1GB/sec (let alone 1.2GB/sec = 1200MB/sec) but rather about 750MB/sec total - still more than enough for most disk drives. Only with SSDs did that ceiling become apparent.
    Here, roughly, is what your Mac's Serial ATA drives and bus offer in bandwidth.
    180MB/sec is about the max of a 4TB enterprise drive, or WD's latest 10K VelociRaptors.
    SATA2 SSDs were about 275MB/sec as well, and newer ones hit 450-550MB/sec (writes are often less with SSDs; reads tend to do very well). But where they really shine is very high I/Os per second, along with seek/response times well under a millisecond instead of the 4-12 milliseconds of a spinning drive.
    Geeks were happy with base-2 numbering - binary 1s and 0s, and base-16 - whereas Apple and others now define one gigabyte as 1000MB instead of the binary 1024MB. The dumbing down of math, specs, and the i-ification of consumer devices: iDevice, iOS, and consumers who never liked those 1024-based binary and hex numbering systems.
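    The bits-versus-bytes rule of thumb above can be put into numbers. A minimal sketch, assuming 8b/10b line encoding (about 10 line bits per data byte) and a guessed ~85% protocol efficiency; both factors are estimates, not spec values:

```python
# Rough conversion from a nominal line rate (Gbit/s) to usable MB/s.
# Assumptions: 8b/10b encoding (10 line bits per data byte) and an
# estimated 85% protocol efficiency; real throughput varies by device.

def usable_mb_per_s(line_rate_gbps, bits_per_byte=10, efficiency=0.85):
    """Estimate usable MB/s from a nominal line rate in Gbit/s."""
    return line_rate_gbps * 1000 / bits_per_byte * efficiency

for name, gbps in [("eSATA 6G", 6.0), ("USB 3.0", 5.0), ("FireWire 800", 0.8)]:
    print(f"{name}: ~{usable_mb_per_s(gbps):.0f} MB/s usable")
```

    So a "6 Gb/s" port tops out around 500 MB/s of payload even before the card's ~250 MB/s PLX-bridge limit or the drive's own media rate come into play.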

  • Cisco ASA 5505 Routing between internal networks

    Hi,
    I am new to the Cisco ASA and have been configuring my new firewall, but one thing has been bothering me. I cannot get internal networks and the routing between them to work as I would like. The goal is to set up four networks and control access between them with ACLs:
    1. Outside
    2. DMZ
    3. ServerNet1
    4. Inside
    ASA version is 9.1, and I have been reading about two different ways of handling IP routing with this: NAT exemption, or not configuring NAT at all and letting normal IP routing handle the internal networks. No matter how I configure it, with or without NAT, I cannot get access from the inside network to the DMZ, or from ServerNet1 to the DMZ. The strange thing is that I can access services from the DMZ towards Inside and ServerNet1 if the access list allows it. For instance, the DNS server is on the Inside network and the DMZ uses it just fine.
    Here is the running conf:
    interface Ethernet0/0
    switchport access vlan 20
    interface Ethernet0/1
    switchport access vlan 20
    interface Ethernet0/2
    switchport access vlan 19
    interface Ethernet0/3
    switchport access vlan 10
    switchport trunk allowed vlan 10,19-20
    switchport trunk native vlan 1
    interface Ethernet0/4
    switchport access vlan 10
    interface Ethernet0/5
    switchport access vlan 10
    switchport trunk allowed vlan 10-11,19-20
    switchport trunk native vlan 1
    switchport mode trunk
    interface Ethernet0/6
    switchport access vlan 10
    switchport trunk allowed vlan 10-11,19-20
    switchport trunk native vlan 1
    switchport mode trunk
    interface Ethernet0/7
    switchport access vlan 10
    interface Vlan10
    nameif inside
    security-level 90
    ip address 192.168.2.1 255.255.255.0
    interface Vlan11
    nameif ServerNet1
    security-level 100
    ip address 192.168.4.1 255.255.255.0
    interface Vlan19
    nameif DMZ
    security-level 10
    ip address 192.168.3.1 255.255.255.0
    interface Vlan20
    nameif outside
    security-level 0
    ip address dhcp setroute
    ftp mode passive
    clock timezone EEST 2
    clock summer-time EEDT recurring last Sun Mar 3:00 last Sun Oct 4:00
    object network obj_any
    subnet 0.0.0.0 0.0.0.0
    object network obj-192.168.2.0
    subnet 192.168.2.0 255.255.255.0
    object network obj-192.168.3.0
    subnet 192.168.3.0 255.255.255.0
    object network DNS
    host 192.168.2.10
    description DNS Liikenne
    object network Srv2
    host 192.168.2.10
    description DC, DNS, DNCP
    object network obj-192.168.4.0
    subnet 192.168.4.0 255.255.255.0
    object network ServerNet1
    subnet 192.168.4.0 255.255.255.0
    object-group protocol TCPUDP
    protocol-object udp
    protocol-object tcp
    object-group network RFC1918
    object-group network InternalNetworks
    network-object 192.168.2.0 255.255.255.0
    network-object 192.168.3.0 255.255.255.0
    object-group service DM_INLINE_SERVICE_1
    service-object tcp destination eq domain
    service-object udp destination eq domain
    service-object udp destination eq nameserver
    service-object udp destination eq ntp
    object-group service DM_INLINE_TCP_1 tcp
    port-object eq www
    port-object eq https
    port-object eq ftp
    port-object eq ftp-data
    object-group service rdp tcp-udp
    description Microsoft RDP
    port-object eq 3389
    object-group service DM_INLINE_TCP_2 tcp
    port-object eq ftp
    port-object eq ftp-data
    port-object eq www
    port-object eq https
    object-group service DM_INLINE_SERVICE_2
    service-object tcp destination eq domain
    service-object udp destination eq domain
    object-group network DM_INLINE_NETWORK_1
    network-object object obj-192.168.2.0
    network-object object obj-192.168.4.0
    access-list dmz_access_in extended permit ip object obj-192.168.3.0 object obj_any
    access-list dmz_access_in extended deny ip any object-group InternalNetworks
    access-list DMZ_access_in extended permit object-group TCPUDP object obj-192.168.3.0 object DNS eq domain
    access-list DMZ_access_in extended permit object-group TCPUDP object obj-192.168.3.0 object-group DM_INLINE_NETWORK_1 object-group rdp
    access-list DMZ_access_in extended deny ip any object-group InternalNetworks
    access-list DMZ_access_in extended permit tcp object obj-192.168.3.0 object obj_any object-group DM_INLINE_TCP_2
    access-list inside_access_in extended permit ip object obj-192.168.2.0 object-group InternalNetworks
    access-list inside_access_in extended permit object-group TCPUDP object obj-192.168.2.0 object obj_any object-group rdp
    access-list inside_access_in extended permit tcp object obj-192.168.2.0 object obj_any object-group DM_INLINE_TCP_1
    access-list inside_access_in extended permit object-group DM_INLINE_SERVICE_1 object Srv2 object obj_any
    access-list inside_access_in extended permit object-group TCPUDP object obj-192.168.2.0 object obj-192.168.3.0 object-group rdp
    access-list ServerNet1_access_in extended permit object-group DM_INLINE_SERVICE_2 any object DNS
    access-list ServerNet1_access_in extended permit ip any any
    pager lines 24
    logging enable
    logging asdm informational
    mtu ServerNet1 1500
    mtu inside 1500
    mtu DMZ 1500
    mtu outside 1500
    no failover
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-711-52.bin
    no asdm history enable
    arp timeout 14400
    no arp permit-nonconnected
    nat (inside,DMZ) source static obj-192.168.2.0 obj-192.168.2.0 destination static obj-192.168.2.0 obj-192.168.2.0 no-proxy-arp
    object network obj_any
    nat (inside,outside) dynamic interface
    nat (DMZ,outside) after-auto source dynamic obj_any interface destination static obj_any obj_any
    nat (ServerNet1,outside) after-auto source dynamic obj-192.168.4.0 interface
    access-group ServerNet1_access_in in interface ServerNet1
    access-group inside_access_in in interface inside
    access-group DMZ_access_in in interface DMZ
    timeout xlate 3:00:00
    timeout pat-xlate 0:00:30
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    user-identity default-domain LOCAL
    aaa authentication ssh console LOCAL
    http server enable
    http 192.168.2.0 255.255.255.0 inside
    http 192.168.4.0 255.255.255.0 ServerNet1
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
    crypto ipsec security-association pmtu-aging infinite
    crypto ca trustpool policy
    telnet timeout 5
    ssh 192.168.4.0 255.255.255.0 ServerNet1
    ssh 192.168.2.0 255.255.255.0 inside
    ssh timeout 5
    console timeout 0
    dhcpd auto_config outside
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    class-map inspection_default
    match default-inspection-traffic
    policy-map type inspect dns preset_dns_map
    parameters
      message-length maximum client auto
      message-length maximum 512
    policy-map global_policy
    class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect rsh
      inspect rtsp
      inspect esmtp
      inspect sqlnet
      inspect skinny
      inspect sunrpc
      inspect xdmcp
      inspect sip
      inspect netbios
      inspect tftp
      inspect ip-options
      inspect icmp
    service-policy global_policy global
    prompt hostname context
    no call-home reporting anonymous

    Hi Jouni,
    Yep, Finnish would be good also =)
    In front of the ASA is a DSL modem. On the trunk ports is a Hyper-V host; every VM has its VLAN ID defined at the VM level, and everything is working well on that end. There is also a WLAN Access Point on one of the ASA ports, with its management portal address on the DMZ, which is what I have been testing against (192.168.3.4).
    If I configure dynamic PAT from inside to the DMZ, then traffic starts to work from inside to all hosts on the DMZ, but that's not the right way to do it, so no shortcuts =)
    Here is the conf now; it still doesn't work:
    interface Ethernet0/0
    switchport access vlan 20
    interface Ethernet0/1
    switchport access vlan 20
    interface Ethernet0/2
    switchport access vlan 19
    interface Ethernet0/3
    switchport access vlan 10
    switchport trunk allowed vlan 10,19-20
    switchport trunk native vlan 1
    interface Ethernet0/4
    switchport access vlan 10
    interface Ethernet0/5
    switchport access vlan 10
    switchport trunk allowed vlan 10-11,19-20
    switchport trunk native vlan 1
    switchport mode trunk
    interface Ethernet0/6
    switchport access vlan 10
    switchport trunk allowed vlan 10-11,19-20
    switchport trunk native vlan 1
    switchport mode trunk
    interface Ethernet0/7
    switchport access vlan 10
    interface Vlan10
    nameif inside
    security-level 90
    ip address 192.168.2.1 255.255.255.0
    interface Vlan11
    nameif ServerNet1
    security-level 100
    ip address 192.168.4.1 255.255.255.0
    interface Vlan19
    nameif DMZ
    security-level 10
    ip address 192.168.3.1 255.255.255.0
    interface Vlan20
    nameif outside
    security-level 0
    ip address dhcp setroute
    ftp mode passive
    clock timezone EEST 2
    clock summer-time EEDT recurring last Sun Mar 3:00 last Sun Oct 4:00
    object network obj_any
    subnet 0.0.0.0 0.0.0.0
    object network obj-192.168.2.0
    subnet 192.168.2.0 255.255.255.0
    object network obj-192.168.3.0
    subnet 192.168.3.0 255.255.255.0
    object network DNS
    host 192.168.2.10
    description DNS Liikenne
    object network Srv2
    host 192.168.2.10
    description DC, DNS, DNCP
    object network obj-192.168.4.0
    subnet 192.168.4.0 255.255.255.0
    object network ServerNet1
    subnet 192.168.4.0 255.255.255.0
    object-group protocol TCPUDP
    protocol-object udp
    protocol-object tcp
    object-group network RFC1918
    object-group network InternalNetworks
    network-object 192.168.2.0 255.255.255.0
    network-object 192.168.3.0 255.255.255.0
    object-group service DM_INLINE_SERVICE_1
    service-object tcp destination eq domain
    service-object udp destination eq domain
    service-object udp destination eq nameserver
    service-object udp destination eq ntp
    object-group service DM_INLINE_TCP_1 tcp
    port-object eq www
    port-object eq https
    port-object eq ftp
    port-object eq ftp-data
    object-group service rdp tcp-udp
    description Microsoft RDP
    port-object eq 3389
    object-group service DM_INLINE_TCP_2 tcp
    port-object eq ftp
    port-object eq ftp-data
    port-object eq www
    port-object eq https
    object-group service DM_INLINE_SERVICE_2
    service-object tcp destination eq domain
    service-object udp destination eq domain
    object-group network DM_INLINE_NETWORK_1
    network-object object obj-192.168.2.0
    network-object object obj-192.168.4.0
    object-group network DEFAULT-PAT-SOURCE
    description Default PAT source networks
    network-object 192.168.2.0 255.255.255.0
    network-object 192.168.3.0 255.255.255.0
    network-object 192.168.4.0 255.255.255.0
    access-list dmz_access_in extended permit ip object obj-192.168.3.0 object obj_any
    access-list dmz_access_in extended deny ip any object-group InternalNetworks
    access-list DMZ_access_in extended permit object-group TCPUDP object obj-192.168.3.0 object DNS eq domain
    access-list DMZ_access_in extended permit object-group TCPUDP object obj-192.168.3.0 object-group DM_INLINE_NETWORK_1 object-group rdp
    access-list DMZ_access_in extended deny ip any object-group InternalNetworks
    access-list DMZ_access_in extended permit tcp object obj-192.168.3.0 object obj_any object-group DM_INLINE_TCP_2
    access-list inside_access_in extended permit ip object obj-192.168.2.0 object-group InternalNetworks
    access-list inside_access_in extended permit object-group TCPUDP object obj-192.168.2.0 object obj_any object-group rdp
    access-list inside_access_in extended permit tcp object obj-192.168.2.0 object obj_any object-group DM_INLINE_TCP_1
    access-list inside_access_in extended permit object-group DM_INLINE_SERVICE_1 object Srv2 object obj_any
    access-list inside_access_in extended permit object-group TCPUDP object obj-192.168.2.0 object obj-192.168.3.0 object-group rdp
    access-list ServerNet1_access_in extended permit object-group DM_INLINE_SERVICE_2 any object DNS
    access-list ServerNet1_access_in extended permit ip any any
    pager lines 24
    logging enable
    logging asdm informational
    mtu ServerNet1 1500
    mtu inside 1500
    mtu DMZ 1500
    mtu outside 1500
    no failover
    icmp unreachable rate-limit 1 burst-size 1
    asdm image disk0:/asdm-711-52.bin
    no asdm history enable
    arp timeout 14400
    no arp permit-nonconnected
    nat (any,outside) after-auto source dynamic DEFAULT-PAT-SOURCE interface
    access-group ServerNet1_access_in in interface ServerNet1
    access-group inside_access_in in interface inside
    access-group DMZ_access_in in interface DMZ
    timeout xlate 3:00:00
    timeout pat-xlate 0:00:30
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    user-identity default-domain LOCAL
    aaa authentication ssh console LOCAL
    http server enable
    http 192.168.2.0 255.255.255.0 inside
    http 192.168.4.0 255.255.255.0 ServerNet1
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart
    crypto ipsec security-association pmtu-aging infinite
    crypto ca trustpool policy
    telnet timeout 5
    ssh 192.168.4.0 255.255.255.0 ServerNet1
    ssh 192.168.2.0 255.255.255.0 inside
    ssh timeout 5
    console timeout 0
    dhcpd auto_config outside
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    class-map inspection_default
    match default-inspection-traffic
    policy-map type inspect dns preset_dns_map
    parameters
      message-length maximum client auto
      message-length maximum 512
    policy-map global_policy
    class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect rsh
      inspect rtsp
      inspect esmtp
      inspect sqlnet
      inspect skinny
      inspect sunrpc
      inspect xdmcp
      inspect sip
      inspect netbios
      inspect tftp
      inspect ip-options
      inspect icmp
    service-policy global_policy global
    prompt hostname context
    no call-home reporting anonymous
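    For what it's worth, when NAT is involved between internal interfaces on ASA 9.x, the usual pattern is an identity ("NAT exemption") twice-NAT rule whose source and destination objects both translate to themselves, with the destination object naming the remote network. This is only a sketch reusing the object names from the config above, not a verified fix for this particular setup:

```
! Hypothetical NAT exemption for inside <-> DMZ traffic (ASA 9.1 twice NAT):
nat (inside,DMZ) source static obj-192.168.2.0 obj-192.168.2.0 destination static obj-192.168.3.0 obj-192.168.3.0 no-proxy-arp route-lookup
```

    Note that in the first config posted, the `nat (inside,DMZ)` line used obj-192.168.2.0 for both the source and the destination, which never matches traffic actually headed to 192.168.3.0/24.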

  • Real World Adobe Photoshop CS3 (Real World)

    Real World Adobe Illustrator CS3 (Real World) - Mordy Golding;
    Real World Adobe Photoshop CS3 (Real World) - David Blatner;
    are these books at a higher level than the "Classroom in a Book"
    series?

    > but the part about DNG has convinced me to dive deeper in it and give it a go
    When working in a Bridge/Camera Raw/Photoshop workflow, I tend to ingest the actual native raw files, do initial selects and gross edits and basic metadata work via templates and THEN do the conversion to DNG. I'll use the DNG as my working files and the original raws as an archive. I tend to do this more with studio shoots. I tend to use Lightroom when I'm on the road.
    When working in Lightroom first, I tend to ingest and convert to DNG upon ingestion (when on the road working on a laptop) while keeping a backup copy - usually working on a pair of external FW drives, one for the working DNG files and one for a backup of the original raws. Then, when I get back to the studio, I make sure I write to XMP, export the new shoot as a catalog, and import it into my studio copy of Lightroom. Then I'll also cache the newly imported images in Bridge so I can get at the images in either Bridge or Lightroom.
    It's a bit of a chore now since I do work in Camera Raw a lot (well, DOH, I had to to do the book!) but I also keep all my digital files in a Lightroom catalog which is now up to about 74K...
    Then, depending on what I'll need to do, I'll either work out of LR or Bridge/Camera Raw...
    If I'm doing a high-end final print, I generally process out of Camera Raw as a Smart Object and stack multiple layers of CR processed images...if I'm working on a batch of images I'll work out of Lightroom since the workflow seems to suit me better.
    In either event, I've found DNG to be better than native raws with sidecar files.

  • RAID test on 8-core with real world tasks gives 9% gain?

    Here are my results from testing the software RAID set up on my new (July 2009) Mac Pro. As you will see, although my 8-core (Octo) tested twice as fast as my new (March 2009) MacBook 2.4 GHz, the software RAID set up only gave me a 9% increase at best.
    Specs:
    Mac Pro 2x 2.26 GHz Quad-core Intel Xeon, 8 GB 1066 MHz DDR3, 4x 1TB 7200 Apple Drives.
    MacBook 2.4 GHz Intel Core 2 Duo, 4 GB 1067 MHz DDR3
    Both running OS X 10.5.7
    Canon Vixia HG20 HD video camera shooting in 1440 x 1080 resolution at “XP+” AVCHD format, 16:9 (wonderful camera)
    The tests. (These are close to my real world “work flow” jobs that I would have to wait on when using my G5.)
    Test A: import 5:00 of video into iMovie at 960x540 with thumbnails
    Test B: render and export with Sepia applied to MPEG-4 at 960x540 (a 140 MB file) in iMovie
    Test C: in QuickTime resize this MPEG-4 file to iPod size .m4v at 640x360 resolution
    Results:
    Control: MacBook as shipped
    Test A: 4:16 (four minutes, sixteen seconds)
    Test B: 13:28
    Test C: 4:21
    Control: Mac Pro as shipped (no RAID)
    Test A: 1:50
    Test B: 7:14
    Test C: 2:22
    Mac Pro config 1
    RAID 0 (no RAID on the boot drive, three 1TB drives striped)
    Test A: 1:44
    Test B: 7:02
    Test C: 2:23
    Mac Pro config 2
    RAID 10 (drives 1 and 2 mirrored, drives 3 and 4 mirrored, then both mirrors striped)
    Test A: 1:40
    Test B: 7:09
    Test C: 2:23
    My question: Why am I not seeing an increase in speed on these tasks? Any ideas?
    David
    Notes:
    I took this to the Apple store and they were expecting 30 to 50 per cent increase with the software RAID. They don’t know why I didn’t see it on my tests.
    I am using iMovie and QuickTime because I just got the Adobe CS4 and ran out of cash. And it is fine for my live music videos. Soon I will get Final Cut Studio.
    I set up the RAID with Disk Utility without trouble. (It crashed once but reopened and set up just fine.) If I check back it shows the RAID set up working.
    Activity Monitor reported “disk activity” peaks at about 8 MB/sec on both QuickTime and iMovie tasks. The CPU number (percent?) on QT was 470 (5 cores involved?) and iMovie was 294 (3 cores involved?).
    Console reported the same error for iMovie and QT:
    7/27/09 11:05:35 AM iMovie[1715] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHD Audio, 262): Symbol not found: _keymgr_get_per_threaddata
    Referenced from: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio
    Expected in: /usr/lib/libSystem.B.dylib

    The memory controllers, one for each CPU, mean that you need at least 2 x 2GB on each bank. If that is how Apple set it up, that is minimal, and the only thing I would do now with RAM is add another 2 x 2GB. That's all. And get into triple-channel bandwidth.
    It could be the make and model of your hard drives. If they are Seagate, then more info would help. And not all drives are equal when it comes to RAID.
    Are you new to RAID, or is it something you've been doing? It seems you had enough drives to build 0+1 and do some testing. Not ideal, though, even if it works now, that it didn't take the first time.
    Drives - and RAIDs - improve over the first week or two, which, before committing good data to them, is the best time to torture them: run them ragged, use SpeedTools to break them in, loosen up the heads, scan for media errors, and run ZoneBench (and with 1TB drives, partition each into 1/4ths).
    If drive A is not identical to drive B, they may behave even worse in an array. No two drives are purely identical, some vary more than others, and some are best used in hardware RAID controller environments.
    Memory: buying in groups of three is okay. But then adding 4 x 4GB? That puts bank A at 4 x 2GB and bank B with twice as much memory. On a Mac Pro, with 4 DIMMs on a bank you get about 70% of the bandwidth; it drops from triple-channel to dual-channel mode.
    I studied how to build a PC for over six months, but then learned more in the month or two after I bought all the parts, found what didn't work, and learned my own shortcomings - and ended up building TWO, one for testing and the other as a backup system. And three motherboards (the best-'rated' one also had more trouble with BIOS and fans, the cheap one was great, and the Intel board that reviewers didn't seem to grok has actually been the best and easiest to use and update). Hands-on wins 3:1 versus trying to learn by reading, for me; hands-on is what I need to learn. Like taking a car or sailboat out for a spin to see how it fares in rough weather.
    I buy an Apple system bare bones, stock, or less, then do all the upgrades on my own, when I can afford to, gradually over months, year.
    Each CPU needs to be fed. So each needs at least 3 x 1GB RAM. And they need raw data fed to RAM and the CPUs from the disk drives. And each of your programs will behave differently, which is why you see Barefeats test with Pro Apps, CINEBENCH, and other apps and tools.
    What did you read or do in the past that led you to think you need RAID setup, and for how it would affect performance?
    Photoshop Guides to Performance:
    http://homepage.mac.com/boots911/.Public/PhotoshopAccelerationBasics2.4W.pdf
    http://kb2.adobe.com/cps/401/kb401089.html
    http://www.macgurus.com/guides/storageaccelguide.php
    4-core vs 8-core
    http://www.barefeats.com/nehal08.html
    http://www.barefeats.com/nehal03.html
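    One more angle on the original question: Activity Monitor's ~8 MB/sec disk-activity peak, against drives that can sustain on the order of 100 MB/sec each, suggests these iMovie/QuickTime jobs are CPU-bound rather than disk-bound, so faster storage can only speed up the small slice of time actually spent on I/O. A rough sketch of that ceiling; the 10% I/O fraction and the 2x RAID speedup are assumed numbers, not measurements:

```python
# Best-case overall speedup from faster disks when only a fraction of
# wall-clock time is spent waiting on I/O (Amdahl's law).

def amdahl_speedup(io_fraction, io_speedup):
    """Overall speedup when only io_fraction of runtime benefits."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# Assumed numbers: ~10% of encode time is disk I/O; RAID 0 doubles
# sequential disk throughput. The result is a single-digit-percent
# overall gain, broadly in line with the ~9% observed above.
print(f"{amdahl_speedup(0.10, 2.0):.3f}x overall")
```

    By the same arithmetic, even an infinitely fast array could not speed up a 10%-I/O workload by more than about 11%.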

  • EJB 3.0 in a real world open source project. Great for coding reference!

    If you are interested in seeing EJB 3.0 implemented in a real-world project (not just examples), or if you are interested in learning how to use it, I suggest you take a look at the open source project Overactive Logistics.
    It has been written entirely using EJB 3.0 (session and entity beans). I found it very helpful in solving several technical situations I was facing.
    You can get more information at:
    http://overactive.sourceforge.net

    Thanks for the pointer, I will check it out.
    hth,
    Sean

  • PI Implementation Examples - Real World Scenarios

    Hello friends,
    In the near future, I'm going to give a presentation to our customers on SAP PI. To convince them to use this product, I need examples from the real world that have already been implemented successfully.
    I have done a basic search but still don't have enough material on the topic; I don't know where to look, actually. Could you post any examples you have at hand? Thanks a lot.
    Regards,
    Gökhan

    Hi,
    Please find here with you the links
    SAP NetWeaver Exchange Infrastructure Business to Business and Industry Standards Support (2004)
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90052f25-bc11-2a10-ad97-8f73c999068e
    SAP Exchange Infrastructure 3.0: Simple Use Cases
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20800429-d311-2a10-0da2-d1ee9f5ffd4f
    Exchange Infrastructure - Integrating Heterogeneous Systems with Ease
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1ebea490-0201-0010-faad-a32dd753d009
    SAP Network Blog: Re-Usable frame work in XI
    /people/sravya.talanki2/blog/2006/01/10/re-usable-frame-work-in-xi
    SAP NetWeaver in the Real World, Part 1 - Overview
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/20456b29-bb11-2a10-b481-d283a0fce2d7
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3172d290-0201-0010-2b80-c59c8292dcc9
    SAP NetWeaver in the Real World, Part 3 - SAP Exchange Infrastructure
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9ae9d490-0201-0010-108b-d20a71998852
    SAP NetWeaver in the Real World, Part 4 - SAP Business Intelligence
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1f42d490-0201-0010-6d98-b18a00b57551
    Real World Composites: Working With Enterprise Services
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d0988960-b1c1-2a10-18bd-dafad1412a10
    Thanks
    Swarup
