Best design structure for 4710s

We are implementing ACE 4710s in our core network.
What would be the best design structure from a simplicity point of view?
One interface VLAN for VIPs connected on the front end to the core, and a back-end VLAN for the servers (routed mode)?
Should you have more than one interface VLAN for servers and/or clients?
At which point would you need multiple contexts, besides an Admin context?
Should you put a management interface on each context?

We are implementing ACE 4710s in our core network.
--What would be the best design structure from a simplicity point of view?
The design would vary based on your specific requirements. To decide which layer of the network (core/aggregation) to connect it to, you would have to check the traffic flows and see what suits you best.
In terms of ACE design, if source-IP visibility is not a requirement, one-arm mode with source NAT lets non-load-balanced traffic bypass the ACE. If it is a requirement, you can use PBR, but that complicates things a little because you now have to manage the routers for changes on the ACE. With routed mode, the design is simple and the servers point to the ACE as their default gateway. You need to weigh the pros and cons of each option against your specific requirements.
--One interface VLAN for VIPs connected on the front end to the core, and a back-end VLAN for the servers (routed mode)?
Yes - for routed mode that is the way to do it. In this case, in addition to load balancing, the ACE routes non-load-balanced traffic to and from the servers.
--Should you have more than one interface VLAN for servers and/or clients?
It depends on your subnets. If you have separate subnets for your web/app/DB servers, then it is a good idea to give each its own interface VLAN. You may also want to think about separate contexts if you want complete isolation between the tiers.
--At which point would you need multiple contexts, besides an Admin context?
As far as possible, keep the Admin context for administration only. Create one or more separate contexts for load balancing and manage the resources allocated to them.
--Should you put a management interface on each context?
Yes - that gives you the ability to have different users manage only their own contexts.
Hope that helps.

Similar Messages

  • What is the best design pattern for this problem?

    No code to go with the question. I am trying to settle on the best design pattern for the problem before I code. I want to use an Object Oriented approach.
    I have included a basic UML diagram of what I was thinking so far. 
    Stated simply, I have three devices; Module, Wired Modem, and Wireless Modem.
    In the Device Under Test parent class, I have put the attributes that are variable from device to device, but common to all of them.
    In the child classes, I have put the attributes that are fixed for every copy of that device; the attribute set itself is common across device types. I was planning to use controls in the class definition with the data set to a default value, since it doesn't change for each serial number of that device. For example, a Module will always have a Device Type ID of 1. These values are used to query the database.
    An example query would be [DHR].[GetDeviceActiveVersions] '39288', 1, '4/26/2012 12:18:52 PM'
    The '1' is the device type ID, the 39288 is the serial number, and the return would be "A000" or "S002", for example.
    So, I would be pulling the Serial Number and Device Type ID from the Device Under Test parent and child, and passing them to the Database using a SQL string stored in the control of the Active Versions child class of Database.
    The overall idea is that the same data is used to send multiple queries to the database, and the various data I receive back is then evaluated for pass or fail, and for date order.
    What I can't settle on is the approach. Should it be a Strategy pattern, a Chain of Command pattern, a Decorator pattern, or something else?
    Ideas?
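    Not LabVIEW, but a minimal Java sketch of the structure described above (the class names, the WiredModem type ID, and the query wrapper are hypothetical; only Module's ID of 1 and the stored-procedure call come from the post): the parent carries the per-unit attributes, each child fixes its device type ID, and a Strategy-style query object builds the SQL from that common pair.

        // Hypothetical sketch: names are illustrative, not the poster's actual classes.
        abstract class DeviceUnderTest {
            private final String serialNumber;          // varies per unit

            protected DeviceUnderTest(String serialNumber) {
                this.serialNumber = serialNumber;
            }

            public String getSerialNumber() { return serialNumber; }

            // Fixed per device type, so each child supplies a constant.
            public abstract int getDeviceTypeId();
        }

        class Module extends DeviceUnderTest {
            Module(String serialNumber) { super(serialNumber); }
            @Override public int getDeviceTypeId() { return 1; }
        }

        class WiredModem extends DeviceUnderTest {
            WiredModem(String serialNumber) { super(serialNumber); }
            @Override public int getDeviceTypeId() { return 2; }    // assumed ID
        }

        // Strategy-style query: each concrete query builds its own SQL from the
        // common (serial number, device type ID) pair.
        interface DeviceQuery {
            String buildSql(DeviceUnderTest dut, String timestamp);
        }

        class ActiveVersionsQuery implements DeviceQuery {
            @Override
            public String buildSql(DeviceUnderTest dut, String timestamp) {
                return String.format("[DHR].[GetDeviceActiveVersions] '%s', %d, '%s'",
                        dut.getSerialNumber(), dut.getDeviceTypeId(), timestamp);
            }
        }

    With a Strategy per query type, the device classes stay free of SQL, and adding a new query means adding one class rather than touching the hierarchy.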

    elrathia wrote:
    Hi Ben,
    I haven't much idea of how override works and when you would use it and why. I'm the newest of the new here. 
    Good. At least you will not be smacking me with an OPPer dOOPer hammer if I make some grammatical mistake.
    You may want to look at this thread in the BreakPoint where I tried to help Cory get a handle on Dynamic Dispatching, with an example of two classes that inherit from a common parent and invoke override VIs to do the same thing but with wildly varying results.
    The example uses a class of "Numeric" and a sibling class "Text", and they both implement an Add method.
    It is dirt simple and Cory did a decent job of explaining it.
    It may be just the motivation you are looking for.
    have fun!
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Best design idea for parallel DAQ (via GPIB), PID control and watchdog system

    Hello!
    I am a starter programmer in LabView; I already understand the basic ideas and design patterns, like producer/consumer, event state machines, functional global variables, and so on - so the basics...
    I started a project a few weeks ago, and first I wrote and tested all of the necessary subVIs for my project: the ones initializing my GPIB devices (a Keithley mux and a DVM) and controlling/reading out the measured voltage/resistance values from them, a subVI for my static analog output card (I have to drive 7 analog output channels, 4 of which have to be PID-controlled based on some of the measured values from the GPIB devices), and another subVI for the analog card, sending TTL watchdog signals out to my experiment every 2 seconds.
    Any idea is welcomed about suggesting the best design pattern for my project.
    The main features of my program would be:
    After starting the user interface the program starts with a start-up state and initializes the DAQmx channels and the GPIB devices. The program starts to read out different values from the GPIB devices, including 4 temperature values every 3 seconds.
    In the same loop (?), using PID control, the program sets the DC voltage values of 4 channels (3 to heating wires, 1 to a Peltier heat pump) on the static waveform analog output card; the remaining 3 values are constants.
    I have to send a digital TTL watchdog signal to some relays from the same output card, changing its state every 2 seconds (not the same rate as the GPIB values are read out).
    When the 4 temperatures and the power values for the heating wires have equilibrated after a few hours, the program goes into another state and signals to the user that the measurement can be started. During a measurement, I write out all of the measured values to a TDMS file, and there are also some basic calculations "on the fly", like a moving average.
    After the measurement is done, the user can swap samples, and the program goes back into the above state, waiting for equilibration, and so on...
    Do you think I should use a Producer/Consumer pattern with events? Or someone would recommend me a better design idea?
    Thanks very much!
    ps.: I read out lots of values from the Keithleys, so I put them in a cluster. I made a small test VI (attached without the GPIB comm subVIs), just to test the GPIB comm. So this is the current state of my project. (All other subVIs are tested and ready to use, as I wrote above, like the DAQmx output part.)
    Attachments:
    GUI_Calorimeter_control_image_v2.vi ‏284 KB

    Okay,
    I think it is a better approach if I work on it first and ask afterwards.
    I am going in small steps. To start, I just want to make a DAQ analog output loop and, in parallel, a watchdog loop sending out a TTL signal every 2 seconds.
    The main loop in my project will iterate roughly every 10-15 seconds.
    In the watchdog loop I want to check whether my main loop has hung (in that case the PID control stops, and the danger is that the output voltages stay on).
    After some reading, I have decided to use a functional global variable.
    I have attached the VI; can someone give me advice on what would be a good solution for this purpose? (A rough non-LabVIEW sketch of the idea follows below the attachments.)
    (I know what I messed up with the shift registers in the bottom loop is a bit silly; it was some experimenting.)
    Thanks in advance!
    Attachments:
    watchdog_funcglobvariable.vi ‏12 KB
    Global 1_stop.vi ‏4 KB
    analog_output+Watchdog.vi ‏57 KB
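    Not LabVIEW, but a minimal Java analogue of the functional-global-variable watchdog idea (the loop periods and the staleness threshold are illustrative assumptions): the main loop publishes a heartbeat timestamp into shared state, and an independent watchdog loop keeps toggling the TTL output only while that heartbeat stays fresh.

        import java.util.concurrent.atomic.AtomicLong;

        // Shared "functional global": main loop writes its last-iteration time, watchdog reads it.
        public class WatchdogSketch {
            private static final AtomicLong lastHeartbeatMs = new AtomicLong(System.currentTimeMillis());

            public static void main(String[] args) {
                // Main control loop: ~10-15 s per iteration (DAQ + PID in the real program).
                Thread mainLoop = new Thread(() -> {
                    while (true) {
                        // ... read GPIB values, run PID, write analog outputs ...
                        lastHeartbeatMs.set(System.currentTimeMillis());   // heartbeat
                        sleep(10_000);
                    }
                });

                // Watchdog loop: toggles the TTL line every 2 s, but only while the main
                // loop's heartbeat is recent; otherwise the line goes static so the
                // external relay logic can drop the outputs.
                Thread watchdog = new Thread(() -> {
                    boolean ttlState = false;
                    while (true) {
                        long age = System.currentTimeMillis() - lastHeartbeatMs.get();
                        if (age < 20_000) {                  // main loop still alive
                            ttlState = !ttlState;
                            // ... write ttlState to the digital output channel ...
                        }
                        sleep(2_000);
                    }
                });

                mainLoop.start();
                watchdog.start();
            }

            private static void sleep(long ms) {
                try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }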

  • What is the best design pattern for top-down ws development..?

    Hi,
    What is the best design pattern for top-down development + wsdl2service?


  • Best data structure for tablemodel

    Hi everybody
    I'm building an application that has a JTable which gets its data from a ResultSet. I'd like to know which data structure is best for this situation. I've seen some examples that pass the data from the ResultSet to a Vector or an ArrayList, but if you have a scrollable ResultSet, why would I instantiate another data structure?
    Thanks very much
    Mauricio

    I've used TableModels that get their data directly from the ResultSet and had some success, but you are at the mercy of the implementation of the ResultSet: if it is slow, your table model will be slow too. I would suggest that you use an array of Object arrays, i.e. Object[][], as your data structure and copy your result set into it.
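    A minimal sketch of that suggestion (column handling kept deliberately simple): copy the ResultSet into an Object[][] once, then serve the JTable from a small AbstractTableModel over that array.

        import javax.swing.table.AbstractTableModel;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // Copies the ResultSet into Object[][] once, so the JTable no longer depends
        // on the ResultSet implementation (or its scrolling performance).
        public class ResultSetTableModel extends AbstractTableModel {
            private final String[] columnNames;
            private final Object[][] data;

            public ResultSetTableModel(ResultSet rs) throws SQLException {
                ResultSetMetaData meta = rs.getMetaData();
                int cols = meta.getColumnCount();

                columnNames = new String[cols];
                for (int c = 0; c < cols; c++) {
                    columnNames[c] = meta.getColumnLabel(c + 1);
                }

                List<Object[]> rows = new ArrayList<>();
                while (rs.next()) {
                    Object[] row = new Object[cols];
                    for (int c = 0; c < cols; c++) {
                        row[c] = rs.getObject(c + 1);
                    }
                    rows.add(row);
                }
                data = rows.toArray(new Object[0][]);
            }

            @Override public int getRowCount()               { return data.length; }
            @Override public int getColumnCount()            { return columnNames.length; }
            @Override public String getColumnName(int col)   { return columnNames[col]; }
            @Override public Object getValueAt(int r, int c) { return data[r][c]; }
        }

    Usage is then just new JTable(new ResultSetTableModel(rs)), after which the ResultSet and its connection can be closed.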

  • Best design pattern for a large number of options?

    Hi,
    I'm faced with the following straightforward problem, but I'm having trouble coming up with a design solution. Any suggestions?
    Two items in the database can be related in three possible ways (temp replacement, permanent replacement, substitute). Each item has three possible stock levels. The user can select one of two items.
    This comes out to 54 different prompts that need to be provided to the user (for example: "The entered item has a preferable temp replacement available that is in stock, sell instead of the entered item?", "The entered item is out of stock, but has a substitute item available, use instead?", etc.)
    Does anybody have a suggestion of a good design pattern to use? In the legacy system it was implemented with a simple case statement, but I'd like to use something more maintainable.
    If anybody has any suggestions, I'd appreciate it.
    thanks,

    In the legacy system it was implemented with a simple case statement, but I'd like to use something more maintainable.
    Is it ever likely to change? If not, then a case statement is pretty maintainable.
    How is the data retrieved? I'm guessing it's a decision tree: if the desired object is in stock, return it, otherwise look for a permanent substitute, &c. In this case, perhaps you have a retrieval object that implements a state machine internally: each call to the retrieval operation causes a transition to the next-best state if unable to fulfill the request.
    If you do retrieve all possible data in a single query (and I hope not, as that would almost certainly be very inefficient), then think of some sort of "preference function" that could be used to order the results, and store them in a TreeMap or TreeSet ordered by that function.
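    If the case statement does become unwieldy, a table-driven lookup is one maintainable middle ground. A minimal Java sketch (the enum names, the key shape, and the sample message are hypothetical): each prompt is a value keyed by the combination of relationship, the two stock levels, and the user's selection, so the 54 messages become data rather than branches.

        import java.util.HashMap;
        import java.util.Map;

        // Table-driven prompt lookup: one entry per combination instead of a 54-branch case.
        public class PromptCatalog {
            enum Relationship { TEMP_REPLACEMENT, PERMANENT_REPLACEMENT, SUBSTITUTE }
            enum StockLevel   { IN_STOCK, LOW_STOCK, OUT_OF_STOCK }
            enum Selection    { ENTERED_ITEM, RELATED_ITEM }

            // Value-based key for one combination (3 x 3 x 3 x 2 = 54 possibilities).
            record PromptKey(Relationship rel, StockLevel entered, StockLevel related, Selection chosen) { }

            private final Map<PromptKey, String> prompts = new HashMap<>();

            public PromptCatalog() {
                // In practice the 54 entries could be loaded from the database or a resource file.
                prompts.put(new PromptKey(Relationship.TEMP_REPLACEMENT, StockLevel.OUT_OF_STOCK,
                                          StockLevel.IN_STOCK, Selection.ENTERED_ITEM),
                        "The entered item is out of stock, but a temp replacement is in stock; sell it instead?");
            }

            public String promptFor(Relationship rel, StockLevel entered, StockLevel related, Selection chosen) {
                return prompts.getOrDefault(new PromptKey(rel, entered, related, chosen),
                        "No prompt defined for this combination.");
            }
        }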

  • What is the best data structure for loading an enterprise Power BI site?

    Hi folks, I'd sure appreciate some help here!
    I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Service hypercubes with a whole raft of MDX for analytics.  Those puppies would sit up and beg when you asked them to deliver up goodies
    to SSRS or PowerView.
    But Power BI is a whole new game for me.  
    Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?  
    Should I be running Data Management Gateway and exposing each table in my RDW individually?
    Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact? 
    I guess my real question, folks, is: what's the optimum way of exposing data to the Power BI cloud?
    And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required before the data is suitable to expose to Power BI?
    Or, to put it another way, is it not the case that you need a clean and properly structured data warehouse before the data is ready to be massaged and presented by Power BI?
    I'd sure value your thoughts and opinions,
    Cheers, Donna
    Donna Kelly

    Dear All,
    My original question was: 
    what's the optimum way of exposing data to the Power BI cloud?
    Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
    Before I do that, though, let me summarise a few points:
    Melissa said “My initial thoughts:  I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
    the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
    Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
    how to combine pre-built elements”
    I had commented “. . . end users and data modelling don't mix . . . self-service so
    far has been mostly a bust”.
    Here at Redwing, we give out a short White Paper on Business Intelligence Reporting.  It goes to clients and anyone else who wants one.  The heart
    of the Paper is the Reporting Pyramid, which states:  Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time
    For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
    For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
    demand.
    For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
    their audiences.
    For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
    intricate analyses as needed.
    On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
    • 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it’s done right.
    • Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
    I cannot just expose raw data and tell everyone to get on with it.  That way lies madness!
    I think that clean and well-structured data is a prerequisite for delivering business intelligence. 
    Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
    a data warehouse, which is the physical expression of the dimensional model.
    Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
    Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
    That way, all calculations, KPIs, definitions, and even field names are consistent, because they all come from the single source of the truth and not from spreadmart hell.
    So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
    That’s fine for the general case, but what about Power BI?  Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
    I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
    The question boils down to what data structures should go down that pipe. 
    According to
    Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views.  I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I). 
    I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid.  I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
    Fact is, I cannot for the life of me see what advantages DMG/PQ
    has over just telling corporate users to go directly to the Cube Perspective they want, that has already all the right calcs, KPIs, security, analytics, field names . . . and most importantly, is already modelled correctly!
    If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
    heck out of that to produce all the reporting I’d ever want.  It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
    Of course, your enterprise might not
    have a DW, just a heterogeneous mass of dirty unstructured data.  If that’s the case,
    choosing Power BI data structures is the least of your problems!  :-)
    Cheers, Donna
    Donna Kelly

  • What are the best design requisites for a Query design?

    Hi Guru's
    Could you please let me know which item executes first when you run a query - a calculated key figure, a restricted key figure, a formula, etc.? How does it affect query performance? What are the design requisites for optimising query performance?
    Thanks in advance,
    rgds,
    Srini.

    Hi Srinivas....
    The design of queries can have a significant impact on the performance.
    Sometimes long running queries are the result of poor design, not just the amount
    of data. There are a number of design techniques that developers can use to
    provide optimal query performance.
    For example, in most cases characteristics should be placed in the rows and key
    figures in the columns. A characteristic should only be used in the columns in
    certain circumstances (like time). Characteristics having potentially many values
    (such as 0MATERIAL) must not be added to the columns without a filter or
    variables. Alternatively, it can be integrated into the query as a free characteristic
    – enabling it to be used in navigation.
    If a relatively detailed time characteristic, such as calendar day (0CALDAY) is
    added to the rows, the more aggregated time characteristics (such as calendar
    month (0CALMONTH)) and calendar year (0CALYEAR) should also be included
    in the free characteristics of the query. For most reports, a current period of time
    (current month, previous or current calendar year) is useful. For this reason, the
    use of variables is particularly relevant for time characteristics.
    To improve query performance:
    1) Variables and drop down lists can improve query performance by making the
    data request more specific. This is very important for queries against Data Store
    Objects and InfoSets, which are not aggregated like InfoCubes.
    2) When using restricted key figures, filters or selections, try to avoid the Exclusion
    option if possible. Only characteristics in the inclusion can use database indexes.
    Characteristics in the exclusion cannot use indexes.
    3) When a query is run against a MultiProvider, all of the InfoProviders in that
    MultiProvider are read. The selection of the InfoProviders in a MultiProvider
    query can be controlled by restricting the virtual characteristic 0INFOPROVIDER
    to only read the InfoProviders that are needed. In this way, there will be no
    unnecessary database reads.
    4) Defining calculated key figures at the InfoProvider level instead
    of the query level will improve query runtime performance, but may add
    time for data loads.
    5) Cell calculation by means of the cell editor generates separate queries at query
    runtime. Be cautious with cell calculations.
    6) Customer-specific code is necessary for virtual key figures and characteristics.
       Check Code in Customer Exits.
    7) Using graphics in queries, such as charts, can have a performance impact.
    Hope this helps.........
    Regards,
    Debjani.........

  • Looking for best design approach for moving data from one db to another.

    We have a very simple requirement to keep 2 tables synched up that live in 2 different databases. There can be up to 20K rows of data we need to synch up (nightly).
    The current design:
    The BPEL process queries the Source DB, puts the results into memory, and inserts into the Target DB. An out-of-memory exception occurs (no surprise).
    I am proposing a design change to get the data in 1000 row chunks, something like this:
    1. Get next 1000 records from Source DB. (managed through query)
    2. Put into memory (OR save to file).
    3. Read from memory (OR from a file).
    4. Save into Target DB.
    Question is:
    1. Is this a good approach, and if so, does SOA have any built-in mechanisms to handle this? I would think so, since I believe this is a common problem - we don't want to reinvent the wheel.
    2. Is it better to put records into memory or writing to a file before inserting into the Target DB?
    The implementation team told me this would have to be done with Java code, but I would think this would be out of the box functionality. Is that correct?
    I am a SOA newby, so please let me know if there is a better approach.
    Thank you very much for your valued input.
    wildeman

    Hi,
    After going through your question, the first thing that came to my mind is: what would be the size of the 20K records?
    If this is going to be huge, then even the 1000-row logic might take significant time to do the transfer, and I think even writing it to a file will not be efficient enough.
    If the size is not huge, then your solution will probably work. But I think you will need to decide on the chunk size based on how well your BPEL process performs. Possibly you can try different sizes and test the performance to arrive at an optimal value.
    But in case the size is going to be huge, you might want to consider using an ETL implementation. Oracle ODI does provide such features out of the box with high performance.
    On the other hand, implementing the logic using the DBAdapter should be more efficient than Java code.
    Hope this helps. Please do share your thoughts/suggestions.
    Thanks,
    Patrick
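    For the plain-JDBC variant of the chunking idea, here is a minimal sketch (the table names, column names, and chunk size are placeholder assumptions; as noted above, the DBAdapter or an ETL tool such as ODI may well remain the better fit): rows are streamed from the source with a modest fetch size and written to the target in fixed-size batches, so the full 20K rows are never held in memory at once.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        // Streams rows from the source and writes them to the target in fixed-size
        // batches, keeping memory use bounded regardless of the total row count.
        public class ChunkedCopy {
            private static final int CHUNK_SIZE = 1000;

            public static void copy(Connection source, Connection target) throws SQLException {
                target.setAutoCommit(false);

                try (Statement select = source.createStatement();
                     PreparedStatement insert = target.prepareStatement(
                             "INSERT INTO target_table (id, payload) VALUES (?, ?)")) {

                    select.setFetchSize(CHUNK_SIZE);    // hint to stream instead of loading everything
                    try (ResultSet rs = select.executeQuery("SELECT id, payload FROM source_table")) {
                        int pending = 0;
                        while (rs.next()) {
                            insert.setLong(1, rs.getLong("id"));
                            insert.setString(2, rs.getString("payload"));
                            insert.addBatch();
                            if (++pending == CHUNK_SIZE) {
                                insert.executeBatch();
                                target.commit();        // commit one chunk at a time
                                pending = 0;
                            }
                        }
                        if (pending > 0) {
                            insert.executeBatch();
                            target.commit();
                        }
                    }
                }
            }
        }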

  • What is the best design pattern for multiple instruments?

    I have several PXI cards (DMM, O-scope, etc.) and I want to create a master panel that can control all of the instruments individually, though not necessarily all at the same time (turning "instruments" on and off at will while the master VI continues to run). Is there one design pattern (master/slave, producer/consumer, etc.) that works better than another for this type of master panel? Are there other alternatives for this type of problem (VI Server, etc.)?
    I was hoping that I could save a bunch of time if I could start on the right path right off the bat!
    Thanks,
    -Warren
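    Not LabVIEW, but a small Java sketch of the general shape (the interface and class names are hypothetical): each instrument implements a common start/stop interface and the master panel keeps a registry of them, so individual instruments can be switched on and off while the rest keep running.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Illustrative master-panel registry for independently controllable instruments.
        public class MasterPanel {
            interface Instrument {
                void start();
                void stop();
            }

            private final Map<String, Instrument> instruments = new LinkedHashMap<>();

            public void register(String name, Instrument instrument) {
                instruments.put(name, instrument);
            }

            public void turnOn(String name)  { instruments.get(name).start(); }
            public void turnOff(String name) { instruments.get(name).stop(); }

            public static void main(String[] args) {
                MasterPanel panel = new MasterPanel();
                // Stand-in instrument; a real one would wrap the PXI driver session.
                panel.register("DMM", new Instrument() {
                    public void start() { System.out.println("DMM running"); }
                    public void stop()  { System.out.println("DMM stopped"); }
                });
                panel.turnOn("DMM");     // other registered instruments keep their own state
                panel.turnOff("DMM");
            }
        }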


  • Best design ap for beginners?

    Hi,
    I'm looking for a design app to do simple designs (labels, simple flyers, etc.). I've used Illustrator and InDesign in the past, but they were too complex. Anyone know of something? I'm looking for something like what Aperture is to Photoshop, but for design.
    Thanks

    Hi;
    You can find all related information at:
    tahiti.oracle.com
    Similar issues have been mentioned here many times. Please see this search.
    It should answer all of your book-related questions.
    PS: Please don't forget to change the thread status to answered, if possible, when you believe your thread has been answered; it prevents other forum users from losing time while searching for open questions which have not been answered. Thanks for understanding.
    Regards,
    Helios

  • What's the best program structure for multi-channel Averaged FFTs?

    I have been successfully using an 8-channel FFT analyzer with a separate Averaged FFT Spectrum (Mag-Phase) VI for each channel, but I now need to expand my channel count from 8 to 16. All channels are simultaneously read into a buffer to maintain phase relationships. Do I need to have 16 copies of the Averaged FFT Spectrum VI in my diagram to maintain the averages of each channel, or can I utilize one FFT VI somehow within a For Loop? If a For Loop can be used, how can the averaged spectrums be maintained for each of the 16 channels?

    You should be able to use a single FFT VI in a For Loop. If your data is in a 2D array representation with each row representing a channel, and you leave indexing enabled, you can pass your data array into the loop and it will strip off each row automatically. You can then pass each row of data to the FFT and pass the results out to the boundary of the For Loop. This will build an array containing the FFT results for each row (channel).
    Marc
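    As a non-LabVIEW illustration of keeping 16 per-channel averages with one spectrum routine (the FFT itself is a placeholder here; only the bookkeeping is shown), a short Java sketch: loop over the rows of the 2D buffer, compute each row's spectrum with the same routine, and fold it into that channel's running average.

        // Per-channel averaging: one spectrum routine reused for every row (channel)
        // of the 2D acquisition buffer, with a running average kept per channel.
        public class ChannelAverager {
            private final double[][] averagedSpectra;   // one running average per channel
            private int buffersSeen = 0;

            public ChannelAverager(int channels, int spectrumLength) {
                averagedSpectra = new double[channels][spectrumLength];
            }

            // data[channel][sample]: each row is one channel of the simultaneous acquisition.
            public void addBuffer(double[][] data) {
                buffersSeen++;
                for (int ch = 0; ch < data.length; ch++) {
                    double[] spectrum = magnitudeSpectrum(data[ch]);   // same routine for every channel
                    for (int bin = 0; bin < spectrum.length; bin++) {
                        // Incremental mean keeps each channel's average up to date.
                        averagedSpectra[ch][bin] += (spectrum[bin] - averagedSpectra[ch][bin]) / buffersSeen;
                    }
                }
            }

            public double[] averageFor(int channel) {
                return averagedSpectra[channel].clone();
            }

            // Placeholder for the real FFT/magnitude computation (e.g. a DSP library call).
            private double[] magnitudeSpectrum(double[] samples) {
                return new double[averagedSpectra[0].length];
            }
        }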

  • Design structure for systematized backup - Pt. 1

    Introduction: I am planning to set up a system for regular back up of my wife's G4 (mirrored drive door). It originally came with both System 9 and System X all mixed together on the internal hard drive. We can choose which system to boot into in Startup preferences. Additional internal drives have been added in the past and RAID is configured.
    System X has been upgraded to 10.4.11. System 9 remains as a separate drive on the desktop and we want to preserve the ability to boot directly into System 9 to use some old software.
    The G4 has Firewire 400 and we just bought a new external 1TB quad interface drive. I would like to use the drive to create a new external 10.5 boot drive, and also for backup purposes. I have been reading various threads in here, but some apparently conflicting information has me confused.
    Issues: I'll start my questions with Part 1 - preparing the new external drive. I am contemplating partitioning the new drive into several pieces to allow for several different "drives" to be available on the desktop. One partition would be for a new installation of System 10.5; one would be for a clone backup of the internal drive System 10.4.11; one would be for a clone backup of the internal drive System 9; one would be for Time Machine backup of the new System 10.5.
    Questions: (A) Can each of the different partitions be bootable (except the Time Machine one)? I've seen language in here that suggests that partitions can't be made bootable, only a volume, and thus only a one-partition drive can be made bootable? (I would like to be able to have at least three different bootable drives with different systems.)
    (B) Can some of the partitions be made APM bootable while others are GUID bootable? Or does the type of format apply across all partitions to the entire physical drive? (I would like to be able to also have a backup of my Mac mini Leopard and Snow Leopard drives on the 1TB external as well.)
    (C) Can I use my Mac mini with System 10.5.8 to do the initial partitioning of the 1TB external, so that non destructive addition of more partitions in the future may be possible?
    I have other questions but will handle those in later posts after I resolve these initial matters. Thanks to all for your time and attention.
    Message was edited by: Randy Knowles

    Randy Knowles wrote:
    (1) You said - "When formatting the drive, be sure to pick the Disk Utility option to load OS 9 drivers." Is this necessary on every partition, or only those that will be backups for the System 9 direct boot on the internal drive?
    I haven't done that lately, so I can't remember which one, but you'll only see that option in one of those two places.
    (2) When you speak of "formatting the drive", does this mean each partition when I prepare it for use, or do I do that only once for the entire 1TB? I thought I formatted each partition?
    I was using the term "formatting the drive" in the loose sense of preparing a drive for use. You're right that you first partition a drive, then format each partition.
    (3) You said "Intel Macs can boot from APM volumes. If I remember correctly, the only thing you can't do with that combination is to install OS X. That doesn't prevent making a clone." I thought Intel Macs can only boot from System X drives (volumes)? If I make a clone of my internal start up drive (System X) part of the purpose would be to have an external drive I could boot from if my internal drive failed?
    You're right that Intel Macs can only boot from OS X volumes. That's separate from the issue of what the "partition map scheme" has to be. If you're worried that my advice isn't accurate, then just try an APM scheme.
    See also this thread: http://forums.macrumors.com/showthread.php?t=253567

  • Design structure for systematized backup - Pt. 2

    Introduction: I am planning to set up a system for regular back up of my wife's G4 (mirrored drive door). It originally came with both System 9 and System X all mixed together on the internal hard drive. We can choose which system to boot into in Startup preferences. Additional internal drives have been added in the past and RAID is configured.
    System X has been upgraded to 10.4.11. System 9 remains as a separate drive on the desktop and we want to preserve the ability to boot directly into System 9 to use some old software. The G4 has Firewire 400 and we just bought a new external 1TB quad interface drive (Newertech miniStack v3).
    Current Plan: Partition the new drive to create different bootable backups for different system versions, both for the G4 and also my Mac mini. Additional partitions for new fresh install of 10.5 for G4 and for Time Machine backups of same. Eg., current plan is:
    Partition 1 = Bootable backup of System 10.4.11 from G4;
    Partition 2 = Same of System 9 from G4;
    Partition 3 = Same of System 10.5 from mini;
    Partition 4 = Same of System 10.6 from mini;
    Partition 5 = New install of 10.5 for G4;
    Partition 6 = Time Machine backups from partition 5 (yes I know this is less than ideal and separate media is preferable).
    Partition 7 = Remaining unused space
    Issues: I have been reading various threads in here, but some questions remain. Regarding options for backup software:
    I have Data Backup, which came on an external drive that I purchased some time ago (now upgraded to v3.1.1). I've used this in the past to make one-time clone backups of bootable systems to external drives.
    In addition, the new 1TB Drive came with a copy of Carbon Copy Cloner v3.3.7, which is new to me. I've used it once to clone a bootable System 10.5.8 to a Flash Drive as an emergency startup.
    I have also seen a number of references in messages to SuperDuper as another popular backup utility.
    Questions: (A) Are there any significant differences between the features of these programs in making bootable backups of internal startup drives?
    (B) Are any of these programs significantly faster than the others (parameters being equal)?
    (C) Do all programs have an efficient means to update bootable clone backups to keep them current with the source (not separate multiple "incremental" files)?
    (D) How long can updates be made to a clone before it's advisable to redo a fresh new backup from scratch?
    (E) Is there anything else I should know in comparing which program to use?
    I have other questions but will handle those in later posts after I resolve these matters. Thanks to all for your time and attention.

    Randy Knowles wrote:
    (A) Are there any significant differences between the features of these programs in making bootable backups of internal startup drives?
    I was hoping to give someone else a chance to contribute, but since a week has gone by, I'll make some comments.
    As I believe I mentioned earlier, I have no knowledge of Data Backup. The other two programs are very similar in function. I believe that Carbon Copy Cloner is "donationware". To get the ability of copying only changed files, you have to pay around US$28 for Super Duper!, while that feature seems to be enabled already on Carbon Copy Cloner.
    (B) Are any of these programs significantly faster than the others (parameters being equal)?
    I'm unaware of benchmark results for those programs. The ability to copy only changed files can save hours per backup.
    (C) Do all programs have an efficient means to update bootable clone backups to keep them current with the source (not separate multiple "incremental" files)?
    (D) How long can updates be made to a clone before it's advisable to redo a fresh new backup from scratch?
    I have clones that have been in use for months or even years. The only time I tend to "remake" a clone is when I'm switching to a larger disk. Even then, I sometimes clone the clone to the new drive.
    (E) Is there anything else I should know in comparing which program to use?
    I'd still suggest reviewing the electronic book that I believe I mentioned in an earlier thread.

  • Best practices or design framework for designing processes in OSB(11g)

    Hi all,
    We have been working with Oracle 10g; now, in the new project, we are going to use SOA Suite 11g. For 10g we designed our services very similarly to the AIA framework, but in 11g, since OSB is introduced, we are not able to fit the AIA framework exactly, because OSB has a structure different from ESB.
    Can anybody suggest best practices or a design framework for designing processes in OSB or the 11g SOA Suite?

    http://download.oracle.com/docs/cd/E12839_01/integration.1111/e10223/04_osb.htm
    http://www.oracle.com/technology/products/integration/service-bus/index.html
    Regards,
    Anuj
