Design structure for systematized backup - Pt. 1

Introduction: I am planning to set up a system for regular backup of my wife's G4 (mirrored drive door). It originally came with both System 9 and System X all mixed together on the internal hard drive. We can choose which system to boot into in the Startup Disk preference pane. Additional internal drives have been added in the past and RAID is configured.
System X has been upgraded to 10.4.11. System 9 remains as a separate drive on the desktop and we want to preserve the ability to boot directly into System 9 to use some old software.
The G4 has FireWire 400 and we just bought a new external 1TB quad-interface drive. I would like to use the drive to create a new external 10.5 boot drive, and also for backup purposes. I have been reading various threads in here, but some apparently conflicting information has me confused.
Issues: I'll start my questions with Part 1 - preparing the new external drive. I am contemplating partitioning the new drive into several pieces to allow for several different "drives" to be available on the desktop. One partition would be for a new installation of System 10.5; one would be for a clone backup of the internal drive System 10.4.11; one would be for a clone backup of the internal drive System 9; one would be for Time Machine backup of the new System 10.5.
Questions: (A) Can each of the different partitions be bootable (except the Time Machine one)? I've seen language in here that suggests that partitions can't be made bootable, only a volume, and thus only a one-partition drive can be made bootable? (I would like to be able to have at least three different bootable drives with different systems.)
(B) Can some of the partitions be made APM bootable while others are GUID bootable? Or does the type of format apply across all partitions to the entire physical drive? (I would like to be able to also have a backup of my Mac mini Leopard and Snow Leopard drives on the 1TB external as well.)
(C) Can I use my Mac mini with System 10.5.8 to do the initial partitioning of the 1TB external, so that non-destructive addition of more partitions may be possible in the future?
I have other questions but will handle those in later posts after I resolve these initial matters. Thanks to all for your time and attention.

Randy Knowles wrote:
(1) You said - "When formatting the drive, be sure to pick the Disk Utility option to load OS 9 drivers." Is this necessary on every partition, or only those that will be backups for the System 9 direct boot on the internal drive?
I haven't done that lately, so I can't remember which one, but you'll only see that option in one of those two places.
(2) When you speak of "formatting the drive", does this mean each partition when I prepare it for use, or do I do that only once for the entire 1TB? I thought I formatted each partition?
I was using the term "formatting the drive" in the loose sense of preparing a drive for use. You're right that you first partition a drive, then format each partition.
(3) You said "Intel Macs can boot from APM volumes. If I remember correctly, the only thing you can't do with that combination is to install OS X. That doesn't prevent making a clone." I thought Intel Macs can only boot from System X drives (volumes)? If I make a clone of my internal start up drive (System X) part of the purpose would be to have an external drive I could boot from if my internal drive failed?
You're right that Intel Macs can only boot from OS X volumes. That's separate from the issue of what the "partition map scheme" has to be. If you're worried that my advice isn't accurate, then just try an APM scheme.
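If you'd rather experiment in Terminal than in Disk Utility, here is a rough sketch of the partitioning step. The disk identifier, volume names, and sizes below are placeholders (check yours with "diskutil list" first), and note that the OS 9 drivers option is a Disk Utility checkbox that this sketch doesn't cover:

    # Find the external drive's identifier first (assumed to be disk2 below)
    diskutil list

    # Repartition the whole 1TB drive with an Apple Partition Map and
    # four Journaled HFS+ volumes -- THIS ERASES THE DRIVE.
    # Names and sizes are only examples; adjust to taste.
    diskutil partitionDisk disk2 4 APMFormat \
        JHFS+ "Leopard"     100G \
        JHFS+ "TigerClone"  200G \
        JHFS+ "OS9Clone"    50G  \
        JHFS+ "TimeMachine" 600G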
See also this thread: http://forums.macrumors.com/showthread.php?t=253567

Similar Messages

  • Design structure for systematized backup - Pt. 2

    Introduction: I am planning to set up a system for regular backup of my wife's G4 (mirrored drive door). It originally came with both System 9 and System X all mixed together on the internal hard drive. We can choose which system to boot into in the Startup Disk preference pane. Additional internal drives have been added in the past and RAID is configured.
    System X has been upgraded to 10.4.11. System 9 remains as a separate drive on the desktop and we want to preserve the ability to boot directly into System 9 to use some old software. The G4 has FireWire 400 and we just bought a new external 1TB quad-interface drive (NewerTech miniStack v3).
    Current Plan: Partition the new drive to create different bootable backups for different system versions, both for the G4 and also my Mac mini, plus additional partitions for a new fresh install of 10.5 for the G4 and for Time Machine backups of same. E.g., the current plan is:
    Partition 1 = Bootable backup of System 10.4.11 from G4;
    Partition 2 = Same of System 9 from G4;
    Partition 3 = Same of System 10.5 from mini;
    Partition 4 = Same of System 10.6 from mini;
    Partition 5 = New install of 10.5 for G4;
    Partition 6 = Time Machine backups from partition 5 (yes, I know this is less than ideal and separate media is preferable);
    Partition 7 = Remaining unused space.
    Issues: I have been reading various threads in here, but some questions remain. Regarding options for backup software:
    I have Data Backup, which came on an external drive that I purchased some time ago (now upgraded to v3.1.1). I've used this in the past to make one-time clone backups of bootable systems to external drives.
    In addition, the new 1TB drive came with a copy of Carbon Copy Cloner v3.3.7, which is new to me. I've used it once to clone a bootable System 10.5.8 to a flash drive as an emergency startup.
    I have also seen a number of references in messages to SuperDuper as another popular backup utility.
    Questions: (A) Are there any significant differences between the features of these programs in making bootable backups of internal startup drives?
    (B) Are any of these programs significantly faster than the others (parameters being equal)?
    (C) Do all programs have an efficient means to update bootable clone backups to keep them current with the source (not separate multiple "incremental" files)?
    (D) How long can updates be made to a clone before it's advisable to redo a fresh new backup from scratch?
    (E) Is there anything else I should know in comparing which program to use?
    I have other questions but will handle those in later posts after I resolve these matters. Thanks to all for your time and attention.

    Randy Knowles wrote:
    (A) Are there any significant differences between the features of these programs in making bootable backups of internal startup drives?
    I was hoping to give someone else a chance to contribute, but since a week has gone by, I'll make some comments.
    As I believe I mentioned earlier, I have no knowledge of Data Backup. The other two programs are very similar in function. I believe that Carbon Copy Cloner is "donationware". To get the ability to copy only changed files, you have to pay around US$28 for SuperDuper!, while that feature seems to be enabled already in Carbon Copy Cloner.
    (B) Are any of these programs significantly faster than the others (parameters being equal)?
    I'm unaware of benchmark results for those programs. The ability to copy only changed files can save hours per backup.
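    To illustrate what "copy only changed files" buys you, here is the same idea expressed with Apple's bundled rsync. The paths are placeholders, this is only a sketch of the concept (not necessarily what either program does internally), and a truly bootable clone takes more care than this -- which is exactly what these utilities handle for you:

        # Update an existing clone in place: copy only what changed since
        # last time, and delete files that no longer exist on the source.
        # -a preserves ownership/permissions/times; -E copies resource
        # forks and other Mac metadata on Apple's rsync (10.4 and later).
        sudo rsync -aE --delete "/Volumes/Macintosh HD/" /Volumes/CloneBackup/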
    (C) Do all programs have an efficient means to update bootable clone backups to keep them current with the source (not separate multiple "incremental" files)?
    (D) How long can updates be made to a clone before it's advisable to redo a fresh new backup from scratch?
    I have clones that have been in use for months or even years. The only time I tend to "remake" a clone is when I'm switching to a larger disk. Even then, I sometimes clone the clone to the new drive.
    (E) Is there anything else I should know in comparing which program to use?
    I'd still suggest reviewing the electronic book that I believe I mentioned in an earlier thread.

  • Best design structure for 4710s

    We are implementing 4710s in our core network.
    What could be the best design structure from a simplicity point of view?
    One interface VLAN for VIPs, connected front-end to the core, and back-end for servers (routed mode)?
    Should you have more than one interface VLAN for servers and/or clients?
    At which point would you need multiple contexts, besides an Admin context?
    Should you put a management interface on each context?

    We are implementing 4710s in our core network.
    --What could be the best design structure from a simplicity point of view?
    Design would vary based on specific requirements. To connect it to a specific layer on the network (core/agg) you would have to check the traffic flow to decide what suits you best.
    In terms of ACE design, if source IP visibility is not a requirement, One-arm mode with Source NAT provides the ability for non load balanced traffic to bypass the ACE. If it is a requirement you can use PBRs but that complicates things a little because you have to now manage the routers for changes on the ACE. With routed mode, the design is simple and servers point to the ACE as their default gateway. Need to weigh the pros and cons of each of the options based on the specific requirements.
    --One interface VLAN for VIPs, connected front-end to the core, and back-end for servers (routed mode)?
    Yes - for routed mode that would be the way to do it. In this case, in addition to load balancing, the ACE routes non-loadbalanced traffic to/from the servers.
    --Should you have more than one interface VLAN for servers and/or clients?
    - Depends on your subnets. If you have separate subnets for your web/app/db servers, then it is a good idea to have separate interface VLANs for them. Also, you may want to think about separate contexts if you want complete isolation between the layers.
    --At which point would you need multiple contexts, besides an Admin context?
    As far as possible, try to keep the Admin context only for administration. Make separate context(s) for load balancing and manage the resources allocated to them.
    --should you put a management interface on each context?
    Yes - that would give you the ability to have different users manage only their contexts.
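    To make routed mode concrete, here is a stripped-down sketch of what one load-balancing context might look like. The VLAN numbers, names, and addresses are invented, and this is nowhere near a complete config:

        rserver host WEB1
          ip address 10.10.2.11
          inservice
        serverfarm host WEB-FARM
          rserver WEB1
            inservice
        class-map match-all VIP-WEB
          2 match virtual-address 10.10.1.100 tcp eq www
        policy-map type loadbalance first-match LB-WEB
          class class-default
            serverfarm WEB-FARM
        policy-map multi-match CLIENT-SIDE
          class VIP-WEB
            loadbalance vip inservice
            loadbalance policy LB-WEB
        ! client-facing VLAN, toward the core
        interface vlan 100
          ip address 10.10.1.2 255.255.255.0
          service-policy input CLIENT-SIDE
          no shutdown
        ! server-facing VLAN; servers use 10.10.2.1 as their default gateway
        interface vlan 200
          ip address 10.10.2.1 255.255.255.0
          no shutdown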
    Hope that helps.

  • Proper security structure for Single Sign on Server

    We are all used to how we design the security structure for vCenter Server if you have had an existing VMware environment prior to 5.1. Who should have administrative privileges in vCenter Server, and what roles, permissions, and so on should be assigned to what users and groups - these questions have already been addressed in our current configuration.
    Now Single Sign-On introduces a significant new point of consideration for determining issues of access and authentication.
    I'd like to get some ideas on how this should be handled.  For example, should previous VMware administrators by definition become Single Sign-On administrators? Should the administrators of the Active Directory domain now start to get involved with the Single Sign-On server?
    For example, Single Sign-On now forces VMware administrators to configure things like:
    -Password Complexity Policy for SSO
    -Password Expiration for SSO
    -Lockout Policy
    We already probably have these things tightly controlled in AD and locked down with group policy, but you can't apply group policy directly to an SSO server and make it receive a GPO from Active Directory.  (You can apply a GPO to the Windows OS that SSO is running on, but it won't configure SSO itself, just the OS.)
    VMware admins are looking at a new set of questions relating to authentication and authorization.  Someone has to have written something, or will be writing something, to help us get the big picture of what is changing with SSO, if anything, and how we need to look at SSO from a security-design and best-practices standpoint.
    Should we just make existing vCenter Server admins SSO admins or do we need to take a step back and reconsider?

    Hello,
    Actually, yes. SSO is fairly robust in 5.5. It has a few limitations around email of expired passwords, but that is mainly because some people do not use them. I use SSO to provide the usernames and passwords for all my VMware vCenter and related product service accounts, i.e. an account for VDP, Horizon, vCOps, Log Insight, etc.  This is more about keeping systems segregated once more, with no real need for AD for services. But AD via SSO is used by users.
    Read the documentation, determine how SSO fits into your current password policy, and take a long hard look at your virtualization management environment. Is there one service account per service talking directly to vCenter? If not, SSO can help you implement that. The key is to match its functionality to your security policy.
    Best regards,
    Edward L. Haletky
    VMware Communities User Moderator, VMware vExpert 2009, 2010, 2011, 2012, 2013, 2014
    Author of the books 'VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers', Copyright 2011 Pearson Education, and 'VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment', Copyright 2009 Pearson Education.
    Virtualization and Cloud Security Analyst: The Virtualization Practice, LLC -- vSphere Upgrade Saga -- Virtualization Security Round Table Podcast

  • Lost Adobe Creative Suite 4 Design Premium for Windows disk #2

    Hi, I have Adobe Creative Suite 4 Design Premium for Windows and have lost Application disk #2, so I cannot reinstall this on my new laptop.  I have the licenses and the rest of the disks.  I have called Adobe and they no longer support CS4. They suggested I try to find someone who has the software and get a copy of disk 2. Can someone please help?  I don't want to upgrade to CS5.

    >suggested I try to find someone who has the software and get a copy of disk 2
    I am surprised by that... my understanding (albeit poor) of copyright is that you are allowed to make a copy of a disc for your own personal backup storage.
    That does not include making a copy to give to anyone else.
    Old or used software: http://www.emsps.com/oldtools/ or http://www.retrosoftware.com/
    http://forums.adobe.com/message/1636890 warns about buying from eBay

  • Error: "This backup is too large for the backup volume."

    Well TM is acting up. I get an error that reads:
    "This backup is too large for the backup volume."
    Both the internal boot disk and the external backup drive are 1TB. The internal one has two partitions: the OS X one, which is 900GB, and a 32GB NTFS one for Boot Camp.
    The external drive is a single OS X Extended partition of 932GB.
    Both the Time Machine disk and the Boot Camp disk are excluded from the backup, along with a "Crap" folder for temporary large files as well as the EyeTV temp folder.
    Time Machine says it needs 938GB to back up only the OS X disk, which has 806GB in use with the rest free. WTFFF? The TM pane says that "only" 782GB are going to be backed up. Where did the 938GB figure come from?
    This happened after moving a large folder (128GB in total) from the root of the OS X disk over to my Home folder.
    I have reformatted the Time Machine drive, have no backups at all of my data, and it refuses to back up!!
    Why would it need 938GB of space for the backup if the disk has "only" 806GB in use??? Is there any way to reset Time Machine completely???
    Some screenshots:
    http://www.xcapepr.com/images/tm2.png
    http://www.xcapepr.com/images/tm1.png
    http://www.xcapepr.com/images/tm4.png

    xcapepr wrote:
    Time Machine says it needs 938GB to back up only the OS X disk, which has 806GB in use with the rest free. WTFFF? The TM pane says that "only" 782GB are going to be backed up. Where did the 938GB figure come from?
    Why would it need 938GB of space for the backup if the disk has "only" 806GB in use??? Is there any way to reset Time Machine completely???
    TM makes an initial "estimate" of how much space it needs, "including padding", that is often quite high. Why that is, and just exactly what it means by "padding", are rather mysterious. But it does also need work space on any drive, including your TM drive.
    But beyond that, your TM disk really is too small for what you're backing up. The general "rule of thumb" is that it should be 2-3 times the size of what it's backing up, but it really depends on how you use your Mac. If you frequently update lots of large files, even 3 times may not be enough. If you're a light user, you might get by with 1.5 times. But that's about the lower limit.
    Note that although it does skip a few system caches, work files, etc., by default it backs up everything else, and does not do any compression.
    All this is because TM is designed to manage its backups and space for you. Once its initial full backup is done, it will by default back up any changes hourly. It only keeps those hourly backups for 24 hours, but converts the first of the day to a "daily" backup, which it keeps for a month. After a month, it converts one per week into a "weekly" backup that it will keep for as long as it has room.
    What you're up against is finding room for those 30 dailies and up to 24 hourlies.
    You might be able to get it to work, sort of, temporarily, by excluding something large, like your home folder, until that first full backup completes, then remove the exclusion for the next run. But pretty soon, it will begin to fail again, and you'll have to delete backups manually (from the TM interface, not via the Finder).
    Longer term, you need a bigger disk; or exclude some large items (back them up to a portable external or even DVD/RWs first); or a different strategy.
    You might want to investigate Carbon Copy Cloner, SuperDuper!, and other apps that can be used to make bootable "clones". Their advantage, beyond needing less room, is that when your HD fails, you can immediately boot and run from the clone, rather than waiting to restore from TM to your repaired or replaced HD.
    Their disadvantages are that you don't have the previous versions of changed or deleted files, and, because of the way they work, their "incremental" backups of changed items take much longer and far more CPU.
    Many of us use both a "clone" (I use CCC) and TM. On my small (roughly 30 GB) system, the difference is dramatic: I rarely notice TM's hourly backups -- they usually run under 30 seconds; CCC takes at least 15 minutes and most of my CPU.

  • "Backup is too large for the backup volume" error

    I've been backing up with TM for a while now, and finally it seems as though the hard drive is full, since I'm down to 4.2GB available of 114.4GB.
    Whenever TM tries to do a backup, it gives me the error "This backup is too large for the backup volume. The backup requires 10.8 GB but only 4.2GB are available. To select a larger volume, or make the backup smaller by excluding files, open System Preferences and choose Time Machine."
    I understand that I have those two options, but why can't TM just erase the oldest backup and use that free space to make the new backup? I know a 120GB drive is pretty small, but if I have to just keep accumulating backups indefinitely, I'm afraid I'll end up with 10 years of backups and an 890-zettabyte drive taking up my garage. I'm hoping there's a more practical solution.

    John,
    Please review the following article as it might explain what you are encountering.
    *“This Backup is Too Large for the Backup Volume”*
    First, much depends on the size of your Mac’s internal hard disk, the quantity of data it contains, and the size of the hard disk designated for Time Machine backups. It is recommended that any hard disk designated for Time Machine backups be at least twice as large as the hard disk it is backing up. You see, the more space it has to grow, the greater the history it can preserve.
    *Disk Management*
    Time Machine is designed to use the space it is given as economically as possible. When backups reach the limit of expansion, Time Machine will begin to delete old backups to make way for newer data. The less space you provide for backups, the sooner older data will be discarded. http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html
    However, Time Machine will only delete what it considers “expired”. Within the Console Logs this process is referred to as “thinning”. It appears that many of these “expired” backups are deleted when hourly backups are consolidated into daily backups and daily backups are consolidated into weekly backups. This consolidation takes place once hourly backups reach 24 hours old and daily backups reach about 30 days old. Weekly backups will only be deleted, or ‘thinned’, once the backup drive nears full capacity.
    One thing seems for sure, though: if a new incremental backup happens to be larger than what Time Machine currently considers “expired”, then you will get the message “This backup is too large for the backup volume.” In other words, Time Machine believes it would have to sacrifice too much to accommodate the latest incremental backup. This is probably why Time Machine always overestimates incremental backups by 2 to 10 times the actual size of the data currently being backed up. Within the Console logs this is referred to as “padding”. This is so that backup files never actually reach the physical limits of the backup disk itself.
    *Recovering Backup Space*
    If you have discovered that large unwanted files have been backed up, you can use the Time Machine “time travel” interface to recover some of that space. Do NOT, however, delete files from a Time Machine backup disk by manually mounting the disk and dragging files to the trash. You can damage or destroy your original backups by this means.
    Additionally, deleting files you no longer wish to keep on your Mac does not immediately remove such files from Time Machine backups. Once data has been removed from your Mac's hard disk, it will remain in backups for some time until Time Machine determines that it has "expired". That's one of its benefits - it retains data you may have unintentionally deleted. But eventually that data is expunged. If, however, you need to remove backed-up files immediately, do this:
    Launch Time Machine from the Dock icon.
    Initially, you are presented with a window labeled “Today (Now)”. This window represents the state of your Mac as it exists now. DO NOT delete or make changes to files while you see “Today (Now)” at the bottom of the screen. Otherwise, you will be deleting files that exist "today" - not yesterday or last week.
    Click on the window just behind “Today (Now)”. This represents the last successful backup and should display the date and time of this backup at the bottom of the screen.
    Now, navigate to where the unwanted file resides. If it has been some time since you deleted the file from your Mac, you may need to go farther back in time to see the unwanted file. In that case, use the time scale on the right to choose a date prior to when you actually deleted the file from your Mac.
    Highlight the file and click the Actions menu (Gear icon) from the toolbar.
    Select “Delete all backups of <this file>”.
    *Full Backup After Restore*
    If you are running out of disk space sooner than expected, it may be that Time Machine is ignoring previous backups and is trying to perform another full backup of your system. This will happen if you have reinstalled the system software (Mac OS), replaced your computer with a new one, or had significant repair work done on your existing Mac. Time Machine will perform a new full backup. This is normal. http://support.apple.com/kb/TS1338
    You have several options if Time Machine is unable to perform the new full backup:
    A. Delete the old backups, and let Time Machine begin afresh.
    B. Attach another external hard disk and begin backups there, while keeping this current hard disk. After you are satisfied with the new backup set, you can later reformat the old hard disk and use it for other storage.
    C. Ctrl-Click the Time Machine Dock icon and select "Browse Other Time Machine disks...". Then select the old backup set. Navigate to files/folders you don't really need backups of and go up to the Action menu ("Gear" icon) and select "Delete all backups of this file." If you delete enough useless stuff, you may be able to free up enough space for the new backup to take place. However, this method is not assured as it may not free up enough "contiguous space" for the new backup to take place.
    *Outgrown Your Backup Disk?*
    On the other hand, your computers drive contents may very well have outgrown the capacity of the Time Machine backup disk. It may be time to purchase a larger capacity hard drive for Time Machine backups. Alternatively, you can begin using the Time Machine Preferences exclusion list to prevent Time Machine from backing up unneeded files/folders.
    Consider as well: do you really need ALL that data on your primary hard disk? It sounds like you might need to archive to a different hard disk anything that is not of immediate importance. You see, Time Machine is not designed for archiving purposes, just for backup of your local drive(s). In the event of disaster, it can get your system back to its current state without having to reinstall everything. But if you need LONG TERM storage, then you need another drive that is removed from your normal everyday working environment.
    This KB article discusses this scenario with some suggestions, including archiving the old backups and starting fresh: http://docs.info.apple.com/article.html?path=Mac/10.5/en/15137.html
    Let us know if this clarifies things.
    Cheers!

  • LDAP design question for multiple sites

    LDAP design question for multiple sites
    I'm planning to implement the Sun Java System Directory Server 5.2 2005Q1 for replacing the NIS.
    Currently we have 3 sites with different NIS domains.
    Since NFS over the WAN connection is very unreliable, I would like to implement as follows:
    1. 3 LDAP servers + replica for each sites.
    2. Single username and password for every end user cross those 3 sites.
    3. Different auto_master, auto_home and auto_local maps for the three sites, so when a user logs in at a different site, the password is the same but the home directory is different (local).
    So the questions are
    1. Should I need to have 3 domains for LDAP?
    2. If yes to question 1, how can I keep the username and password in sync across the three domains? If no to question 1, what is the DIT (Directory Information Tree) or directory structure I should use?
    3. How to make auto map work on LDAP as well as mount local home directory?
    I would really appreciate it if some LDAP experts could enlighten me on this project.

    Thanks for your information.
    My current environment has 3 sites with 3 different NIS domain names: Site A: A.com, Site B: B.A.com, Site C: C.A.com (A.com is our company domain name).
    So every time I add a new user account, I need to create it on the three NIS domains separately. Also, the password gets out of sync if a user changes it at one site.
    I would like to migrate NIS to LDAP.
    I want to have a single username and password for each user across the 3 sites. However, the home directory is on the local NFS filer.
    Say for userA, his home directory is /user/userA in passwd file/map. On location X, his home directory will mount FilerX:/vol/user/userA,
    On location Y, userA's home directory will mount FilerY:/vol/user/userA.
    So the mount is determined by the auto_user map in NIS.
    In other words, there will be 3 different auto_user maps in the 3 different LDAP servers.
    So userA logging in to hostX in location X will mount the home directory on local FilerX, and logging in to hostY in location Y will mount the home directory on local FilerY.
    But the username and password will be the same at all three sites.
    That's my goal.
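    For what it's worth, with the standard RFC 2307 schema (which Sun Directory Server supports for NIS migration), a per-site auto_user map could look roughly like the LDIF below; the suffix and DNs are made up for illustration. The user entry itself (with the password) would live once under a shared suffix, so only this map subtree differs per site:

        # Site X's copy of the auto_user map (load with ldapadd)
        dn: nisMapName=auto_user,ou=siteX,dc=A,dc=com
        objectClass: top
        objectClass: nisMap
        nisMapName: auto_user

        dn: cn=userA,nisMapName=auto_user,ou=siteX,dc=A,dc=com
        objectClass: top
        objectClass: nisObject
        cn: userA
        nisMapName: auto_user
        nisMapEntry: FilerX:/vol/user/userA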
    Some LDAP experts suggested MMR (multi-master replication) to me, but I am still not quite sure how to set up MMR.
    It would be appreciated if some LDAP guru could give me some guidelines as a starting point.
    Best wishes

  • How to create a simple File structure for a large project?

    Hi to all,
    I've owned and operated my own website design/development business (a 1-woman office, plus many sub-contractors) for a period of 8 years. I started hand-coding HTML sites in 1997, before the creation of DW (though I think the first version was for Mac in '97). Over the recent years I've updated my skills to include CSS and enough Java/PHP to customize and/or troubleshoot current projects (learn as I go).
    The majority of my clients have been other 1-10 person entrepreneur companies. I've recently won a bid to redesign a government site which consist of 30 departments, including their main site.
    The purpose of this thread is to get some ideas on creating a file management structure. Creating a file management setup for smaller companies was a piece of cake, using a simple file mgmt structure within DW. Their current file structure is all over the place. I've read about a very good, simple file structure in a DW CS4 manual and wanted to get feedback on different methods that have worked, and have not worked, for you or your client:
    Here's my thinking:
    1. within the root dir, place home.htm and perhaps a few .htm files related only to home
        2. create the following folders off the root: docs, imgs/global, CSS, FLA, Departments
                - sub-folders within docs for each dept
                - site-wide CSS files placed into CSS
                - site-wide FLAs into FLA
                - sub-folders within imgs for each dept, including a global folder for site-wide images and menu imgs (if needed)
    - OR -
    1. create the same file structure for each dept folder, such as imgs/CSS/FLA/docs
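    (To make option 1 concrete, the skeleton as Terminal commands; the site root path and department names are placeholders:)

        cd ~/Sites/gov-redesign
        mkdir -p docs CSS FLA imgs/global Departments
        # one docs and imgs sub-folder per department, e.g.:
        for dept in health parks finance; do
            mkdir -p "docs/$dept" "imgs/$dept"
        done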
    Open for suggestions....
    Ciao

    It is a problem I have thought over at length, and I still feel what I use could be better. You are doing it the right way around, researching before you start, as moving files once things are underway can cause real problems. One issue is the use of similar assets across site(s), and version control if you have multiple versions of the same asset.
    I can't say I have built a site of that size, but I would recommend putting together a flow chart to help visualise the structure and find better ways of organising (works for me). Good luck, and post back with your solution.

  • Does anyone know the pricing structure for Digital Marketing Suite?

    Does anyone know the pricing structure for Digital Marketing Suite? I asked for info but no one ever got back to me. I don't want to waste anyone's time if it's too expensive, so I'd like to know up front what the fees/package rates are.
    Thanks.
    Peter Marino
    Owner of an
    SEO Company in NYC


  • An EFFECTIVE development directory structure for J2EE platform?

    Hi, here we are talking about the deployment environment more than the development
    environment. Have you ever thought about designing an EFFECTIVE development
    directory structure for the J2EE platform (e.g. WebLogic)? You are not using the
    deployment directories for coding, are you? :)
    I used to construct a directory structure for development and want to improve it.
    d:/wholesystem/*.prj // Project files
    ...../module1/src/com/.... // Module source files
    ...../module1/doc/... // Module doc files
    ...../module1/classes/... // Module class files
    ...../module2/...
    ...../web/*.jsp // web page files
    ...../web/images/... // web page images
    ...../web/WEB-INF/... //...
    Do you have any good ideas? Thanks!
    * Name: Gary Wang
    * Tele: 010-65546668-8119
    * Mail: [email protected]

    Create a web-inf folder at the same level as src, and a jsp folder inside src.
    I mean:
    /build.xml
    /src/
    /src/java/<package>/...../*.java
    /src/demo/<package>/...../*.java
    /src/test/<package>/....../*.java
    /src/jsp
    /web-inf
    So, would you put in /src/jsp only the *.jsp?
    And what in /WEB-INF? What would you put there? Would you do something like:
    /WEB-INF/web.xml
    /WEB-INF/src/<package>/..../<my_servlets_and_j2ee_stuff>.java
    /WEB-INF/classes/<package>/..../<my_servlets_and_j2ee_stuff>.java
    In this manner sources and classes are in the same tree; it does not seem very clean to me, especially if you consider that I probably must have a "test" directory to unit test some J2EE stuff (as for the J2SE stuff in "src"): how would you do that?
    Is this directory structure anyway what you meant or not?
    alessio

  • Help me out with Directory structure for JSF+SPRING+HIBERNATE Project

    Hi friends,
    My name is Walter, working for a startup software company. We are working on a Hospital Management System (HMS) project, with an MVC architecture, using Hibernate, Spring and JSF. We need to design the directory structure for our project.
    Please help me, friends, by suggesting an MVC directory structure, and also by directing me with the navigation flow.
    Thanks in advance
    Regards
    Walter

    Thank you so much, friends, for your kind replies. Thanks to Illu and anguquga, and special thanks to BalusC for the advice about hiring EE Artitech.
    Anyway, I have discussed designing the directory structure with my teammates.
    anguquga, your directory structure is close to what I have designed, referring to a sample application on the web.
    Hospital Management System MVC architecture directory structure
    This is the way the structure goes:
    model --> for Hibernate as well as Spring
    view --> for JSF
    src/
        java/
            model/
                businessobject/
                dao/
                    hibernate/
                exception/
                service/
                    impl/
                util/
            view/
                bean/
                builder/
                bundle/
                servicelocator/
                util/
                validator/
    Web (or WebRoot)/
        JSP files, etc.
        META-INF/
        Images/
        Scripts/  (CSS, JavaScript files, etc.)
        WEB-INF/  (web.xml, faces-config.xml, etc.)
            classes/
                HMS/
                    model/
                    view/
            lib/
    I am sure you may notice a few errors; if you find any, please reply back. Thanks in advance, and thanks for your valuable replies.
    Walter (Kaleem)

  • Mail.app:  non-parallel structure for incoming versus outgoing servers

    Manual setup of POP accounts has a non-parallel structure for incoming versus outgoing servers.
    To review:  for each account you must enter setup data for a single POP  (incoming) server, but you are offered multiple choices for SMTP (outgoing)  servers, and Mail.app supports sharing a single SMTP server setup among multiple accounts.
    Why?
    Why not just offer distinct setups for each account in each direction?   There must be a good reason for all this extra structure…. yes?
    More specifically: what are the implications for multiple accounts, e.g.
          [email protected]
          [email protected]
    using the same outgoing server? It seems to me that Mail.app encourages you to use a single SMTP server setup for both, am I correct? But this seems to undermine proper security. I think each account should use its own credentials for both incoming and outgoing, yes?
    In Mail.app V8.2, shipped with OS X 10.10.2, Mail.app offers an opportunity to set up a second distinct SMTP server (in the above example, for [email protected]), but the text boxes for username and password refuse to accept text. Is this just my experience, a local problem? Or is it observed by others? If so… a new way of enforcing sharing of SMTP setups, or a bug?
    I'm very curious about the long-term Mail.app design philosophy in this regard, but this is not just curiosity: I'm having trouble with multiple email accounts after upgrading to 10.10.2, in each case when I try to set up multiple accounts on a single mail server.
    I've done quite a bit of searching for explanations of mail.app's design, without success.  It is very possible I've missed some relevant explanations.  If so, please provide links and accept my apologies.
    TIA


  • Bad performance when deleting report column in webi(with "Design – Structure only")

    Hi all,
    One of our customers recently upgraded from BO XI to BO 4.1. In the new BO 4.1, they encountered a bad performance issue when deleting a column in Webi (using "Design – Structure only" mode).
    In "Design – Structure only" mode, it took Webi about 10 seconds to complete after the customer right-clicked a report column and clicked "delete". The customer said that they only need to wait for less than 1 second when they do the same in the old BO XI version.
    The new BO version used is 4.1 SP02, installed on Windows Server 2008 R2 (a server with a 32-core CPU and 32GB of memory).
    This bad performance happens in both the Webi web client and Rich Client. (In Webi Rich Client the performance is a little better; the 'delete column' action takes about 8 seconds to complete.)
    Does anyone know how to tune this performance in Webi? Thank you.
    Besides, it seems that each time we make a change to the Webi report structure in IE or Rich Client, Webi needs to interact with the server side to upload the changes. Is there any option to change this behavior? Say, do not upload the change to the server when deleting a report column, and only trigger the upload after a set of actions (e.g. when clicking the "Save" button).
    Thank you.
    Regards,
    Eton.

    Hi all,
    Could anyone help me on this?  Thanks!
    What the customer is concerned about now is that when they did the same column-editing action in BO XI R2 for the same report, they did not need to wait, whereas they need to wait at least 7-8 seconds in the BO 4.1 SP02 environment for this action to complete (data already purged, in structure-only mode).
    One more piece of information about the Webi report being edited: there are many sheets in the report (about 6-10 sheets in one report). The customer doesn't want to separate these sheets into different reports, as it would increase the effort for their end users to locate similar report sheets.
    Regards,
    Eton.

  • Choice of design pattern for data acquisition system

    Hello all
    I am having trouble selecting a suitable design pattern / architecture for a data acquisition system.
    Here is the details of the desired system:
    There is data acquisition hardware, and I need to use it while observing parameters on a user interface.
    The data acquisition period and the channel list to scan should be chosen on the user interface. Besides, there are many user interface interactions, e.g. if the user selects a channel to add to the scan list, then I need to enable and make visible some other parts of the user interface.
    When the user completes the channel selection, he will press the button to start data acquisition. Then I also need to show the scanned values on a graph in real time and log them to a txt file.
    I know that I cannot use the plain producer/consumer pattern here, because the data acquisition loop should wait for parameters before scanning channels, and it runs at a period given by the user. So the user interface loop runs at a higher rate than the consumer loop (the data acquisition loop), which means the queue will grow bigger and bigger. If I use a notifier instead, it will lose some data coming from the user interface.
    Is there any idea about that? Is there any suitable design pattern for this case?
    Thanks in advance
    best regards 
    Veli BAYAR
    Embedded Systems Software and Hardware Engineer 
    "You live in a graphical world. Why not program in one?"

    johnsold wrote:
    Veli,
    I recommend the Producer/Consumer model with some modifications.
    You might need three loops.  I cannot tell for sure from your brief description.
    The User Interface loop responds to the user inputs for configuration and start/stop of acquisition.  The parameters and commands are passed to the Data Acquisition loop via a queue. In this loop is a state machine which has Idle, Configuration, Acquisition, and Shutdown states (and perhaps others). The data is sent to the Processing loop via a different queue. The Processing loop performs any data processing, displays the data to the user, and saves it to file. A notifier can be used to send the Stop or shutdown command from the User Interface loop to the other loops.  If the amount of processing is minimal and the file write times are not too long, the Processing loop functions might be able to occur in the Timeout case of the UI loop Event structure.  This simplifies things somewhat but is not as flexible when changes need to be made.
    I am not sure that a Design Pattern for this exact setup exists but it is basically a combination of the Producer/Consumer (Events) and Producer/Consumer (Data) Design Patterns.
    Lynn
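    A rough translation of that three-loop idea into text code (Python stands in here for the LabVIEW block diagram, which can't be shown in plain text; the queue and command names are invented):

        import queue
        import threading
        import time

        cmd_q = queue.Queue()    # UI loop -> acquisition loop (config/start/stop)
        data_q = queue.Queue()   # acquisition loop -> processing loop (samples)

        def acquisition_loop():
            # state machine: Idle / Acquire (configuration folded into messages)
            state, period, channels = "Idle", 1.0, []
            while True:
                # while acquiring, the queue timeout doubles as the scan period
                try:
                    msg = cmd_q.get(timeout=period if state == "Acquire" else 0.1)
                    if msg[0] == "configure":
                        period, channels = msg[1], msg[2]
                    elif msg[0] == "start":
                        state = "Acquire"
                    elif msg[0] == "stop":
                        state = "Idle"
                    elif msg[0] == "shutdown":
                        data_q.put(None)   # pass the sentinel downstream
                        return
                except queue.Empty:
                    pass                   # no command arrived; just scan
                if state == "Acquire":
                    data_q.put({ch: 0.0 for ch in channels})  # stand-in for a real read

        def processing_loop():
            while True:
                sample = data_q.get()
                if sample is None:         # shutdown sentinel
                    return
                print(sample)              # stand-in for graphing and logging

        threading.Thread(target=acquisition_loop).start()
        threading.Thread(target=processing_loop).start()
        cmd_q.put(("configure", 0.5, ["ch0", "ch1"]))   # UI events would do this
        cmd_q.put(("start",))
        time.sleep(2)
        cmd_q.put(("shutdown",))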
    Check out this thread: http://forums.ni.com/t5/LabVIEW/Multiple-poll-case-structures-to-event-help/td-p/2551309
    There are discussions there about a 3-loop architecture that may help you.
    Jeff
    Jeffrey Zola
