Best practice: Web Dynpro in a large system landscape

Dear Sirs,
I have a few questions about using Web Dynpro (WD) in a large system landscape. After doing some research I understand there are a few alternatives, and I would like to get your opinions on the issue and links to any relevant documentation. I know most of my questions do not have a single answer, but I hope we can get a discussion going that highlights the pros and cons.
My landscape consists of a full set of ECC and portal servers (DEV, QA, PROD), where using WD to call BAPIs in the backend and present the results in the portal is a likely scenario.
Deploy the WD components on portal servers or on separate servers?
Would you deploy the WD components on the portal WAS, or would you advise having one (or a number of) servers dedicated to running WD?
The way I see it, when you have a large number of developers, giving out the SDM password for the portal server (DEV) so that they can test WD applications is not advisable (or, perhaps more accurately, not wanted by the Basis team). So perhaps a separate WAS for WD development is advisable, with Basis then deploying the applications to the portal QA and PROD servers. I do not think each developer having their own local J2EE engine for testing is likely.
How about performance? Will any solution be preferable over another? Will it be faster or slower to run WD on a separate WAS?
Transporting the WD components
How should one transport the components and keep them pointing to the right JCo connections (as you have different JCo connections for DEV, QA, and PROD)? I have seen threads where you opt for dynamic setting of the JCo connections through parameters. Is this the approach to prefer? (I have put a rough sketch of what I mean at the end of this post.)
Any documentation on this issue would be highly appreciated. (Already read: System Landscape Directory, SAP System Landscape Directory on SAP Web AS Java 6.40)
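To make the "dynamic parameters" idea concrete, here is a minimal sketch using the standalone JCo 3 API (an assumption on my part - the destination name ECC_BACKEND and all connection values are invented, and the Web Dynpro runtime manages its own destinations):

    import java.util.Properties;
    import com.sap.conn.jco.ext.DestinationDataEventListener;
    import com.sap.conn.jco.ext.DestinationDataProvider;
    import com.sap.conn.jco.ext.Environment;

    // Sketch of a provider that resolves a destination name to
    // environment-specific parameters (DEV/QA/PROD) at runtime.
    public class LandscapeDestinations implements DestinationDataProvider {

        public Properties getDestinationProperties(String name) {
            // In a real landscape these values would be read from per-system
            // configuration instead of being hard-coded.
            Properties p = new Properties();
            p.setProperty(DestinationDataProvider.JCO_ASHOST, "ecc-dev.example.com");
            p.setProperty(DestinationDataProvider.JCO_SYSNR, "00");
            p.setProperty(DestinationDataProvider.JCO_CLIENT, "100");
            p.setProperty(DestinationDataProvider.JCO_USER, "RFC_USER");
            p.setProperty(DestinationDataProvider.JCO_PASSWD, "secret");
            p.setProperty(DestinationDataProvider.JCO_LANG, "EN");
            return p;
        }

        public void setDestinationDataEventListener(DestinationDataEventListener l) {
            // no change events in this static sketch
        }

        public boolean supportsEvents() {
            return false;
        }

        public static void main(String[] args) {
            // register once per VM, before the first destination lookup
            Environment.registerDestinationDataProvider(new LandscapeDestinations());
        }
    }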

Look into using NWDI as your source code control (DTR) and transport/migration from dev through to production. This will also handle the deployment to your dev system (check-in/activate).
For unit testing and debugging you should be running a local version (NWDS with a local J2EE engine). That way, once the code is ready to be shared with the team, you check it in (making it visible to other team members) and activate it (deploying it to the development server).
We are currently using a separate server for WD applications rather than running them on the portal server. However, this does not allow the WD app to run in the new WD iView. So it depends on what the WD app needs to do and have access to. Of course there is always the Federated Portal Network as an option, but that is a whole other topic.
For JCo connections, WD uses a connection name, and this connection can be set up to point to different locations depending on which server it is on. So on the development server the JCo connection can point to the dev back end, and in prod it can point to the prod back end. The JCo connections are not migrated, but set up in each system.
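If it helps, this is roughly what the lookup by name looks like with the standalone JCo 3 API - a sketch only, since the Web Dynpro model layer does this for you, and the destination name and BAPI here are just examples:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;
    import com.sap.conn.jco.JCoTable;

    public class CompanyCodes {
        public static void main(String[] args) throws JCoException {
            // The application only knows the name; each system (DEV/QA/PROD)
            // resolves "ECC_BACKEND" to its own back end.
            JCoDestination dest = JCoDestinationManager.getDestination("ECC_BACKEND");

            // Fetch the BAPI's metadata from the repository and execute it.
            JCoFunction fn = dest.getRepository().getFunction("BAPI_COMPANYCODE_GETLIST");
            fn.execute(dest);

            // Print the table returned by the BAPI.
            JCoTable t = fn.getTableParameterList().getTable("COMPANYCODE_LIST");
            for (int i = 0; i < t.getNumRows(); i++) {
                t.setRow(i);
                System.out.println(t.getString("COMP_CODE") + " " + t.getString("COMP_NAME"));
            }
        }
    }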
I hope this helps.  There is a lot of documentation available for NWDI to get you started.  See:  http://help.sap.com/saphelp_erp2005/helpdata/en/01/9c4940d1ba6913e10000000a1550b0/frameset.htm
-Cindy

Similar Messages

  • How to check version of Best Practice Baseline in existing ECC system?

    Hi Expert,
    How can I check the version of the Best Practice Baseline in an existing ECC system, e.g. v1.603 or v1.604?
    Any help will be appreciated.
    Sayan

    Dear,
    Please go to https://websmp201.sap-ag.de/bestpractices and click on Baseline packages; on the right-hand side you will see which version of the SAP Best Practices Baseline package is applicable to which release.
    If you are on EHP4 then you can use v1.604.
    For "How to Get SAP Best Practices Data Files for Installation" (PDF, 278 KB), please refer to this link:
    https://websmp201.sap-ag.de/~sapidb/011000358700000421882008E.pdf
    Hope it will help you.
    Regards,
    R.Brahmankar

  • Storage Server 2012 best practices? Newbie to larger storage systems.

    I have many years of experience managing and planning smaller Windows server environments; however, my non-profit has recently purchased two StoreEasy 1630 servers, and we would like to set them up using best practices for networking and Windows storage technologies. The main goal is to build an infrastructure so we can provide SMB/CIFS services across our campus network to our 500+ end-user workstations, taking into account redundancy, backup, and room for growth. The following describes our environment and vision. Any thoughts / guidance / white papers / directions would be appreciated.
    Networking
    The server closets all have Cisco 1000T switching equipment. What type of networking is desired/required? Do we need switch-hardware-based LACP, or will the Windows 2012 NIC teaming options be sufficient across the four 1000T ports on the StoreEasy?
    NAS Enclosures
    There are two StoreEasy 1630 Windows Storage servers, one in Brooklyn and the other in Manhattan.
    Hard Disk Configuration
    Each of the StoreEasy servers has 14 3TB drives, for a total raw storage capacity of 42TB. By default the StoreEasy servers were configured with two RAID 6 arrays and one hot-standby disk in the first bay. One RAID 6 array is made up of disks 2-8 and presents two logical drives to the storage server: a 99.99GB OS partition and a 13872.32GB NTFS D: drive. The second RAID 6 array resides on disks 9-14 and is partitioned as one 11177.83GB NTFS drive.
    Storage Pooling
    In our deployment we would like to build in room for growth by implementing storage pooling that can later be increased in size when we add additional disk enclosures to the rack. Do we want to create VHDX files on top of the logical NTFS drives? When physical disk enclosures, with disks, are added to the rack and present a logical drive to the OS, would we just create additional VHDX files on the expansion enclosures and add them to the storage pool? If we do use VHDX virtual disks, what size should we make them? Is there a maximum capacity? 64TB? Please let us know what the best approach to storage pooling would be for our environment.
    Windows Sharing
    We were thinking that we would create a single share granting all users within the AD FullOrganization user group read/write permission. Then within this share we were thinking of using NTFS permissions to create subfolders with different permissions for each departmental group and subgroup. Is this the correct approach, or do you suggest a different one?
    DFS
    In order to provide high availability and redundancy we would like to use DFS replication on shared folders to mirror storage01, located in our Brooklyn server closet, and storage02, located in our Manhattan server closet. Presently there is a 10TB DFS replication limit in Windows 2012. Is this replication limit per share, or for the total of all files under DFS? We have been informed that HP will provide an upgrade to Storage Server 2012 R2 when it becomes available. In the meanwhile, how should we design our storage and replication strategy around these limits?
    Backup Strategy
    I read that Windows Server Backup can only back up disks up to 2TB in size. We were thinking that we would like our two current StoreEasy servers to back up to each other (to an unreplicated portion of the disk space) nightly until we can purchase a third system for backup. What is the best approach for backup? Should we use Windows Server Backup to capture the data volumes?
    Should we use third-party backup software?

    Hi,
    Sorry for the delay in replying.
    I'll try to reply to each of your questions. However, for the first one, you may want to post to the Network forum for further information, or contact your device provider (HP) to see if there is any recommendation.
    For Storage Pooling:
    From your description, you would like to create VHDX files on the RAID 6 disks to allow for growth. That is fine and, as you said, the limit is 64TB. See:
    Hyper-V Virtual Hard Disk Format Overview
    http://technet.microsoft.com/en-us/library/hh831446.aspx
    Another possible solution is using Storage Spaces, a new feature in Windows Server 2012. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    You can add hard disks to a storage pool and create virtual disks from the pool. You can add disks to the pool later and create new virtual disks if needed.
    For Windows Sharing
    Generally we will have different shared folders later. Creating all shares in a single root folder sounds good, but in practice we may not be able to accomplish it. So it depends on the actual environment.
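    If you later script the per-department NTFS permissions described above, the pattern is one inherited ALLOW entry per group on each subfolder. Below is a rough illustration in Java via java.nio (the group and path names are invented; icacls or PowerShell would be the usual admin tools):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.attribute.*;
        import java.util.List;

        public class GrantDepartmentAccess {
            public static void main(String[] args) throws IOException {
                // Invented example: a departmental subfolder under the share root.
                Path folder = Paths.get("D:\\Share\\Finance");

                AclFileAttributeView view =
                        Files.getFileAttributeView(folder, AclFileAttributeView.class);

                // Look up the AD departmental group (name is illustrative).
                GroupPrincipal group = folder.getFileSystem()
                        .getUserPrincipalLookupService()
                        .lookupPrincipalByGroupName("CONTOSO\\Finance");

                // ALLOW read/write, inherited by files and subfolders below.
                AclEntry entry = AclEntry.newBuilder()
                        .setType(AclEntryType.ALLOW)
                        .setPrincipal(group)
                        .setPermissions(AclEntryPermission.READ_DATA,
                                        AclEntryPermission.WRITE_DATA,
                                        AclEntryPermission.APPEND_DATA,
                                        AclEntryPermission.READ_ATTRIBUTES,
                                        AclEntryPermission.READ_ACL)
                        .setFlags(AclEntryFlag.FILE_INHERIT,
                                  AclEntryFlag.DIRECTORY_INHERIT)
                        .build();

                List<AclEntry> acl = view.getAcl();
                acl.add(0, entry); // explicit entries precede inherited ones
                view.setAcl(acl);
            }
        }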
    For DFS replication limitation
    I assume the 10TB limitation comes from this link:
    http://blogs.technet.com/b/csstwplatform/archive/2009/10/20/what-is-dfs-maximum-size-limit.aspx
    I contacted the DFSR team about the limitation. DFS-R can actually replicate more data; there is no exact limit. As you can see, the article was created in 2009.
    For Backup
    As you said, there is a backup limitation (2TB for a single backup). So if it cannot meet your requirements, you will need to find a third-party solution.
    Backup limitation
    http://technet.microsoft.com/en-us/library/cc772523.aspx

  • SAP RAR - Best Practice ECC,CRM and BW systems

    Hi All
    I have a requirement to configure RAR for the ECC, CRM, and BW systems. Each system has only one client. What is the best practice regarding using the rules against each system? I am assuming the rules will be the same irrespective of the system, but when I look at the names of the initial files, they are system specific. Can anybody elaborate on this? Thanks.
    Regards
    Prasad

    Prasad,
    To build on Chinmaya's explanation, make sure you use a logical system for CRM, BI, and ECC for the Basis portion of the rule set (and only the Basis portion). This will keep you from duplicating your rules to meet your Basis requirements. The other rules should be attributed to the individual systems (or to additional logical systems if including multiple landscapes, e.g. Dev, QA, and Prod ECC merged into one ECC logical system).

  • WCEM Best Practice deployment in a multi CRM Landscape

    Hi SCN
    I'm looking for advice on best practice deployment of WCEM, specifically in a multi-CRM landscape scenario.
    Do best practices exist?
    JR

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses at each reference to a class - there are hundreds of them. I'm having difficulty understanding what the best practice is for getting consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign, but I can see it's there when I open the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign and then perfecting them in Acrobat (removing extra links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety every month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited - the hyperlink, the content, or something else?

  • Best Practice: Migrating transports to Prod (system down etc.)

    Hi all
    This is more of a process and governance question as opposed to a ChaRM question.
    We use ChaRM to migrate transports to Production systems. For example, we have a Minor BAU Release (every 2 weeks), a Minor Initiative Release (every 4 weeks) and a Major Release (every 3 months).
    We realise that some of the major releases may require SAP to be taken offline. But what is SAP best practice for ANY release into production? For example, for our Minor BAU Release we never shut down any production systems, never stop batch jobs, never lock users, etc.
    What does SAP recommend when migrating transports to Prod?
    Thanks
    Shaun

    Have you checked out the "Two Value Releases Per Year" white paper for SAP recommendations? Section 6 is applicable.
    Lifetime Support by SAP » Two Value Releases per Year
    The "real-world" answer is going to depend on how risk-averse versus downtime-averse your company is. I think most companies would choose to keep the systems running except when SAP forces an outage or there is a real risk of data corruption (some data conversions and data loads, for example).
    Specific to your Minor BAU Releases, it may be wise to set up a process whereby anything that requires a production shutdown, stopped batch jobs, locked users, etc. goes into a different release type. But if you don't have that kind of control, your process will need to allow for these things to happen within those releases.
    Also, with regard to stopping batch jobs in the real world, you always need to balance the desire to take full advantage of the available systems against the pain of managing the variations. If your batch schedule is full, how are you going to make sure the critical jobs complete on time when you do need to take the system down? If it isn't full, why do you need that time? Can you make sure only non-critical batch jobs run during those windows? Do you have a good method of implementing an alternate batch schedule when need be?

  • Best practices for data entry online system

    Hi all
    I am (with a team of 4 members) going to build an online data entry system which may have approximately 30 screens. I am going to use Spring BlazeDS remoting to connect to the middleware.
    Could anyone please suggest some good practices to follow on the Flex side for such a "DATA ENTRY" application?
    The points below are a few common best practices we need to follow while coding, but I am not sure how to achieve them on the Flex side.
    User experience (Probably i can get little info regarding this from my client)
    Code maintainability
    Code extensibility
    Memory and CPU optimization
    Ability to work with team members (multiple checkouts)
    Best framework
    So I am looking for valuable suggestions from great minds.

    There are two options, neither of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create an SQL login that is a member of sysadmin - or that has the right to impersonate an account that is a member of sysadmin. And for that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden - the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best Practices to add second BW system/instance to existing one

    Hi Experts,
    I need to give my client different strategies, best practices, precautions, steps, and how-to documents... basically, to prepare a methodology considering the effort required (hardware + time) to develop a second instance of the BW system.
    We want to create one more BW server, and I need to create a blueprint for that from scratch to end.
    Please help me - I am not aware of anything related to this...
    Regards and thanks in advance
    Gaurav

    Hi Arun,
    We have migrated to BI 7.0, but since SAP won't be supporting the old BW 3.0, we are thinking of creating a new box where we would rewrite things more efficiently... using transformations/end routines... clean things up... and get everything new.
    We also need to think about different geographies, as we have to accommodate different hubs which would eventually go live later.
    Or else we would have to change the existing system only...
    Also, what is the difference between adding different clients/a sandbox system and a different application server?
    I think with a different client you cannot have separate development, but with a separate sandbox you can...
    About the application server - I am not sure about that...
    Can you please explain a little more - what would be the advantages, and what effort (hardware/time) would be required?
    Thanks in advance
    Gaurav
    Message was edited by:
            Gaurav

  • Best Practices for Backing Up Large (+10TB) Servers?

    As we migrate to OS X Lion Server, I need to revisit backup scenarios. I'm interested in researching best practices, which may include Time Machine for incrementals but also need to include some sort of off-site possibility (such as tape that is then stored somewhere else).
    (In Snow Leopard, we're having real trouble with BRU (unintelligible) and Roxio's Retrospect for our DLT tape backups.)
    I would think this would be a great discussion for many to have.

    Mark as answered!

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different-colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150ppi at actual size, so they're honking big, sometimes up to 2GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDesign, but the vector graphics need to be edited for each one, and I so don't like to do that in InDesign unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8GB RAM and the latest Intel Core 2 Duo chips, 2.66 and 2.8GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... no?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed PDF files. TIFFs too.
    This has to do with the preview, which contains much more information than you need for working.
    On the other hand, if you place EPSs and take care not to turn on Overprint Preview, you can get away with huge files.
    If you do turn on Overprint Preview, your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illustrator) the other day on much larger files than you mention - displays for whole walls - and had some considerable trouble until I reverted to the old EPS format. They say it's dying, but it ain't dead yet.

  • Best practices for complex recipe-based system?

    Hi Folks,
    I'm at about the intermediate level (working on my CLD), and tasked with revamping a tightly-developed control system (which I'm intimately familiar with) into more of a configurable "recipe"-based system. Basically, the current front-end control software does a lot of the work for the end user - it pre-defines flows, assumes certain electrical/plumbing configurations, etc. This is fine for the production floor; however, the R&D guys would like something a bit more configurable.
    This system comprises several flow controllers, mostly controlled/monitored via analog I/O (Compact FieldPoint). There are some static analog input channels devoted to temp, humidity, etc. There is also the possibility of 1-2 external RS232 metering devices.
    Anyway, I'm trying to work out the foundation for the UI. In terms of architecture, I think a queued state machine is my best bet due to the number of parallel processes occurring at once (analog acquisition, multiple serial comms, TDMS, UI, etc.). Basically I'd like the user to be able to add/remove/modify "steps". For instance, "Set Flow: controller IDx, 20cfm", or "Time Delay, Static: 10:00", "Time Delay, Pseudo-Static, based on X".
    I've worked out a configuration UI (utilizing the built-in NI configuration storage VIs) to associate the analog channels with external devices (i.e. Aout1="Controller1 SP", Ain1="Controller1 FB"). Later I'll populate a ring control, for instance for the "add SetFlow step", to list all of the analog OUTs for selection.
    So I guess what I'm looking for is advice on passing all this info around without having to re-hash it all the time to present to the user. Keeping it in an enum/ring allows for easy user viewing, changing, and block diagram readability (vs. string constants, which are error prone) - is this something that "flatten to string" would be helpful for (something I have no experience using)?
    What tips can you provide for moderate-complexity HMI control systems developed strictly in LabVIEW? We currently don't have DSC, and I'm a bit closed-minded about using it for this (but perhaps you can convince me otherwise?).
    Thanks for your time,
    Jamie
    Message Edited by 8bitbanger on 04-21-2010 08:10 AM
    v2009 devel. w/RT

    Cool, thanks for the screenshot!
    This request for more customization was anticipated, so I began working things in last year with other minor revs. The first was this "hardware configuration" utility. Right now I'm only using the MFC Config page for channel scaling/name info (the production version still relies on "static" channel associations to control devices). The enum "card/slot" selector does exactly as you mentioned - it controls a tab value, which loads other pages (with similar info).
    The second "generator" page is used to populate a list of generators available for the user to select, and works quite well - users can add somewhat custom generators to the list without having to specify "custom" every time (and I don't have to rebuild to add such a simple thing).
    You can see the "Flow Control" and "Monitor" channels that have not yet been implemented. :-)
    Lastly, the mockup is where I want to end up. I *wish* that LabVIEW were able to incorporate enum/ring drop-downs within a table cell (without the hacks that I've seen suggested).
    I intended to set up a similar format for the "steps" - an Action (or noun, as you say), Target (i.e. file path, device name, etc.), Value (setpoint, other pertinent data), etc. Do you pass this info around as a cluster in your VI and then simply parse it out to the UI in the steps listing? My hurdle is how to elegantly relate, say, a CSV file back to the enums without a lot of hard-coded (constant) strings.
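    For what it's worth, here is the kind of single-point mapping I'm picturing, sketched in Java rather than G code (the names are invented; in LabVIEW this would be one lookup VI fed by the same strings):

        import java.util.Locale;

        public class StepActions {
            // Invented enum mirroring the ring control in the UI.
            enum Action { SET_FLOW, TIME_DELAY_STATIC, TIME_DELAY_PSEUDO_STATIC }

            // One place that turns CSV text back into the typed value,
            // instead of scattering string constants around the code.
            static Action parse(String token) {
                return Action.valueOf(
                        token.trim().toUpperCase(Locale.ROOT).replace(' ', '_'));
            }

            public static void main(String[] args) {
                // A CSV row like "Set Flow,Controller1,20" round-trips cleanly:
                System.out.println(parse("Set Flow")); // prints SET_FLOW
            }
        }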
    Cheers,
    Jamie
    ::edit:: *Finally* found the button to insert images... ::edit::
    Message Edited by 8bitbanger on 04-21-2010 10:30 AM
    v2009 devel. w/RT
    Attachments:
    config_UI.JPG ‏52 KB
    generators.JPG ‏39 KB
    mock-up.JPG ‏33 KB

  • Best practice for upgrading an old system?

    My Arch Linux installation was last upgraded over three years ago. Today, a naive pacman -Syu resulted in a number of file conflict errors and wasn't carried out.
    I then checked the list of announcements since 2011 and identified a few that included the string "manual intervention required". I believe that it was the update of the filesystem package that didn't work, again due to conflicts, probably related to the move from /lib to /usr/lib around that time.
    My attempt to update glibc resulted in misconfigured libraries, which took a while to sort out. While I can run commands again, I doubt that my system is in a very healthy state now.
    What should I do, and what should I have done, to update my Arch Linux installation, untouched for 3.5 years?
    Last edited by berndbausch (2014-08-31 04:14:50)

    SoleSoul wrote: If 'pacman -Syu' works now, what makes you ask this question? Is anything still broken?
    Well, I asked the question because nothing worked after following a few of those "manual intervention required" notes. More precisely, the result of the last pacman run was that literally no command worked. It turned out that the system couldn't find libraries anymore, in particular the loader ld-linux.so. It took me a while to figure this out and to patch the system up enough to have it limp along. Good learning, by the way.
    After that, and the suggestion in this forum that a reinstall was the best solution anyway, I did just that. Since my only applications were Samba and the acpi daemon, that was not too bad. Unfortunately it's not Arch Linux anymore, but CentOS, which I am simply more familiar with.

  • Best practices in structuring a large Flash project?

    I'm building an educational site in Flash. A student works through a series of activities, either watching an animated video or answering a question. You can get a good idea of the basic functionality with this mockup: http://imgur.com/Mi4JyHN.
    After a student logs in, the server responds with their current activity and progress. The activity displays, and all student interaction is sent to the server to record (time spent on questions, buttons clicked, correct and incorrect answers, etc). When the student is done with an activity, the server is notified and responds with the next activity to present.
    I'm fairly new to flash and would love to hear how people with experience would structure this project. I can think of the following possibilities.
    One enormous SWF. All audio files and movie clips are embedded in the SWF and swapped out as necessary. This is not a reasonable option because the size of the resulting SWF would be huge.
    Exactly one SWF for each activity. The control buttons and progress bars are obviously shared between activities, and it seems like a lot of duplication to have them in each compiled SWF. Also, if a button is changed, this requires re-compiling everything, right?
    One main SWF that loads others. The main SWF contains the buttons and progress bars and fetches external SWFs from the server to replace the stage area. I don’t know enough Flash to predict how this will go.
    Part JavaScript, part Flash. The buttons and progress bars are done in HTML + JavaScript. The page fetches external SWFs from the server to replace the stage area. This is the current system, and the problem is the ugliness/difficulty of managing the communication between JS and AS3.
    HTML5. While I would love for this to be a possibility, I don’t feel like it is. Animation is still way easier in Flash and we are still targeting some fairly old browsers. The best part about Flash is the consistency in experience.
    Extra questions:
    Which options leave us open to publishing to mobile using Adobe AIR?
    Which options are best for automated testing / accessibility / version control / general code layout?
    Thanks so much for any advice!

    For a Flash-based design I would go with option 3. The general controls and objects common to each activity would be in the main file. Whether or not the main file would be responsible for sending activity data to the server could depend on there being determinable similarities between data collections... otherwise it might fall to the individual activity SWFs to interact with the database. There are too many unknowns in this regard for me to offer much.
    If this were going to be a loaded application, such as an AIR app, then option 1 might be more reasonable, since you only have to install once for everything to be available.
    When it comes to mobile, you are likely to hit a snag if you rely on AIR/Flash for a main-plus-activity-SWFs approach... mainly in the Apple realm. Unless I've forgotten a lot of what I was involved with some time ago, a loaded SWF cannot contain any code when it comes to iStuff. So you end up having to make the main file contain all of the code to deal with each activity's processing. Every interface/interactive element can only exist by name, and the main file has to target them and assign listeners, processing, and so on... a mess in my view. That's why having the one huge AIR file is possibly a tad more reasonable.
    I have nothing to offer on the HTML5 end of things; I have not yet journeyed down that path. Since HTML5 is basically wingless without JavaScript and CSS, it might come to pass that the current system (option 4) is the way to go.

  • Best practices code structure for large projects?

    Hi, I come from the Java world, where organizing your code is handled conveniently through packages. Is there an equivalent in Xcode/Objective-C? I'd rather not lump all my observers, entities, controllers, etc. in one place under "Classes"... or maybe it doesn't matter...
    If anyone could point me to a document outlining recommended guidelines I'd appreciate it.
    Thanks! Jon

    If you have a small project, you can set up Groups in Xcode to logically organize your files. Those Groups do not necessarily have to correspond to any directory structure. I have all my source files in one directory but organize them into Groups in Xcode.
    If you have a larger project, you can do the same thing, but with code organized into actual directories. Groups can be defined to be relative to a particular directory.
    If you really do have a large project, you should organize things the same way as in Java. Your "packages" would just be libraries - either static or dynamic.
    As far as official guidelines go, there really aren't any. It would be best to stick to the Cocoa Model-View-Controller architecture if that is the type of application you are working on. For other software, you can do it however you want, including following something like Sun's guidelines if you want.
