Best Practice for Organizing Enterprise Models

We need to migrate our Oracle Designer models (almost 10 years in the works) into OSDM to stay current with design tools. We do not have the option of staying with Designer.
I need to know of any best practices or other documents that describe a way to organize an enterprise model in OSDM. I am coming from years of working with Designer and want to translate the multiple Application Folders in Designer into a similar organization in OSDM. We have 3 COTS packages with thousands of tables each, plus many custom applications that use tables from multiple schemas and databases. Our developers like to see all the tables for a single custom application in its own diagram, no matter where they come from, and the DBAs don't want to wade through several thousand tables to find the handful we need, nor duplicate table definitions in multiple models. In Designer we have been doing that with Application Folders.
Another area of interest is the deployment of database objects to multiple databases where the privileges vary from development to production. In the old Designer world this is done via implementations on the DB Admin tab. Can this be done in OSDM?

Hi Marcus,
There are no Application Folders in Data Modeler. You can use subviews to define your subject areas; a subview is created for each application (folder) during import from a Designer repository.
Philip

Similar Messages

  • Best Practice for using multiple models

    Hi Buddies,
    Can you tell me the best practices for using multiple models in a single WD application?
    I am using 3 RFCs in a single application for my function. Each time, I import that RFC model under WD ---> Models and do the model binding separately to the Component Controller. Is this the right way to implement multiple models in a single application?

    It very much depends on your design, but one RFC per model is definitely a no-no.
    Refer to this document to understand how you should use models in the most efficient way.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
    Thanks
    Prashant

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to display these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings the tables into a data model, created metatopics for the main tables, and integrated the descriptions via lookups in the metatopics.
    This seems to be OK; however, anytime I try to add items to a query, as soon as I add columns from different tables, the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously, this isn't acceptable to give to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model so they can just click and add the fields they want and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who may bump into this issue.

  • Best Practice for BusinessObjects Enterprise 3.1 environment set-up

    Hi there,
    I am wondering if anyone can help me answer this question.
    Is it better to install BusinessObjects Enterprise 3.1 on its own server, rather than on a server that also hosts other applications, such as Informatica or a DB2 database?
    Where I am working at the moment, BOE, including the WAS and FRS, are all set up on the same server as DB2 and Informatica.
    They are talking about using a product called AIX Workload Manager to control how many resources, such as CPU usage, should be allocated to each application. I'm not sure that BOE would perform optimally in this sort of environment. I was always of the belief it should be set up on completely separate infrastructure because the application can become resource-intensive.
    If anyone can provide any insight into this, I would greatly appreciate it.
    Many thanks,
    Ainsley
    Edited by: Ainsley O'Sullivan on Jan 7, 2011 3:29 AM

    Every application will probably run faster if it has its own hardware to run on.
    If you can afford separate H/W for the BOE server then I would say go for it. The system is easier to maintain and to troubleshoot.
    BOE CAN run on a system where other software is running. Keep in mind, though, that in the end everything depends on the usage pattern and the expected load. It can be really complicated to make a reliable prediction on that if you have many heavy-duty (e.g. DB2) installations on the same H/W.
    SAP consulting offers sizing services for BOE installations.
    Regards,
    Stratos

  • Best practices for organizing a sudden batch of photos?

    So, I just got ahold of hundreds (1000+) of slides from my early childhood, years 0-10. Pictures I had never seen before. (Plus 8mm and 16mm movies, reel to reel audio, and some prints.)
    I'm going to get a slide scanner and scan some, but not all, of the photos in (I don't care about the vista shots). Have any of you done such an all-at-once type of project? What approach did you use to describe the photos?
    How do you ID the people? Make keywords for people, or put names in the comments?
    How do you deal with group photos? It would be nice if the names could be attached to the faces, so a mouseover revealed the name.
    Did you make a note somewhere (e.g., comments) of what box the images came from, when they were developed (since the photo date may not be known), and which image number in the series it was? (So you can find the original again if necessary.)
    Do you have any tips to keep this under control?
    I'm both excited by this and overwhelmed. I'd rather do it once and do it right. This is part of a bigger project to piece together this period of my life, so the more info I can track, the better.

    switchbacker:
    You might consider one of the new flatbed scanners that have the capability of scanning multiple slides at one time. My reasoning for this is that after you're finished with the project you'll end up with a usable piece of equipment instead of a nice doorstop if you get a dedicated slide scanner. That's what I have, a nice doorstop.
    I recently got a Canon 8600F scanner that will do 4 slides at a time and put each in its own file after the scan is completed. I'm sure other scanner manufacturers have similar software that will separate multiple images in one scan into their own files. My standalone slide scanner was over 5 times as slow as the Canon scanner on a time-per-slide basis.
    A bonus with the particular Canon scanner that I got was that Photoshop Elements 4 was bundled with it.
    As far as organizing the resulting image files, I use keywords to identify the elements within a photo, i.e., people, places and things. I name the files, before importing into iPhoto, with the date of the photo using the international date format: YYYY-MM-DD-001.jpg, -002.jpg, etc. This gives me excellent chronological sorting and searching capability. There are 3rd-party applications like R-Name that will batch-rename a folder of photos sequentially. Doing this before importing will ensure that the original file is named as you want it instead of something like Scan-01.jpg.
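    For anyone without a renaming utility handy, here is a minimal sketch of that renaming step in Java. It assumes the scans live in a folder named "scans" and that the file creation time is a usable stand-in for the photo date (scanned slides won't carry an original capture date anyway); both are my assumptions for illustration, not part of the advice above.
    import java.io.*;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;
    import java.text.SimpleDateFormat;
    import java.util.*;

    // Rename every .jpg in the folder to YYYY-MM-DD-NNN.jpg, ordered by file
    // creation time, so the files sort chronologically before import.
    public class BatchRename {
        public static void main(String[] args) throws IOException {
            Path dir = Paths.get("scans");                 // hypothetical folder name
            List<Path> files = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, "*.jpg")) {
                ds.forEach(files::add);
            }
            files.sort(Comparator.comparingLong(BatchRename::creationTime));
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
            int seq = 1;                                   // one running sequence keeps names unique
            for (Path p : files) {
                String name = String.format("%s-%03d.jpg",
                        fmt.format(new Date(creationTime(p))), seq++);
                Files.move(p, dir.resolve(name));          // rename in place
            }
        }

        static long creationTime(Path p) {
            try {
                return Files.readAttributes(p, BasicFileAttributes.class)
                            .creationTime().toMillis();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }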
    I did the same thing you did: scanned about 5000 slides and 5000 photos to document the family history. I created iDVD slideshows for each year, breaking the slideshows down by month. My brother had our parents' 8 mm movies digitized, and I digitized 100 hours of family VHS videos to add to the DVDs also. After burning the iDVD project to disk I would also burn the source files via the Finder to DVD disks and distribute a copy to each of our kids. That gave me multiple backups and gave them a way to preview the family history. My project spans 100+ years, from 1906 to 2006 (yes, I'm a couple of years behind, because with digital cameras in each of the kids' hands I have so many more photos per year to work with). The recent years are taking 2-3 DVDs to cover each year. With iDVD it's so easy to create a really professional-looking DVD.
    You will have fun!
    TIP: For insurance against the iPhoto database corruption that many users have experienced, I recommend making a backup copy of the Library6.iPhoto database file and keeping it current. If problems crop up where iPhoto suddenly can't see any photos or thinks there are no photos in the library, replacing the working Library6.iPhoto file with the backup will often get the library back. By keeping it current I mean backing up after each import and/or any serious editing or work on books, slideshows, calendars, cards, etc. That ensures that if a problem pops up and you do need to replace the database file, you'll retain all those efforts. It doesn't take long to make the backup and it's good insurance.
    I've created an Automator workflow application (requires Tiger), iPhoto dB File Backup, that will copy the selected Library6.iPhoto file from your iPhoto Library folder to the Pictures folder, replacing any previous version of it. It's compatible with iPhoto 08 libraries and Leopard. iPhoto does not have to be closed to run the application, just idle. You can download it at Toad's Cellar. Be sure to read the Read Me pdf file.
    Don't despair, it's doable.

  • Best practices for organizing photos in iphoto09?

    OK, I just rebuilt my iPhoto library. Here is what I did to get back to where I am now:
    1) Took my old iPhoto library folder and searched for files larger than 100k. Moved them to a new folder. Deleted the old iPhoto library from my Pictures folder.
    2) Renamed the entire contents of this new folder to all uppercase. Compressed and backed this folder up to a new safe location.
    3) Used another program to reset the file date to the date the picture was taken (maybe iPhoto does this too, I don't know enough about iPhoto yet).
    After the above steps, I created a new iPhoto library and imported all of my pictures. I now have one single event titled 'Jan 16, 2001'.
    I was previously using iPhoto 05, and was used to the year/month/day/latest type of organization, but am willing to learn the new school of thought of how 09 is supposed to work. I'm not sure how the photos are supposed to be broken out so I can view 'family', 'house', 'vacations', 'trucks', 'kitties', etc.
    Any advice would be greatly appreciated. Thanks!

    Events are time-based, not subject-based. Although it is possible, with a lot of work, to manually maintain a set of subject-based events, you are fighting the system. Albums and folders (which hold albums) are a much easier and better way to do subject-based organization, or keywords and smart albums, or a combination of both.
    It sounds like your other program did not make the correct EXIF entries for dates (select a photo and use the Show Extended Info option under the Photos menu to see the EXIF dates); it appears they were all set to 1/16/2001.
    Do you still have the iPhoto 5 library?
    LN
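    If you'd rather verify those EXIF dates outside of iPhoto, here is a small sketch using the open-source metadata-extractor library; the library choice and the command-line usage are my own assumptions, not something the reply above prescribes.
    import java.io.File;
    import java.util.Date;
    import com.drew.imaging.ImageMetadataReader;
    import com.drew.metadata.Metadata;
    import com.drew.metadata.exif.ExifSubIFDDirectory;

    // Print a photo's EXIF "Date/Time Original" tag. A null result means the
    // other program never wrote the tag, which would explain the single
    // 'Jan 16, 2001' event after import.
    public class ExifDateCheck {
        public static void main(String[] args) throws Exception {
            File photo = new File(args[0]);    // e.g. java ExifDateCheck IMG_0001.jpg
            Metadata metadata = ImageMetadataReader.readMetadata(photo);
            ExifSubIFDDirectory exif =
                    metadata.getFirstDirectoryOfType(ExifSubIFDDirectory.class);
            Date taken = (exif == null) ? null : exif.getDateOriginal();
            System.out.println(photo.getName() + " -> " + taken);
        }
    }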

  • Best practices for organizing a large "project" with multiple programmers

    Should I put everyone in one application? Should everyone get their own application? My understanding (limited) is that the level of granularity for CVS is the application. Sounds to me like multiple developers checking the same application in and out would be a disaster. If every developer has their own application, what problems will I have deploying? How do I handle my configuration (navigation) diagram? Any thoughts would be appreciated.

    Have a read through these:
    http://download.oracle.com/docs/html/B25947_01/team_productivity.htm#BABBEFFF
    http://brendenanstey.blogspot.com/2006/11/tips-for-using-cvs-with-jdeveloper.html

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I referred to a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in:
    Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
    <Config type="common" key="someKey">propertyValue_Stage</Config>
    </FrameworkConfig>
    3] Let's say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |
    |---Dev
    |      |--Dev_Config_file.xml
    |
    |---Stage
    |      |--Stage_Config_file.xml
    |
    |---Prod
    |      |--Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value specifying the current server type (Dev/Stage/Prod) on which the OSB message flow is running; say, some construct that acts as a global configuration and is accessible inside the OSB message flow. If the global variable's value is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This thread, Re: OSB: What is best practice for reading configuration information,
    suggests designing a web application that serves the XML file over HTTP, and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM
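    For point 4], once the right file's contents are in a variable, the key lookup itself is a plain XPath over the FrameworkConfig structure. A standalone Java sketch of just that lookup (reading the file straight from disk is a simplification for illustration; inside OSB the XML would arrive in a message variable):
    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    // Fetch a property value by type and key from the FrameworkConfig XML
    // shown in the question.
    public class ConfigLookup {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("Dev/Dev_Config_file.xml")); // per-environment file
            XPath xpath = XPathFactory.newInstance().newXPath();
            String value = xpath.evaluate(
                    "/FrameworkConfig/Config[@type='common' and @key='someKey']", doc);
            System.out.println("someKey = " + value);     // e.g. propertyValue_Dev
        }
    }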

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. http://osb_host/service/prod/service1 ==> Prod and http://osb_host/service/stage/service1 ==> Stage; then $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
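    The "appropriate XPath functions" step is just tokenizing the URI and picking a segment. Here is a hedged Java equivalent of that extraction, assuming the environment name is always the second path segment as in the example URIs above:
    // Pull the environment name ("prod", "stage", ...) out of an inbound URI
    // such as /service/stage/service1, mirroring what tokenizing
    // $inbound/ctx:transport/ctx:uri with XPath would do inside the message flow.
    public class EnvFromUri {
        static String environment(String uri) {
            String[] parts = uri.split("/");  // "", "service", "stage", "service1"
            return parts.length > 2 ? parts[2] : "unknown";
        }

        public static void main(String[] args) {
            System.out.println(environment("/service/stage/service1")); // stage
        }
    }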

  • Business Package for SAP Best Practices for Enterprise Portal

    Hi,
    We are implementing EP for ECC 5.0 and we have to web-enable some FI transactions like FB50/FB60, etc. We want to use a standard business package for this. The only business package that caters to this requirement is "Business Package for SAP Best Practices for Enterprise Portal 60.1". When I checked the data source for this business package, the specification is for "SAP R/3 4.6B and above". Does this work for ECC 5.0 as well? In general, do the R/3-specific business packages work for ECC versions? Please let me know. Thanks for the help. I promise rewards.
    Regards
    Hari

    Hari,
    When you download the business package via the Portal Content Portfolio,
    each package has additional information on the release.
    For example, for the one you are inquiring about, Business Package for SAP Best Practices for Enterprise Portal, the additional release information is provided at the following links:
    https://www2.iviewstudio.com/sdn/info/index.cfm?action=faqs&part=#QSR03
    https://www2.iviewstudio.com/sdn/detail_view/index.cfm?action=package_information&CatalogSet=SAP%20Content&ItemID=17212&CFID=7544845&CFTOKEN=99283970
    Regards,
    James

  • Best practice for sharing data with modal window

    Hi team,
    what would be the best practice for sharing data with a modal
    window? I use a modal window to display record details from a
    record list, but I am not quite sure how to access the data from
    the components in the main application in the modal window.
    Any hints would be welcome
    Best
    Frank

    Pass a reference to the parent into the modal popup. Then you
    can reference anything in the parent scope.
    I haven't done this in 2.0 yet so I can't give you code. I'll
    post if I do.
    Oh, also, you can reference the parent using parentDocument.
    So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups. Back up your computer files, including your Photoshop work, ideally to external media. Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive. The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, ready for if/when you have a failure or loss of data. And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. Assign, in my Org Model at the user level, all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Greetings from Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic
    - With CCM 1.0, stick to the procurement catalogs, or implement BADIs to assign items to the views (I experienced it; it works, but is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If anyone has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance. This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the component model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as “sources” in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) provides versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find lot of support in SDN for the above concepts with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further, see Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the authors mention that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in the session is not a big memory concern. (Can some expert confirm this is true?) Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List object for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but you need to be very careful that you don't bind this XXX property too many times. In the case of querying in getXXX, setXXX is also tricky, as you don't have a member to set. You usually don't want to persist the result set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like a save(actionEvent)).
    I would glad to see your thought on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts, because JSF seems not to be very consistent in updating model values. Suppose you have a page showing a parent-children list of records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent as request scope, the setChildren method is never called when I submit the form. Not sure if this is just ADF or a JSF problem. But if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input components' value bindings. Someone please verify this is true.
    In many cases, I found a request bean very hard to work with if there are lots of updates. (I have had lots of trouble updating the binding value for rendered attributes.)
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------

    I think it should be either Option 2 or Option 3.
    Option 2 would be necessary if the bean data depends on some request parameters.
    (Example: Getting customer bean for a particular customer id)
    Otherwise Option 3 seems the reasonable approach.
    But, I am also pondering on this issue. The above are just my initial thoughts.
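    To make the "query in getXXX" trade-off above concrete, here is a minimal sketch of a session-scoped bean that caches its list but re-queries after an explicit invalidation. Order and the query method are hypothetical placeholders, and the session scope itself would be declared in faces-config.xml:
    import java.util.ArrayList;
    import java.util.List;

    // Session-scoped backing bean: the getter is cheap to call repeatedly
    // (JSF may invoke it several times per request), yet data changed on
    // another page can be picked up by calling invalidate() from an action.
    public class OrderListBean {
        public static class Order { /* fields omitted */ }

        private List<Order> orders;       // cached for the session
        private boolean stale = true;     // true forces a re-query on next get

        public List<Order> getOrders() {
            if (stale || orders == null) {
                orders = queryDatabase();
                stale = false;
            }
            return orders;
        }

        // Call from a save/navigation action listener so the next render
        // re-queries instead of showing the stale session copy.
        public void invalidate() {
            stale = true;
        }

        private List<Order> queryDatabase() {
            return new ArrayList<>();     // placeholder for the real query
        }
    }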

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
    versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
    each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice. Not to mention doubling storage
    space. MDT is designed to have multiple task sequences, why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper vbscript that handles OS detection. If there are separate
    binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell
    version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the vbscript wrapper and it's a standard process, I believe ours is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all TS'es are
    updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact, I separate everything (except
    drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the
    front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things
    get complicated. Tossing everything into one giant bucket will have you pulling your hair out.
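    The vbscript wrapper itself isn't shown above, so purely as an illustration of the dispatch idea, here is the same OS-detection wrapper sketched in Java; the folder layout, installer name, and /quiet switch are assumptions, not details from the answer:
    import java.io.File;

    // Pick the x86/ or x64/ subfolder based on the Windows architecture and
    // launch the matching installer, passing its exit code back to the caller.
    public class InstallWrapper {
        public static void main(String[] args) throws Exception {
            // A 32-bit process on 64-bit Windows sees PROCESSOR_ARCHITEW6432,
            // so check it before falling back to PROCESSOR_ARCHITECTURE.
            String arch = System.getenv("PROCESSOR_ARCHITEW6432");
            if (arch == null) arch = System.getenv("PROCESSOR_ARCHITECTURE");
            String folder = (arch != null && arch.contains("64")) ? "x64" : "x86";

            File setup = new File(folder, "setup.exe");   // hypothetical binary
            int exit = new ProcessBuilder(setup.getPath(), "/quiet")
                    .inheritIO()                          // show installer output
                    .start()
                    .waitFor();
            System.exit(exit);                            // deployment tool reads the exit code
        }
    }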
