Mass Rollout - Best practices

We are going to do a rollout of around 800 iPads. Does anybody know the best way to get them all configured? How do we create 800 Apple IDs at once, etc.? How do we get all of the aircards activated? We have been asking Apple directly, but their answers are VERY vague. Thanks.

Hey gyrhead - you seem super knowledgeable and you don't have to answer this, but here is our scenario:
Let's say you receive these from the factory, still shrink-wrapped. Starting with unboxing, how would you go about doing this?
Goal: Deploy 800 iPads
10 techs
4 days to clone + configure these iPads
No data to migrate
10 or fewer free apps
3 or fewer apps from the VPP store
Set up a few URLs on the home screen
Set up corporate email
iPad naming convention
Sales force has turnover
Keep the ability for us to easily manage the account if a user leaves
We were thinking generic iTunes accounts ([email protected], SALES002, etc.)
How would they clone them (or would they)?
PowerSync carts?
Apple Configurator?
Apple iPhone Configuration Utility?
Register the devices with an MDM (probably MobileIron or AirWatch)?
Again, you don't have to take the time, but thanks for any time you can spare!!
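
To make the naming-convention idea concrete, here is a small Python sketch that pre-generates the 800 device names and generic accounts as a CSV checklist for the techs. The SALES001 pattern is from our list above; the mail domain and file name are placeholders only:

import csv

DOMAIN = "example.com"   # placeholder -- substitute the real mail domain
DEVICE_COUNT = 800

with open("ipad_accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["device_name", "account"])
    for n in range(1, DEVICE_COUNT + 1):
        name = f"SALES{n:03d}"   # SALES001, SALES002, ... per the convention
        writer.writerow([name, f"{name.lower()}@{DOMAIN}"])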

Similar Messages

  • Spark DropDownList rollover/rollout best practice?

    I seem to be having an issue here. I am trying to get a Spark DropDownList to open on mouseOver/rollOver and close on mouseOut/rollOut, but not while the mouse is over the popup; once the mouse rolls out of the popup, it should also close. The issue is that when you use rollOver to open the DropDownList plus rollOut to close it, moving the mouse down toward the popup closes it, because the list hears the rollOut.
    Does this code go in the skin? Do I need to extend DropDownList? I'm not sure what the best practice is with the addition of Spark and skins.
    Thx, J

    Probably not best practice, but I put a rect in the skin and pushed it up (top = -20) so my rollOut handler sees it.

  • Rollout Best practice template

    Hi All,
    We are doing a rollout for one of our new company codes. If anyone has experience with these rollout projects, and documentation for them, could you please share it with me?
    I will wait for a reply....
    Regards
    siri

    Hi Shirisha
    A rollout project from the FICO point of view is not that much work, because:
    - you will have a common chart of accounts, which is already created
    - you need to extend the G/L accounts to the company code level
    - other configs you need to extend to your company code.
    It is not a very difficult task.
    Thanks
    Ashok
    Assign points for useful answer

  • Best Practice for Mass Deleting Transactions

    Hi Gurus
    Can you please guide me on this - we need to mass delete leads from the system. I can use the CRM_ORDER_DELETE function module, but I want to know if there are any best practices to follow, or anything else that I should consider, before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to a large volume of redundant data we have some performance issues. For example, when searching for leads and using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is whether using CRM_ORDER_DELETE would clear all the tables for those deleted transactions, and whether there are any best practices to follow.
    Thanks in Advance.
    Regards.
    -MP
    Edited by: Mohanpreet Singh on Apr 15, 2010 5:18 PM

    Hi,
    Please go through the AppModel application which is available at: http://developers.sun.com/prodtech/javatools/jscreator/reference/codesamples/sampleapps.html
    The OnePage Table Based example shows exactly how to delete multiple rows from a datatable...
    Hope this helps.
    Thanks,
    RK.
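
    If the deletion ever needs to be scripted from outside the CRM system, a minimal sketch using the pyrfc connector follows. Only the function module name comes from the question above; the connection details, the sample GUID and the IT_GUID parameter name are assumptions - verify the actual CRM_ORDER_DELETE signature in SE37 and test in a non-production client first.

    from pyrfc import Connection

    # Connection details are placeholders -- substitute your own system data.
    conn = Connection(ashost="crmhost", sysnr="00", client="100",
                      user="RFC_USER", passwd="secret")

    # GUIDs of the leads to delete, e.g. selected from CRMD_ORDERADM_H first.
    lead_guids = ["0123456789ABCDEF0123456789ABCDEF"]

    # NOTE: IT_GUID is an assumed parameter name -- check the actual
    # CRM_ORDER_DELETE interface in SE37 before running anything like this.
    result = conn.call("CRM_ORDER_DELETE", IT_GUID=lead_guids)
    print(result)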

  • Best Practices to Mass Delete Leads

    Hi Gurus
    Can you please guide me on this - we need to mass delete leads from the system. I can use the CRM_ORDER_DELETE function module, but I want to know if there are any best practices to follow, or anything else that I should consider, before deleting these transactions from the system.
    We have our archiving policy under discussion, which may take some time, but due to a large volume of redundant data we have some performance issues. For example, when searching for leads and using ACE, the system goes through all the lead data.
    That is the reason we are planning to delete those old records. My concern is whether using CRM_ORDER_DELETE would clear all the tables for those deleted transactions, and whether there are any best practices to follow.
    Thanks in Advance.
    Regards.
    -MP

    Hello,
    as the root is a single-label domain, you can only get rid of it by migrating to a new forest. Therefore you should build a lab first and test the steps. As a tool you can use ADMT.
    http://blogs.msmvps.com/mweber/2010/03/25/migrating-active-directory-to-a-new-forest/
    Also, you might rethink your design and whether an empty root is really needed; there is no technical requirement for one, and it costs only additional hardware and licenses.
    Keep in mind that the new forest MUST use different domain/NetBIOS names, otherwise you cannot create the trust required for the migration steps.
    You can NOT switch a sub domain to the root and vice versa.
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how to implement this.
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog would contain only its product category.
    2. Assign, in my Org Model at user level, all the "catalogs" that the user should access.
    Do you have any ideas for improving this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); this has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit their number, for example to strategic and non-strategic
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), and keep the number of views small
    Good luck.
    Vadim

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing its base of centers. This growth has turned a workload that used to be manageable with but two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multiple-column, list of these stores very easily, and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them (a scripted version of this workaround is sketched after this list).
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say, I cannot tell Data Merge to start at Cell 1 in one column, and in another column select, say, Cell 42 as the starting point.
    3) Data Merge only accepts data organized in a very specific, and generally inflexible, pattern.
    These are just a few of the limitations.
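
    For reference, the concatenation workaround from point 1 can be scripted. A rough Python sketch (the column names and the pipe delimiter are placeholders) that collapses a one-store-per-row export into a one-center-per-row CSV for Data Merge:

    import csv
    from collections import defaultdict

    # Group stores by center; the input layout (center, store columns) is assumed.
    stores_by_center = defaultdict(list)
    with open("stores.csv", newline="") as f:
        for row in csv.DictReader(f):
            stores_by_center[row["center"]].append(row["store"])

    # One row per center, with the stores joined into a single delimited cell.
    # After merging, find-and-replace the "|" delimiter with forced line
    # breaks in InDesign, as described above.
    with open("datamerge.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["center", "stores"])
        for center, stores in stores_by_center.items():
            writer.writerow([center, "|".join(stores)])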
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker suggested we move toward using XML as a repository/delivery system that helps us quickly get data from our SQL database into a usable form in InDesign.
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go, rather than the program generating a set of merged pages, like Data Merge does, with very little effort on my part. Perhaps setting up a master page would allow for easy drag-and-drop fields for my XML data?
    I'm not an idiot, I'm simply green with this - and it's kind of scary, because I genuinely want us to proceed with the most flexible, reliable, trainable and sustainable solution. A tall order, I know. Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
         <BRAND BrandID="xxxx">
              <BrandName>[Brand Name]</BrandName>
              <Description>[Description]</Description>
              <WebMoniker>[WebMoniker]</WebMoniker>
              <CATEGORIES>
                   <CATEGORY xmlns="URL" WebMoniker="category_type"/>
              </CATEGORIES>
              <STORES>
                   <STORE StoreID="ID#" CenterID="ID#"/>
              </STORES>
         </BRAND>
    </BRANDS>
    I don't think this is currently usable, because if I wanted to create a list of stores for a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
              <Center_name>[Center_name]</Center_name>
              <Center_location>[Center_location]</Center_location>
              <CATEGORIES>
                   <CATEGORY>
                        <Category_Type>[Category_Type]</Category_Type>
                        <BRANDS>
                             <BRAND>
                                  <Brand_name>[Brand_name]</Brand_name>
                             </BRAND>
                        </BRANDS>
                   </CATEGORY>
              </CATEGORIES>
         </CENTER>
    </CENTERS>
    My thought is that if I have the <CENTER> tag, then I can simply drag it into a frame and it will auto-populate all of the brands, by category (as organized in the XML), for that center into the frame.
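
    Just to make the idea concrete, here is a rough Python sketch that pivots the current brand-centric feed into the proposed center-centric layout. Tag names follow the two samples above; the file names are placeholders, and it skips the category level for brevity:

    import xml.etree.ElementTree as ET
    from collections import defaultdict

    # Collect brand names per CenterID from the current brand-centric feed.
    brands_root = ET.parse("brands.xml").getroot()
    centers = defaultdict(list)
    for brand in brands_root.iter("BRAND"):
        brand_name = brand.findtext("BrandName", default="")
        for store in brand.iter("STORE"):
            centers[store.get("CenterID")].append(brand_name)

    # Emit the proposed <CENTERS> layout with brand names as text elements,
    # so the data can be tagged and flowed directly in InDesign.
    out = ET.Element("CENTERS")
    for center_id, brand_names in sorted(centers.items()):
        center = ET.SubElement(out, "CENTER", CenterID=center_id)
        brands_el = ET.SubElement(center, "BRANDS")
        for name in sorted(set(brand_names)):
            ET.SubElement(brands_el, "Brand_name").text = name
    ET.ElementTree(out).write("centers.xml")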
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever-changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not - mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data Merge and XML usage to accomplish these sorts of tasks. What are your best practices, and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator/designer of this method for our team, but for my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations - I cope well with new learning and am self-taught in many tools and programs.
    Flow-based XML structures:
    It seems that as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages. Basically, you create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately, and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming everything is cascaded correctly, using auto-flow will cause new pages to be generated automatically with the tags correctly placed, in a similar fashion to Data Merge - but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file. In order to use this method, the data must be organized in the same order in which it will be displayed. For example, if I had a Lastname field and a Firstname field, in that order, I could not call the Firstname first without faulting the document using the flow method. I could, however, still drag and drop content from each tag into the frame, and it would populate correctly regardless of the order of appearance in the XML.
    Honestly, either method would be fantastic for our current set of projects; however, the flow method may be particularly useful in jobs that require more than 40 spreads, or simple layouts with huge amounts of data to be merged.

  • SQL Server 2012 Infrastructure Best Practice

    Hi,
    I would welcome some pointers (direct advice or pointers to good web sites) on setting up a hosted infrastructure for SQL Server 2012. I am limited to using VMs on a hosted site. I currently have a single 2012 instance with the DB engine, SSIS and SSAS on the same server.
    I currently RDP onto another server which holds the BI Tools (VS2012, SSMS, TFS etc), and from here I can create projects and connect to SQL Server.
    Up to now, I have been heavily restricted by the (shared tenancy) host environment due to security issues, and have had to use various local accounts on each server. I need to put forward a preferred environment that we can strive towards, which is relatively scalable and allows me to separate Dev/Test/Live operations and utilise Windows Authentication throughout.
    Any help in creating a straw man would be appreciated.
    Some of the things I have been thinking through are:
    1. Separate server for Live Database, and another server for Dev/Test databases
    2. Separate server for SSIS (for all 3 environments)
    3. Separate server for SSAS (not currently using cubes, but this is a future requirement. Perhaps this does not need a dedicated server?)
    4. Separate server for development (holding VS2012, TFS2012, SSMS etc). Is it worth having a local SQL Server DB on this machine? I was unsure where SQL Server Agent jobs are best run from, i.e. from the Live DB only, from another SQL Server instance, or utilising SQL Server Agent on all (Live, Test and Dev) SQL Server DB instances. Running from one place would allow me to have everything executable from one place, with centralised package reporting etc. I would also benefit from some licence cost reductions (Kingsway tools).
    5. Separate server to hold SSRS, Tableau Server and SharePoint?
    6. Separate Terminal Server or integrated onto Development Server?
    7. I need a server to hold file (import and extract) folders for use by SSIS packages, which will be accessible by different users.
    I know (and apologise that) I have given little info about the requirement. I have an opportunity to put forward my requirement for x months into the future, and there is a mass of info out there which is not distilled in a way I can utilise. It would be helpful to know what I should aim for in terms of separate servers for the different services and/or environments (Dev/Test/Live), and specifically best practice for where SQL Server Agent jobs should be run from, and perhaps a little info on how best to control deployment/change control. (Note my main interest is not in application development; it is in setting up packages to load/refresh data marts for reporting purposes.)
    Many thanks,
    Ken

    Hello,
    In all cases, consider that having a separate server may increase licensing or hosting costs.
    Please allow me to recommend Windows Azure for cloud services.
    Answers:
    1. This is always a best practice.
    2. Having SSIS on a separate server allows you to isolate import/export packages, but it may increase network traffic between servers. I don't know whether your provider charges for incoming or outgoing traffic.
    3. SSAS on a separate server is certainly a best practice too. It contributes to better performance and scalability.
    4. SQL Server Developer Edition costs only about $50. Are you talking about centralizing job scheduling on an on-premises computer rather than having jobs enabled on a cloud service? Consider PowerShell to automate tasks.
    5. If you will use Reporting Services in SharePoint integrated mode, you should install Reporting Services on the same server where SharePoint is located.
    6. SQL Server can coexist with Terminal Services, with the exception of clustered environments.
    7. SSIS packages may compete with users for access to files. Copying the files to a disk resource available to the SSIS server may be a better solution.
    A few more things to consider:
    Performance of the storage subsystem on the cloud service.
    How many cores? How much RAM?
    Creating a domain controller or using Active Directory services.
    These resources may be useful.
    http://www.iis.net/learn/web-hosting/configuring-servers-in-the-windows-web-platform/sql-2008-for-hosters
    http://azure.microsoft.com/blog/2013/02/14/choosing-between-sql-server-in-windows-azure-vm-windows-azure-sql-database/
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Best Practice on Knowledge Management, IS01 Problems and Solutions

    I've been playing with KM and am looking for insight from other users (will give points) using it for the ICWC.
    We have multiple product lines, with documents containing Q&As for each line. As I look at moving that into CRM via IS01, I am looking for any best practices or recommendations:
    1. Create a single problem and solution for every question.
    2. Create a single problem (listing all questions) for every product line and create multiple solutions that are linked to that problem (a solution for each question).
    3. Is LSMW a good tool for loading data in mass?
    The ICWC search brings back the 1st line of the problem & solution on the screen, so I try to limit the characters used to make it fit on the ICWC screen; a long 1st line on the problem doesn't let the agent see enough of the solution without opening the link.
    Thanks,
    Edited by: Glenn Michaels on Jun 14, 2008 9:52 PM

    Hello Glenn,
    If it helps, here's a scenario about the KB on my company's system.
    Our call center supervisors are the people who create problems and solutions in our KB. They maintain it but don't use the IS01 transaction. Instead they use People-Centric BSPs for problems and solutions (integrated into the IC WebClient with the help of the transaction launcher).
    Normally they prefer creating multiple solutions for one problem instead of the single problem - single solution method, because some questions may have multiple solutions. They could put all the solution text in one solution object, but for maintenance purposes we think it's better to create a separate solution object for every solution text, because if one solution becomes obsolete, all we have to do is unlink it instead of changing the text.
    We don't use LSMW. I don't have much experience with LSMW, but if you use it, be careful to respect the KB interval numbers for problems and solutions. We implemented an initial set of problems and solutions in our development system and moved them to the quality and productive systems with the precious help of SAP Notes '728295 - Transport the SDB customization between two CRM systems' and '728668 - Transport the content of the SDB between two CRM systems'.
    One cool idea for the KB is the auto-suggest feature. The idea is to integrate the links between problems and solutions with, for example, service ticket categorization, using the BSP Category Modeler; when an agent classifies a ticket, the suggested solutions/problems for the chosen classification appear at the top of the screen.
    I think that's all.
    Sorry for my poor English. Today I'm not feeling very inspired.
    Kind regards.
    Edited by: Bruno Garcia on Jun 17, 2008 9:51 PM (added note 728668)

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM Prod to ECC Dev, QA and Prod, or
    2. Connect IDM Prod to ECC Prod only, and connect IDM Dev to ECC Dev and QA.
    Please also specify the pros and cons of both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (prod and non-prod on separate instances).
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record, and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In Dev/Test/UAT we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needed to be separate from production. It was also ideal to use this second environment for Dev/Test/UAT, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, and we prefer to keep this out of a production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the Dev/Test/UAT instance because we were becoming reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • Best practice to keep on top of Binding in ADF?

    Hello,
    Today I had to change an ADF table in my JSF page because I switched to a different view object, and I thought it would be easier for a beginner like me to remove the ADF table and add it again, with the new binding automatically generated by the IDE.
    This part worked, but I have to say that in that table (a read-only table) I had a filter with a select-one-choice that was bound to a different view. Of course I lost this filter when I removed the table, but I had a backup of the previous JSF page, so I manually added the one-choice filter again with a text editor. That worked too, but when I ran the page the one-choice list was empty. Then I had to go to the binding section of the page and create a tree binding, which creates an iterator linking the binding to the data control for the data used by this one-choice filter.
    So I concluded that JDeveloper removes the binding information when you remove, in the JSF page, the only component that used it. Could you confirm this? My understanding is that when I removed my ADF table I also lost some of the bindings it was using.
    I added the tree binding in the binding section and it is working again. But I would like some best practices for dealing with bindings in ADF. I have the feeling I could get lost quickly there, wondering why something is not working. How do I keep on top of the bindings?
    Stephane

    In addition to Vinod, let me point out that JDeveloper automatically removes bindings from the PageDef whenever you remove a tag or component from the page using the design view, the structure panel or the property inspector. Only removing the component in the source view prevents this.
    This is not a bug, it's a feature. It helps to keep the PageDef small and clean. Believe me, if your PageDef gets messed up, you'll get errors you won't connect to wrong bindings.
    So if you keep this in mind, it should not be a problem.
    Timo

  • Mapping Best Practice Doubt

    Dear SDN,
    I have a best-practice doubt.
    For a scenario where a value needs to be mapped to another value, but the conversion is based on certain logic over R/3 data, what is the recommended implementation:
    1. Use Value Mapping Replication for Mass Data, or
    2. Use XSLT/ABAP mapping calling an RFC?
    Best regards,
    Gustavo P.

    Hi,
    I would suggest you use XSLT or ABAP mapping, or
    use the RFC Lookup API, available from SP 14 onwards, to call the RFC from your message mapping itself.
    Regards
    Bhavesh

  • Best Practices for Service Entry Sheet Approval

    Hi All
    Just like to get some opinions on best practices for external services management - particularly the approval process for service entry sheets.
    We have a 2-step approval process using workflow:
    1. Entry sheet created (blocked)
    2. Workflow to the requisition creator to verify/unblock the entry sheet
    3. Workflow to the cost object owner to approve the entry sheet
    For high-volume users (e.g. capital projects) this is a cumbersome process - we are looking to streamline it but still maintain control.
    What do other leaders do in this area? To me, mass release seems to lack control, but perhaps by using a good release strategy we could provide a middle ground?
    Any ideas or experiences would be greatly appreciated.
    thanks
    AC.

    Hi,
    You can have the purchasing group (OME4) as the department and link the cost center to the department (KS02). Use the user exit for service entry sheet release, and you can have two characteristics for the release: one for value (CESSR-LWERT) and another for department (CESSR-USRC1). Create one release class for service entry sheet release, then add the value characteristic (CESSR-LWERT) and the department characteristic (CESSR-USRC1). Now you can design release strategies for service entry sheets based on department and value, so that the SES will be created and then released by users with a release code, based on the department and value assigned to them.
    Regards,
    Biju K

  • Best practice for MRP

    Hi,
    Does someone know where to get best practices for running MRP, or have a good tutorial?
    I have set up my material master (e.g. the MRP 1, 2, 3, 4 views) but am not really sure how to continue.
    I'm a little bit confused about the order in which I have to execute the transactions, e.g. MD20/MDAB, MD01/MDBT, MD15, MD05.
    Could someone help me just a little bit?
    Thanks in advance.

    Hi,
    The sequence of steps you have written is the correct one.
    1) MD20 (manual planning file entry) / MDAB (background job): This is the first step, which the system checks during a total planning run. The system considers only those materials for which an entry is maintained here. There is no need to maintain a manual planning file entry each time, though: if your plant is activated for MRP (transaction OMDU), the system will take care of this and the entries will be managed automatically; but to be safe you can use MDAB for scheduled maintenance of the planning file.
    2) MD01 - total planning run.
    3) MD15 - with this transaction you can convert planned orders to PRs in mass.
    4) MD05 - a report only; it shows the result of the last MRP run.
    Regards,
    Dhaval

  • Best practices for EasyDMS Public Folder usage/management

    Hi,
    We are implementing EDMS and are looking for best practices on the use of the public folder in EDMS. We have different sites with different business models, such as engineer-to-order or a "projects"-based business, while other sites have a large flow operation of standard catalog products with ordering options. Initial thoughts are to put only documents in the public folders that are common to all users at a site, such as document templates or procedures. Others suggest putting project folders there, where anybody can browse through the different documents for a project. That raises the question of who is the owner or manager of the public folder. We don't want the masses to be able to create random folders, so that the structure of the public folder soon becomes a big unorganized mess. Any thoughts on best practices you have implemented or seen in practice are appreciated.
    Thanks,
    Joseph Whiteley

    Hi!
    My suggestion is to skip the folders altogether! It will end up a total mess after a couple of years. My recommendation is to use the classification of the document type and classify the document with the right information. You can then search for the documents, and you don't need to look through tons of folders to find the right one.
    I know that you have to put the document in a folder to be able to create it in EasyDMS, but at my customers we have a year folder and then month folders underneath, where they just dump the documents. We then work with either object links or classification to find the right documents in the business processes. Another recommendation is to implement the TREX engine to be able to find your documents. I don't know if this was the answer you wanted, but I think this is the way forward if you would like a DMS system that can be used for 10+ years. Imagine replacing Google with a file browser!
    Best regards,
    Kristoffer P
