Syncing App IDs across servers -- Best Practice?

This is prompted by a comment chrisstephens made in the thread "non-existent applications in non-existent workspaces reserving app IDs".
Our developers are convinced that the application IDs between our dev, staging, and production environments need to be synchronized. Our team also keeps our dev, test, and prod server app IDs synchronized -- for instance, the Widget Reporting App is always app #38 on all three servers. For us, it's not something we see as REQUIRED, but it is convenient, and a general sanity check. If the numbers didn't sync, it seems it would be all too easy to get values mixed up and accidentally field an app to the wrong place (possibly overwriting some other application).
What are the community's opinions on this? Would you consider this an APEX best practice? Just a habit for some groups? Or overly rigid thinking?
(I personally fall in the Best Practice group.)

One good reason to keep them the same is so that there are no differences between what is tested in one environment and what is deployed in another. Case in point, just last week someone demonstrated that an application's authentication scheme failed when the application ID was changed from xxx to xxxxxxxxx (a longer string of digits). Of course this was due to a previously unknown bug, but that's what testing should reveal.
Another good reason is to make it possible to export application components (pages, etc.) from one database (say, dev) and install them into an application in another database (say, prod). This is not possible if the application IDs are different.
Scott
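
One way to make the convention self-enforcing is a pre-deployment check that compares application IDs across environments. A minimal sketch in Python, using the python-oracledb driver and the standard APEX_APPLICATIONS dictionary view; the connection strings and the apex_reader account are hypothetical:

import oracledb

# DSNs for each environment -- made-up names for illustration only.
ENVIRONMENTS = {
    "dev":  "dev-db.example.com/XEPDB1",
    "test": "test-db.example.com/XEPDB1",
    "prod": "prod-db.example.com/XEPDB1",
}

def app_ids(dsn):
    # Map application_name -> application_id for one environment.
    with oracledb.connect(user="apex_reader", password="secret", dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT application_name, application_id FROM apex_applications")
            return dict(cur.fetchall())

envs = {name: app_ids(dsn) for name, dsn in ENVIRONMENTS.items()}
for env_name, ids in envs.items():
    for app_name, dev_id in envs["dev"].items():
        env_id = ids.get(app_name)
        if env_id is not None and env_id != dev_id:
            print(f"{app_name}: dev={dev_id}, {env_name}={env_id}  <-- mismatch")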

Similar Messages

  • User Account Authentication across multiple Solaris servers - Best Practice

    Hi,
    I am new to Solaris admin and would like to know the best practice/setup for authenticating user accounts across multiple Solaris servers.
    Currently we have 20-30 Solaris 8 & 10 servers, each of which has its own user accounts set up. I am planning to replace these with a similar number of Solaris 10 servers and would like to centralise the user accounts and their authentication.
    I would be grateful for any suggestions on the best setup and any links to tutorials.
    Thanks
    Jools

    I would suggest LDAP + Kerberos: LDAP for name lookups and krb5 for auth. It provides secure auth plus an extensible directory for users and other apps if needed. Plus, it provides a decent springboard to add other Unix platforms into the mix, since this will support any Unix/Linux/BSD platform. You could integrate this design with a Windows AD environment as well.
    [http://www.sun.com/bigadmin/features/articles/kerberos_s10.jsp] sol + ldap + AD
    [http://docs.lucidinteractive.ca/index.php/Solaris_LDAP_client_with_OpenLDAP_server] sol + ldap (openldap)
    [http://aput.net/~jheiss/krbldap/howto.html] sol + ldap + krb5
    Now, these links all use somewhat different means, but they should give you some ideas as to what's out there. Solaris 10 comes with Sun's LDAP server, and you can use the krb5 server that comes with it as well. There are many, many different ways to do this, and many more links out there; these are just a few.
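    To make that concrete, here is roughly the lookup a centralized setup performs once accounts live in LDAP. A sketch using Python's ldap3 library; the hostname, base DN, and uid are hypothetical, and real Solaris clients would do the equivalent through the native LDAP/pam client rather than a script:

    from ldap3 import Server, Connection, ALL

    # Query the central directory for a user's POSIX account attributes --
    # the same data every client host resolves instead of local /etc files.
    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(server, auto_bind=True)  # anonymous bind, demo only

    conn.search(
        search_base="ou=People,dc=example,dc=com",
        search_filter="(uid=jools)",
        attributes=["uidNumber", "gidNumber", "homeDirectory", "loginShell"],
    )
    for entry in conn.entries:
        print(entry.entry_dn, entry.uidNumber, entry.homeDirectory)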

  • Trade-offs for spreading organizations across suffixes - best practices?

    Hey everyone, I am trying to figure out some best practices here. I've looked through the docs but have not found anything that quite touches on this.
    In the past, here is how I created my directory (basically using dsconf create-suffix for each branch I needed):
    dsconf list-suffixes
    dc=example,dc=com
    ou=People,dc=example,dc=com
    ou=Groups,dc=example,dc=com
    o=Services,dc=example,dc=com
    ou=Groups,o=Services,dc=example,dc=com
    ou=People,o=Services,dc=example,dc=com
    o=listserv,dc=example,dc=com
    ou=lists,o=listserv,dc=example,dc=com
    A few years later, having learned more and set up replication, it seems I may have made my life a bit more complicated than it should be. It seems I would need many more replication agreements to get every branch of the tree replicated. It also seems that different parts of the directory are stored in different backend database files.
    It seems like I should have something like this:
    dsconf list-suffixes
    dc=example,dc=com
    Instead of creating all the branches as suffixes or sub-suffixes, maybe I should have just created organization and organizational-unit entries within a single suffix, "dc=example,dc=com". This way I can replicate all the data by replicating just one suffix. Is there a downside to having one backend db file containing all the data instead of spreading it across multiple files (we're talking possibly 90K entries across the entire directory)?
    Can anyone confirm the logic here or provide any insight?
    Thanks much in Advance,
    Deejam

    Well, there are a couple of dimensions to this question. The first is simply whether your DIT ought to have more or less depth. This is an old design debate that goes back to problems with changing DNs in X500 style DITs with lots of organizational information embedded in the DN. Nowadays DITs tend to be flatter even though there are more tools for renaming entries. You still can't rename entries across backends, though. The second dimension is, given a DIT, how should you distribute the containers in your DIT across the backend databases.
    As you have already determined, the principal design consideration for your backend configuration will be replication, though scalability and backup configuration might also come into it. From what you have posted, though, it does not look like you have that much data. So yes, you should configure database backends and associated suffixes with sufficient granularity to support your replication requirements. So, if a particular suffix needs to be replicated differently than another suffix, they need to be defined as distinct suffixes/backends. Usually we define the minimal number of suffixes and backends needed to satisfy the topological requirements, though I can imagine there might be cases where suffixes might be more fine grained.
    For large, extensible Directory topologies, I usually look for data that's sensibly divisible into "building blocks". So for instance you might have a top-level suffix "dc=example,dc=com" with a bunch of global ACIs, system users and groups that are going to need to be everywhere. Then you might have a large chunk of external customer data, and a small amount of internal employee data. I would consider putting the external users in a distinct suffix from the employees, because the two types of entries are likely to be quite different. If I have a need to build a public Directory somewhere, all I have to do is configure the external suffix and replicate it. The basic question I would be asking there is if I might ever need to expose a subset of the Directory, will the data already be partitioned for me or will I have to do data reorganization.
    In your case, it does not look likely you will need to chop up your data much, so it's probably simpler to stay monolithic and use only one backend.
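    As an illustration of the monolithic layout: the old sub-suffixes become ordinary organization/organizationalUnit entries inside the single dc=example,dc=com backend, and one replication agreement then covers the whole tree. A hedged sketch with Python's ldap3 library (the host and bind credentials are made up; the DNs are the ones from the original post):

    from ldap3 import Server, Connection

    conn = Connection(Server("ds.example.com"),
                      user="cn=Directory Manager", password="secret",
                      auto_bind=True)

    # Parents are created before children; dc=example,dc=com is the suffix
    # itself and is assumed to exist already.
    BRANCHES = [
        "ou=People,dc=example,dc=com",
        "ou=Groups,dc=example,dc=com",
        "o=Services,dc=example,dc=com",
        "ou=People,o=Services,dc=example,dc=com",
        "ou=Groups,o=Services,dc=example,dc=com",
        "o=listserv,dc=example,dc=com",
        "ou=lists,o=listserv,dc=example,dc=com",
    ]
    for dn in BRANCHES:
        oc = "organization" if dn.startswith("o=") else "organizationalUnit"
        conn.add(dn, oc)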

  • Is it possible to sync app folders across 2 iPhones?

    I have 2 iPhones (because I need two numbers and because one is a company phone).
    I have a lot of apps installed on both phones, and I've spent quite a bit of time organising them into thematic folders. Ideally, I'd like to have the same folder layouts on both phones, but this is a purely manual process right now.
    I'm shortly going to replace my personal 4S with a 6, so I'm trying to see how I can ensure all the apps on the 4S appear on the 6, in their folders and pages.
    Any ideas?
    Gareth

    You can try this:
    From iMessage, enable your email to receive on both devices.
    For SMS, enable message forwarding.

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database = new Database();
    var connectResponse:Object = db.connect('path-to-database');
    if (connectResponse.allOK) { // allOK is true if the connection was successful
        // Do stuff with data
    } else {
        // Present error notice
    }
    However, this seems redundant. Is it OK to do this once (connect to the database) in the main application file, and then do something like FlexGlobals.topLevelApplication.db?
    And generally speaking, can constants and other things that I need throughout the app be placed in the main app? As a best practice, that is -- technically I know it is possible.
    Thank you.

    No, I only connect once. I figured several views would want to use it, so I made it static and a singleton, as I only have one database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the EventDispatcher.
    ... although I am thinking it might be better to go async, dispatch a custom event, and have the relevant views subscribe.
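    The reply above describes the standard fix: make the database object a singleton so the connection is opened once and shared by every view instead of reconnecting in each creationComplete handler. The thread is Flex/AIR, but the pattern is language-independent; a minimal sketch in Python with sqlite3 (class and file names are illustrative, not from the thread):

    import sqlite3

    class Database:
        # Open the database once; every view shares the same connection.
        _instance = None

        @classmethod
        def instance(cls, path="app.db"):
            if cls._instance is None:
                cls._instance = cls(path)
            return cls._instance

        def __init__(self, path):
            self.conn = sqlite3.connect(path)

        def query(self, sql, params=()):
            return self.conn.execute(sql, params).fetchall()

    # Any "view" asks for the one shared instance instead of reconnecting:
    db = Database.instance()
    db.query("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")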

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning tree).
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running. Every time they connect to a new AP on a different VLAN, they will need to get a new IP address, which interrupts real-time apps.
    So... best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites. 
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1G Ethernet hand-offs (MPLS at the WAN provider). The 7510s are new, and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest going routed to the closet, and that breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other (possibly friendly to both needs) solution I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Disti switch at each site.  Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, and so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • What is the best practice to create tabs?

    I need help on this issue please. I need to create 3 different tabs, each one with different content: 1) plain text 2) small form 3) Flash app.
    What is the best practice? Thanks in advance...

    Is this Scroller Panel what you had in mind?
    http://www.fourlevel.com/product/spanel/go/ex1/index.htm
    Nancy O.
    Alt-Web Design & Publishing
    www.alt-web.com

  • PI best practice and integration design ...

    I'm currently on a site that has multiple PI instances, one for each region, and the question of inter-region integration has been raised. My initial impression is that each PI will be in charge of integrating communications for its regional landscape, and inter-region communications will be conducted through a PI-to-PI interface. I haven't come across any best practice in this regard and have never been involved with a multiple-PI landscape...
    Any thoughts? Or links to best practice for this kind of landscape?...
    To summarise:
    I think this is the best way to set it up. Although numerous other combinations are possible, this seems to be the best way to avoid any significant system coupling when talking about ECC-to-ECC inter-region communications:
    AUS ECC -> AUS PI -> USA PI -> USA ECC

    abhishek salvi wrote:
    I need to get data from my local ECC to USA ECC; do I send the data to their PI / my PI / directly to their ECC? All will work, all are valid.
    If Local ECC --> one PI --> USA ECC is valid, then you don't have to go for another PI in between... why increase the processing time... and it seems to be a good option to bet on.
    The issues are:
    1. Which PI system should any given piece of data be routed through, and how do you manage the subsequent spider web of interfaces resulting from PI AUS talking to ECC US, ECC AU, BI US, BI AU, and the reverse for the PI USA system?
    2. Increased processing time from Integration Engine to Integration Engine should be minimal, and it will mean a consistent set of interfaces for support and debugging, not to mention the simplification of the SLD contents in each PI system.
    I tend to think of it like network routing: the PI system is the default gateway for any data not bound for a local system; you send it and let PI figure out what the next step is.
    abhishek salvi wrote:
    But then what about this statement (is it a restriction or business requirement): "Presently the directive is that each PI will manage communications with its own landscape only."
    When talking about multiple landscapes (dev / test / QA / prod), each landscape generally has its own PI system; this is an extension of the same idea, except that both systems are productive. From an interface and customisation point of view, given the geographical remoteness of each system, local interface development and support for local systems makes sense. While not limited to this kind of interaction, interfaces for a given business function at a given location (location-specific logic) would typically be developed in concert with the interface, and as such have no real place on the remote system (PI).
    To answer your question: there is no rule; it just makes sense.
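    The "default gateway" analogy above reduces to a one-line routing rule: deliver locally if the destination is in your own landscape, otherwise hand off to the peer region's PI. A toy sketch in Python (the system names are illustrative):

    # Each region's PI delivers to its own landscape and forwards everything
    # else to the peer PI -- the same rule a default gateway applies to
    # off-subnet traffic.
    LOCAL_LANDSCAPE = {"AUS_ECC", "AUS_BI"}
    PEER_PI = "USA_PI"

    def next_hop(destination):
        # Return where the local (AUS) PI should send a message.
        return destination if destination in LOCAL_LANDSCAPE else PEER_PI

    assert next_hop("AUS_ECC") == "AUS_ECC"   # local delivery
    assert next_hop("USA_ECC") == "USA_PI"    # forwarded; the peer PI routes on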

  • Best Practices for Household iOS Devices/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing primarily apps, music, and video among multiple iOS devices/Apple IDs. If there is a specific article please point me to it.
    Here is my situation:
    We currently have 3 iPads (2 kids', 1 dad's) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password that I kept secret. As the kids have grown older, I found myself constantly entering my password as their interest in music/apps/video increased. I liked this approach because all content was shared... I disliked it because I was constantly asked to input my password for all downloads.
    So, I recently set up individual accounts for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads). Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps, music, and videos among myself and the kids? Is it multiple accounts on the computer and some sort of sharing?
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best practice when using Tangosol with an app server

    Hi,
    I'm wondering what is the best practice when using Tangosol with an app server (Websphere 6.1 in this case). I've been able to set it up using the resource adapter, tried using distributed transactions and it appears to work as expected - I've also been able to see cache data from another app server instance.
    However, it appears that cache data vanishes after a while. I've not yet been able to put my finger on when, but garbage collection is a possibility I've come to suspect.
    Data in the cache survives the removal of the EJB, but somewhere later down the line it appear to vanish. I'm not aware of any expiry settings for the cache that would explain this (to the best of my understanding the default is "no expiry"), so GC came to mind. Would this be the explanation?
    If that would be the explanation, what would be a better way to keep the cache from being subject to GC - to have a "startup class" in the app server that holds on to the cache object, or would there be other ways? Currently the EJB calls getCacheAdapter, so I guess Bad Things may happen when the EJB is removed...
    Best regards,
    /Per

    Hi Gene,
    I found the configuration file embedded in coherence.jar. Am I supposed to replace it and re-package coherence.jar?
    If I put it elsewhere (in the "classpath") - is there a way I can be sure that it has been found by Coherence (like a message in the standard output stream)? My experience with Websphere is that "classpath" is a rather ...vague concept, we use the J2CA adapter which most probably has a different class loader than the EAR that contains the EJB, and I would rather avoid to do a lot of trial/error corrections to a file just to find that it's not actually been used.
    Anyway, at this stage my tests are still focused on distributed transactions/2PC/commit/rollback/recovery, and we're nowhere near 10,000 objects. As a matter of fact, we haven't had more than 1024 objects in these app servers. In the typical scenario where I've seen objects "fade away", there has been only one or two objects in the test data. And they both disappear...
    Still confused,
    /Per

  • Best Practice for the Service Distribution on multiple servers

    Hi,
    Could you please suggest the best practice for the above?
    Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.).
    Capacity: we have 12 servers, excluding SQL Server.
    Please do not just refer me to a URL; suggest as per the requirements.
    Thanks 
    srabon

    How about a link to the MS guidance!
    http://go.microsoft.com/fwlink/p/?LinkId=286957

  • Lightroom CC syncing and deleting across Mac, iPhone, and iPad Lightroom apps

    I have searched for nearly an hour for some answers and find the Lightroom support sections massively confusing, even contradictory (example: the Adobe Lightroom on mobile FAQ -- in one answer it says that "Lightroom on mobile syncs original JPEG and PNG files that originate from your mobile device in the Creative Cloud", and in another question right below it says "Since Lightroom on mobile does not store your original image files, but instead syncs Smart Previews"). I need some basic questions answered about how these various platforms interoperate. [BTW, I am running the latest OS and app versions across the board.]
    1) Do images taken with my iPhone automatically get uploaded to Creative Cloud (CC) or not and are they at full resolution? [Seems like they sync once I open the app, but not sure of resolution.  Would be good to know what is supposed to trigger the syncing, e.g., opening the app, taking the picture, etc.]
    2) Once an image is uploaded to CC from the phone, and I can see it in Lightroom desktop, are changes made to the image from either device's version of Lightroom automatically synced, i.e., I will see the changes made from any device on any other device the next time I log into CC from that device? [It seems like this is the way it works, but again not sure]
    3) Assuming the whole image is uploaded intact from the phone to the CC, if I delete the image in the Apple Photos app, will that in any way affect the CC cloud image?  I am not deleting it from within any Lightroom app at this point -- just the native IOS app. [Does not appear to be in any way connected -- which is both good and bad -- must delete in multiple places now]
    4) If I delete the photo in the CC from the desktop, does it delete the native photo on the phone as well, or do I need to delete it in both places to get rid of it entirely? [It would seem that when I delete the photo from the Lightroom iPhone app, after it has been synced, it does not delete it from the Lightroom desktop version -- I still see it there]
    5) I assume videos taken with the phone are not uploaded -- is that right? [They are not loaded to the CC as best as I can tell]
    6) Do the iPhone photos that get uploaded to CC end up only as preview-like images on the desktop version of Lightroom, or is the entire original resolution image downloaded to the desktop?  If it is not downloaded, how do I export a full resolution, modified image from Lightroom desktop (or from the iPhone for that matter)?
    7) I am not seeing any of the 25,000+ photos stored on my desktop version of Lightroom getting synced to the cloud -- isn't it supposed to sync a preview image, and if so, how do I enable that? What happens to changes made to that preview image from the phone -- do they get made back on the desktop version of the image through Lightroom?
    Thanks for the help.  I wish Adobe made this clearer up front so as to avoid any accidental deletions and to ensure the products are interoperating the way they should.

    Let me try to answer your questions.
    1) Collections created within the Lr Mobile app will be automatically synced to the back-end. You can add photos manually from your camera roll or enable a collection for "Auto Add"; with that, all future camera shots will be automatically imported to Lr Mobile and synced.
    2) Correct. Changes will be synced between Lr Mobile and Lr Desktop, back and forth.
    3) Local camera-roll deletion is not connected to the cloud back-end. But be careful that the import/sync is finished before you trigger the deletion.
    4) As answered before, the cloud back-end is not connected to the device camera roll.
    5) Video is not supported yet.
    6) Original-resolution images will be downloaded to Lr Desktop. Preview images will be shown in the Lr Web view (Adobe Lightroom).
    7) When you create a collection within Lr Desktop and add the photos you would like to sync, you can enable that collection for sync. As soon as you are signed in and start the sync (via the top-left activity panel), you will notice a little checkbox beside the collection. When you check it, the sync starts to run. Lr Desktop generates smart previews and uploads them to the back-end (plus thumbnail previews and metadata). When you open the Lr Mobile app and make an edit, Lr Mobile loads the smart preview from the back-end and syncs back the metadata and edit settings.
    Hope that helps you to start.
    Guido

  • iBook to desktop syncing best practices

    Trying to keep my iBook in sync with my G5 desktop: client projects in addition to the Entourage data files, etc. I've come across numerous scenarios and recommendations. Any best-practice suggestions (software, syncing scenarios, automation, etc.) would be greatly appreciated.

    Hello Hugh
    The settings that you are looking for are in iTunes. You can choose to sync only unlistened podcasts.
    If you go to the podcast section in iTunes, there is a field that says Keep, and you can choose from the following options:
    All episodes
    All unplayed episodes
    Most Recent episode
    Then when you connect your iPod you will see an option to sync only the unlistened podcasts, and you should be all set.

  • Jdev101304 SU5 - ADF Faces - Web app deployment best practice|configuration

    Hi Everybody:
    1.- We have several web applications that provide a service/product used for public administration purposes.
    2.- The apps are built with ADF Faces and ADF BC.
    3.- All of the apps participate in JavaSSO.
    4.- The web apps are deployed on on-demand servers.
    5.- We have noticed, with the increase of users recently, that the sessions created by the middle tier in the database stay inactive but are never destroyed or removed.
    6.- Even when we only sign in to the apps using JavaSSO and perform no transactions (like inserting or deleting something), we query v$session in the database, and the number of inactive sessions is always increasing, until the server collapses.
    So, we want to know if this is an issue with the configuration of the Application Module's properties, and whether there are some "best practices" you could provide us for configuring a web application to avoid this behavior.
    The only configuration we found recommended for web apps is to set jbo.locking.mode to optimistic, but this doesn't correct the "increasing inactive sessions" problem.
    Please help us find some documentation or another resource to correctly configure our apps.
    Thanks in advance.
    Edited by: alopez on Jan 8, 2009 12:27 PM

    hi alopez
    Maybe this can help, "Understanding Application Module Pooling Concepts and Configuration Parameters"
    see http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html
    success
    Jan Vervecken

  • Best practice for highly available management / publishing servers

    I am testing a highly available appv 5.0 environment, which will deploy appv packages to a Xenapp farm.  I have two SQL 2012 servers configured as an availability group for the backend, and two publishing / management servers for the front end. 
    What is the best practice to configure the publishing / management servers for high availability?  Should I configure them as an NLB cluster, which I have tested and does seem to work, or should I just use the GPO to configure the clients to use both
    publishing servers, which I have also tested and appears to work?
    Thanks,
    Patrick Sullivan

    In App-V 5.0 the Management and Publishing Servers are hosted in IIS, so use the same approach for HA as you would any web application.
    If NLB is all that's available to you, then use that; otherwise I would recommend a proper load balancing solution such as Citrix NetScaler or KEMP LoadManager.
    Twitter: @stealthpuppy | Blog: stealthpuppy.com | The Definitive Guide to Delivering Microsoft Office with App-V
