eDirectory schema extensions best practices / Mac OS X 10.5

Hello all,
I am integrating Mac OS X clients into my eDirectory environment, and part of the process is to extend the eDirectory schema with the relevant Mac-specific attributes. Is there an easy method of extending the schema, or do I need to manually add each individual attribute that is not already available in an importable LDIF file? Also, are there any best practices to follow when performing this work?
Thanks for the help!
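For reference, eDirectory's LDAP server accepts standard LDAPv3 schema definitions via ldapmodify against cn=schema, so if the Mac attributes come as an LDIF file you can usually import it directly rather than adding attributes one by one. A minimal, hypothetical sketch of what such a file looks like (the OIDs and names below are placeholders only - use the definitions from Apple's published schema, not these):
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.2.3.4.5.6.7 NAME 'example-mac-attribute' DESC 'Placeholder Mac attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
-
add: objectClasses
objectClasses: ( 1.2.3.4.5.6.8 NAME 'example-mac-user' DESC 'Placeholder auxiliary class' AUXILIARY MAY ( example-mac-attribute ) )
You would import it with something like: ldapmodify -x -H ldaps://your-server -D cn=admin,o=org -W -f mac-schema.ldif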

Are these the extensions published by Apple? If so, I think they have
fairly good documentation on the site where you got them from. If not,
we're going to need to know where you did get them from and what
they are actually doing.
And again, we need to move this to the novell.support.native-file-access
forum, where it belongs. Schema extensions have nothing to do with
netware.communications. Thanks
Andrew C Taubman
Novell Support Forums Volunteer SysOp
http://forums.novell.com/
(Sorry, support is not provided via e-mail)
Opinions expressed above are not
necessarily those of Novell Inc.

Similar Messages

  • Best practice for moving from a G5 to a new Mac with SL

    I am receiving my new iMac today (27") and am very excited.
    However, I want to move over using best practices to assure that I remain excited and not frustrated.
    My initial thoughts are to boot it up and do the initial setup, then move my iPhoto library over and use Migration Assistant to move the rest of my data files.
    Then to install all of the extra software that I can find the packages for from the original installation disks.
    And then finally to use Migration Assistant again to move over any software that I can't find original disks for (I've moved from Mac to Mac to Mac over and over, and some of the software goes back to OS 9 and won't run anymore, I guess).
    Is this a good way?
    OR
    will I mess up doing it this way?
    OR
    am I spending far too much time worrying about moving old problems over and would be better off to just turn MA loose and let it do its thing from the beginning?
    BTW - Mail crashes a lot on my existing system - pretty much everything else seems OK - except iPhoto is slow - hoping that the new Intel dual core will help that.
    LN

    Migration Assistant is not a general file moving tool. MA will migrate your Applications and Home folders transferring only your third-party applications. MA will transfer any application support folders required by your applications, your preferences, and network setup. You do not have a choice of what will be migrated other than the above. MA cannot determine whether anything transferred is compatible with Snow Leopard. I recommend you look at the following:
    A Basic Guide for Migrating to Intel-Macs
    If you are migrating a PowerPC system (G3, G4, or G5) to an Intel-Mac be careful what you migrate. Keep in mind that some items that may get transferred will not work on Intel machines and may end up causing your computer's operating system to malfunction.
    Rosetta supports "software that runs on the PowerPC G3, G4, or G5 processor that are built for Mac OS X". This excludes the items that are not universal binaries or simply will not work in Rosetta:
    Classic Environment, and subsequently any Mac OS 9 or earlier applications
    Screensavers written for the PowerPC
    System Preference add-ons
    All Unsanity Haxies
    Browser and other plug-ins
    Contextual Menu Items
    Applications which specifically require the PowerPC G5
    Kernel extensions
    Java applications with JNI (PowerPC) libraries
    See also What Can Be Translated by Rosetta.
    In addition to the above you could also have problems with migrated cache files and/or cache files containing code that is incompatible.
    If you migrate a user folder that contains any of these items, you may find that your Intel-Mac is malfunctioning. It would be wise to take care when migrating your systems from a PowerPC platform to an Intel-Mac platform to assure that you do not migrate these incompatible items.
    If you have problems with applications not working, then completely uninstall said application and reinstall it from scratch. Take great care with Java applications and Java-based Peer-to-Peer applications. Many Java apps will not work on Intel-Macs as they are currently compiled. As of this time Limewire, Cabos, and Acquisition are available as universal binaries. Do not install browser plug-ins such as Flash or Shockwave from downloaded installers unless they are universal binaries. The version of OS X installed on your Intel-Mac comes with special compatible versions of Flash and Shockwave plug-ins for use with your browser.
    The same problem will exist for any hardware drivers such as mouse software unless the drivers have been compiled as universal binaries. For third-party mice the current choices are USB Overdrive or SteerMouse. Contact the developer or manufacturer of your third-party mouse software to find out when a universal binary version will be available.
    Also be careful with some backup utilities and third-party disk repair utilities. Disk Warrior 4.1, TechTool Pro 4.6.1, SuperDuper 2.5, and Drive Genius 2.0.2 work properly on Intel-Macs with Leopard. The same caution may apply to the many "maintenance" utilities that have not yet been converted to universal binaries. Leopard Cache Cleaner, Onyx, TinkerTool System, and Cocktail are now compatible with Leopard.
    Before migrating or installing software on your Intel-Mac check MacFixit's Rosetta Compatibility Index.
    Additional links that will be helpful to new Intel-Mac users:
    Intel In Macs
    Apple Guide to Universal Applications
    MacInTouch List of Compatible Universal Binaries
    MacInTouch List of Rosetta Compatible Applications
    MacUpdate List of Intel-Compatible Software
    Transferring data with Setup Assistant - Migration Assistant FAQ
    Because Migration Assistant isn't the ideal way to migrate from PowerPC to Intel Macs, using Target Disk Mode, copying the critical contents to CD or DVD or an external hard drive, or networking will work better when moving from PowerPC to Intel Macs. The initial section below discusses Target Disk Mode. It is then followed by a section that discusses networking with Macs that lack Firewire.
    If both computers support the use of Firewire then you can use the following instructions:
    1. Repair the hard drive and permissions using Disk Utility.
    2. Backup your data. This is vitally important in case you make a mistake or there's some other problem.
    3. Connect a Firewire cable between your old Mac and your new Intel Mac.
    4. Startup your old Mac in Target Disk Mode.
    5. Startup your new Mac for the first time, go through the setup and registration screens, but do NOT migrate data over. Get to your desktop on the new Mac without migrating any new data over.
    If you are not able to use a Firewire connection (for example, you have a Late 2008 MacBook that only supports USB):
    1. Set up a local home network: Creating a small Ethernet Network.
    2. If you have a MacBook Air or Late 2008 MacBook see the following:
    MacBook (13-inch, Aluminum, Late 2008) and MacBook Pro (15-inch, Late 2008)- Migration Tips and Tricks;
    MacBook (13-inch, Aluminum, Late 2008) and MacBook Pro (15-inch, Late 2008)- What to do if migration is unsuccessful;
    MacBook Air- Migration Tips and Tricks;
    MacBook Air- Remote Disc, Migration, or Remote Install Mac OS X and wireless 802.11n networks.
    Copy the following items from your old Mac to the new Mac:
    In your /Home/ folder: Documents, Movies, Music, Pictures, and Sites folders.
    In your /Home/Library/ folder:
    /Home/Library/Application Support/AddressBook (copy the whole folder)
    /Home/Library/Application Support/iCal (copy the whole folder)
    Also in /Home/Library/Application Support (copy whatever else you need including folders for any third-party applications)
    /Home/Library/Keychains (copy the whole folder)
    /Home/Library/Mail (copy the whole folder)
    /Home/Library/Preferences/ (copy the whole folder)
    /Home/Library/Calendars (copy the whole folder)
    /Home/Library/iTunes (copy the whole folder)
    /Home/Library/Safari (copy the whole folder)
    If you want cookies:
    /Home/Library/Cookies/Cookies.plist
    /Home/Library/Application Support/WebFoundation/HTTPCookies.plist
    For Entourage users:
    Entourage is in /Home/Documents/Microsoft User Data
    Also in /Home/Library/Preferences/Microsoft
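    A hedged shell sketch of the copy step above, assuming the old Mac is mounted in Target Disk Mode at /Volumes/OldMac and the account is named yourname (both names are assumptions - adjust to your setup). Apple's bundled rsync copies resource forks and extended attributes when given -E:
    # copy a few of the items listed above from the old Mac (sketch only)
    SRC="/Volumes/OldMac/Users/yourname"
    rsync -aE "$SRC/Documents/" ~/Documents/
    rsync -aE "$SRC/Library/Mail/" ~/Library/Mail/
    rsync -aE "$SRC/Library/Keychains/" ~/Library/Keychains/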
    Credit goes to Macjack for this information.
    If you need to transfer data for other applications please ask the vendor or ask in the Discussions where specific applications store their data.
    Once you have transferred what you need, restart the new Mac and test to make sure the contents are there for each of the applications.
    Written by Kappy with additional contributions from a brody.
    Revised 1/6/2009
    In general you are better off reinstalling any third-party software that is PPC-only. Otherwise update your software so it's compatible with Snow Leopard.
    Do not transfer any OS 9 software because it's unsupported. You can transfer documents you want to keep.
    Buy an external hard drive to use for backup.

  • 2nd Mac - best practices using iPhoto on both?

    Hi -
    I just got a new MacBook and have an iMac that is still the "hub" of my photo library. It is, in fact, about a 180 GB iPhoto library. I know that I can't sync libraries between Macs (a shame - someone should come up with a way to do that, assuming they haven't already!) so I'm just looking for any best practices.
    I got the MacBook to be able to work on some photos while on the road - I can at least work on post-processing in Photoshop, etc. I'm thinking now that my best strategy is to work with the images on my MacBook, importing them into its iPhoto library if desired, then use my photo sharing service - Phanfare - to "sync" them? It requires me to download them on the other side and pull them again into the iPhoto library on the iMac.
    I don't use the Mobile Me Gallery but I suppose that would be another way to have access to them on the alternate computer?
    Any other best practices or suggestions?
    Thx!

    So, if there are times when I'm not home to access my external drive, then going with the two libraries is the best solution, yes?
    Perhaps, but you can get very small and portable external HDs these days.
    I'm not sure though if I should really make both a 180 GB iPhoto library, do you? It is a backup, true, but seems like a chunk to move.
    But you only do it once. The first time. Thereafter you're simply updating the other with the changes.
    At least maybe I could split into pictures from 2009 - 2010 and have that library for both my iMac and the MacBook. I very rarely access before then (only if I need something specific) so then I could access that via the iMac exclusively?
    That would be viable.
    I would maintain a +full Library+ on the Desktop, the mobile versions a Smaller subset.
    I'm sort of ruling out the one library on the external solution because it eliminates the possibility of being remote -
    As I said above you can get tiny portable drives...
    unless there is some swanky Login to My Computer or something that works with a Mac that can go remotely to my computer and then to my external drive.
    *_This_* might help.
    Regards
    TD

  • Best practiceS for setting up Macs on Network

    Greetings.
    We have six Macs on our Windows Server network; three iMacs and three laptops. We have set up all the machines and they are joined to Active Directory. In the past, we have always created local users on the machines and then "browsed" to the server shares and mounted them. We've learned things have improved/changed over the years and we're just now realizing we can probably have the machines set up to work better. So, I have a couple of questions about "best practices" when setting up each of the machines.
    1. Since we're in a network environment, should we not set up "local logins/users" and instead have users log in using their AD login? It seems having a local account creates some conflicts with the server since upgrading to Lion.
    2. Should we set the computer to not ask for a “list of users” and instead ask for a username and password for logins?
    3. For the user that uses the machine most often, they can still customize their desktop when they use an AD login, correct?
    4. Should we set up Mobile User Accounts? What exactly does this do?
    Any other advice on how we should best be setting up the clients for our environment to make sure we are following best practices would be great!
    Thanks for any help!
    Jay

  • Best practice for TM on AEBS with multiple macs

    Like many others, I just plugged a WD 1TB drive (mac ready) into the AEBS and started TM.
    But in reading here and elsewhere I'm realizing that there might be a better way.
    I'd like suggestions for best practices on how to setup the external drive.
    The environment is...
    ...G4 Mac mini, 10.4 PPC - this is the system I'm moving from; it has all the iPhotos and iTunes, and it is being left untouched until I get all the TM/backup setup and tested. But it will go to 10.5 eventually.
    ...Intel iMac, 10.5 soon to be 10.6
    ...Intel Mac mini, 10.5, soon to be 10.6
    ...AEBS with (mac ready) WD-1TB usb attached drive.
    What I'd like to do...
    ...use the one WD-1TB drive for all three backups, AND keep a copy of the system and iLife DVDs to recover from.
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    I've also read about the benefits of keeping a copy of the install DVDs on the external drive... but this raises more questions.
    How do I get an image of the install DVD onto the 1TB drive?
    How do I do that? (install? ISO image? straight copy?)
    And what about the 2nd disc (for iLife?) - same partition, a different one, ISO image, straight copy?
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    I know it's a lot of questions, but here are the two objectives...
    1. Use TM in typical fashion, to recover the occasional deleted file.
    2. The ability to perform a bare-metal point-in-time recovery (not always to the very last backup, but sometimes to a day or two before).

    dmcnish wrote:
    From what I'm reading, I should have a separate partition for each mac's TM to backup to.
    Hi, and welcome to the forums.
    You can, but you really only need a separate partition for the Mac that's backing-up directly. It won't have a Sparse Bundle, but a Backups.backupdb folder, and if you ever have or want to delete all of them (new Mac, certain hardware repairs, etc.) you can just erase the partition.
    The first question is partitioning... Disk Utility sees my iMac's internal HD & DVD, but doesn't see the WD-1TB on the AEBS. (When TM is active it will appear in Disk Utility, but when TM ends, it drops off the Disk Utility list.)
    I guess I have to connect it via USB to the iMac for the partitioning, right?
    Right.
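    For what it's worth, once the drive is attached via USB the split can be done in Disk Utility or in Terminal. A sketch, assuming the WD shows up as /dev/disk2 (check with diskutil list first; the identifier, partition names, and sizes below are examples only, and repartitioning erases the drive):
    diskutil partitionDisk disk2 3 GPT JHFS+ TM_iMac 450g JHFS+ TM_mini 450g JHFS+ Archive 100g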
    I've also read about the benefits of keeping a copy of the install DVDs on the external drive... but this raises more questions.
    Can I actually boot from the external WD 1TB while it is connected to the AEBS, or do I have to temporarily plug it in via USB?
    I don't think so. I've never tried it, but even if it works, it will be very slow. So connect via F/W or USB (the PPC Mac probably can't boot from USB, but the Intels can).
    And if I have to boot the O/S from USB, once I load it and it wants to restore from the TM, do I leave it on USB or move it to the AEBS? (I've heard the way the backups are created differs local vs. network.)
    That's actually two different questions. To do a full system restore, you don't load OSX at all, but you do need the Leopard Install disc, because it has the installer. See item #14 of the Frequently Asked Questions *User Tip* at the top of this forum.
    If for some reason you do install OSX, then you can either "transfer" (as part of the installation) or "Migrate" (after restarting, via the Migration Assistant app in your Applications/Utilities folder) from your TM backups. See the *Erase, Install, & Migrate* section of the Glenn Carter - Restoring Your Entire System / Time Machine *User Tip* at the top of this forum.
    In either case, if the backups were done wirelessly, you must transfer/migrate wirelessly (although you can speed it up by connecting via Ethernet).

  • Best practice to host websites on xserve with mac os x server leopard.

    Hi Guys,
    I'm trying to optimize the Xserve to host multiple Joomla sites...
    Can someone help me out with any "hidden manuals" or your own experience of best practices out there?
    It'd be greatly appreciated...
    Cheers

    Erm, Joomla site hosting 'just works' with Leopard Server site virtualisation and the built-in MySQL.
    If you want best practice, try Mac OS X Server Essentials, Second Edition, which has chapters about setting up multiple web sites.

  • What is "best practice" to set up and configure a Mac Mini server with dual 1 TB drives, using RAID 1?

    I have been handed a new, out-of-the-box Mac Mini server. It has two 1 TB drives in it. The contractor suggested RAID 1 for the setup. I have done some research
    and found that creating the software RAID removes the recovery partition, so I have been reading up on how to create a recovery "disk" using a thumb drive. This part of the operation I am comfortable with, but there are other issues/concerns that I have.
    Basically, what is the "best practice" to setup the Mini, configure the RAID and then start the server.  I am assuming the steps would be something like this:
    1) start up the Mini and run through the normal Mavericks setup/config - keep it plain and vanilla
    2) grab a copy of the Server app and store it offline in a safe place
    3) perform the RAID configuration / reinstall of OS X Mavericks using the recovery tools
    4) copy down and start the server app
    This might be considered a very simplified version of this article (http://support.apple.com/kb/HT4886 - Mac mini server (Late 2012 and Mid 2011): How to install OS X Server on a software RAID volume), with the biggest difference being I grab a copy of the Server App off of the mini before I reinstall, since I did not purchase it from the App store, but rather it came with the mini.
    Is there a best practice /  how-to tutorial somewhere that I can follow/learn from? Am I on the right track or headed for a train wreck?
    thanks in advance

    I think this article will answer your question. Hope this helps: http://wisebyte.blogspot.com/2014/01/best-configuration-for-mac-mini-server.html
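    For reference, a hedged sketch of the RAID step itself (step 3 above): the mirror can be created from Terminal in the recovery environment with diskutil. The set name is made up and disk0/disk1 are assumptions - verify the identifiers with diskutil list first, and note that this erases both drives:
    diskutil appleRAID create mirror Server_HD JHFS+ disk0 disk1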

  • Naming a customisation by extension concurrent program - best practice

    Hi,
    I am developing a customisation by extension of the seeded concurrent program 'Open Purchase Orders Report (by Cost Center)' and need to decide on a name for my custom program in the Concurrent Programs (FNDCPMCP) form.
    I would like to know if there is documented best practice for naming my custom program, i.e. should I name my custom program with a prefix to differentiate it from the seeded version, e.g. 'ABCD Open Purchase Orders Report (by Cost Center)'.
    Thanks in advance.
    Mark

    Hi,
    have a look at Oracle Application Developer's Guide. There's a chapter (named "Defining your custom application")
    with recommendations regarding defining custom applications:
    http://docs.oracle.com/cd/B34956_01/current/html/docset.html
    In fact you should start by defining a custom application in System Administrator (according to Oracle's recommendation,
    the application short name shouldn't be more than 4 characters and should start with "XX" to avoid conflicts with Oracle's own application short names).
    After that you can register your custom reports/forms/concurrents with the newly created application.
    Regards

  • SAP HANA Security - Best Practice for Access to Schemas??

    Hi,
    Currently we don't have a defined security model in HANA Studio, nor are there defined duties for BASIS / Security / Developers.
    I want to understand what best practices are followed at other customers for defining security for Schema.
    1. Who should be creating the schema for Developers / Modelers?
    2. Should we use our own IDs to create/maintain these schemas, or a generic ID?
    Right now, when developers log in to Studio, by default they are assigned to their own schema (User ID) and they create objects under that.
    We (the Security team) face issues when developers need access to another user's schema, because they want to develop objects under a different user's schema.
    Also, who should own the "SYSTEM" user ID, and what steps need to be done whenever a new schema is created?
    Thanks for the help in advance.
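    For illustration, a hedged sketch of one common pattern (the schema, owner, and role names here are all made up): have a generic account own the shared development schema and grant a developer role on it, instead of developing in per-user schemas:
    -- run as a user with the necessary CREATE SCHEMA / GRANT privileges
    CREATE SCHEMA "DEV_COMMON" OWNED BY GENERIC_DEV;
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA "DEV_COMMON" TO DEV_ROLE;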

    Hi,
    I created a project (JDeveloper) with local XSD files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and for the payload in the variables there is an exception (problem building schema).
    How does BPEL know where to look for the XSD files, and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project, or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both
    internal and external users (with a CDP in the DMZ) so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP Publishing, should I omit the commands: certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA / certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,
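    For reference, a minimal CAPolicy.inf sketch for a root CA that suppresses both extensions. My understanding is that the AIA section supports the same Empty switch as the CDP section on Windows Server 2008 and later, but verify against the CAPolicy.inf syntax reference for your version:
    [Version]
    Signature="$Windows NT$"
    [CRLDistributionPoint]
    Empty=True
    [AuthorityInformationAccess]
    Empty=True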

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline Root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs, so I removed all of them.
    The Root certificate and CRL were already created after the ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename them without impact? If someday I have to revoke the Root CA certificate or the certificate has expired, how will I update the Root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL are published in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?

  • Authorization Scheme -- Best Practices?

    Hi All --
    We have a reporting application containing approximately 300 pages and 60 or so menu items, all using authorization schemes (the "exists SQL" method) as a means to determine whether or not a user can see the menu items and/or access the pages. We've been seeing an issue where a user logging into the application experiences poor performance upon login, and we have traced it to our access checks and the number of "exists" queries run when a user logs in, before our menu is displayed.
    What would be considered best practice in a case such as this? Does anyone have any ideas on how to increase the performance on these authorizaton checks?
    Thanks,
    Leigh Johnson
    Fastenal Company

    Leigh - No, the asktom post Joel referred to is posted above: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:62048567543425
    We just want to know if this post is from you folks or not.
    About the authorization schemes for each page, I would think that whatever scheme you code to authorize a link to a page, e.g., on a menu, would be the same scheme you'd want to attach to the page itself.
    So the authorization has to take place first at the point you render (or suppress) a link to a page, and again at the point the page is requested (the latter being necessary because a user can bypass the menu links and try to access pages directly by entering the page ID in the URL).
    So again, if you have X links on the menu page, each requiring a distinct query for authorization, you'll have to pay the price to do all that authorization once per session because of the design of the menu page. More precisely, the authorization scheme code, e.g., the EXISTS queries, has to be executed once per session per resource access attempted. For performance purposes, the results of these checks are cached for the duration of the session (because you set them up to be evaluated once per session and not on every page view).
    One thing that might help you is region caching (or page caching) for the menu. You'd use the Cache By User option, of course. Then if the same named user logged in and out numerous times during the "cache valid" period, which is adjustable, the user would see the cached menu "instantly". Authorization checks will not have been performed during these page requests however, so you'd want to be sure that it makes sense to present cached versions of these links. However, the corresponding authorization schemes that you'd attach to the pages themselves would be evaluated when the user clicked on a "cached" link, so you'll get the protection you need, ultimately.
    Scott
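    For concreteness, a hedged sketch of what an "exists SQL" authorization scheme body like those described above might look like (the table and column names are made up). With the scheme set to evaluate once per session, this query runs once per session per scheme, which is the caching behavior described in the reply:
    SELECT 1
      FROM app_user_roles
     WHERE username = :APP_USER
       AND role_code = 'REPORT_VIEWER'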

  • Best Practice - Securing Schema from User Access

    Scenario:
    User A requires access to schema called BLAH.
    User A is a developer who built an application using this schema in a separate development environment, although they have the same privileges mirrored to production (same roles etc. - required for operation of the application built).
    This means that the User has roles that grant Select, Update etc rights for the schema / table in order to use (and maintain) the applications.
    How can we restrict access to the BLAH schema in PRODUCTION, enforcing it to only be accessible via middle tier / application (proxy authentication?)?
    We've looked at using proxy authentication; however, it's not possible to grant roles and rights to the proxy account and NOT have them granted to the user (so they can dive straight in using development tooling and hit prod, etc.).
    We've tried granting it on a session basis using proxy authentication (i.e. user A connects via proxy, and we ENABLE a disabled role on the user based on this connection); however, it causes performance issues.
    Are we tackling this the wrong way? What's the best practice for securing Oracle schemas (and objects in general) for user access where the users actually get an Oracle user account (or even use SSO) for day-to-day business as usual?
    To me this feels like a common scenario, especially where SSO comes into play ...
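    For illustration, a hedged sketch of the secure application role mechanism the post seems to be describing (all names here are made up): the role can only be enabled through a package procedure, which can refuse unless the session arrived through the application's proxy account. Whether it performs acceptably is exactly the open question raised above:
    -- role can only be enabled by the secmgr package (definer's rights)
    CREATE ROLE blah_app_role IDENTIFIED USING secmgr.enable_blah_role;
    -- inside secmgr.enable_blah_role, roughly:
    --   IF SYS_CONTEXT('USERENV', 'PROXY_USER') = 'APP_PROXY' THEN
    --     DBMS_SESSION.SET_ROLE('blah_app_role');
    --   END IF;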

    What about situations where we have legacy Oracle Forms stuff? In these cases the user must be granted select etc. rights to particular objects, as Forms can't connect via a middle tier.
    The problem we have is that our existing middle tier implementation is built expecting the user credentials to be passed to it during initial authentication and does not use a proxy or super-user style account. We have, historically, been 100% reliant on Oracle rights and controls to validate and restrict access to our underlying data. From what you are saying, we should start to look at using proxy or super-user access and move this control process further up - i.e. into code or packages? If so, does this mean that there is no specific way to restrict schema access to given proxy accounts and then grant normal user accounts the right to connect through these to get access (kind of a delegated access scenario), without using disabled roles?

  • "Best Practices" for using different Authentication Schemes ?

    Hi
    We are using different authentication schemes in different environments (Dev/QA/Prod). Changing the authentication scheme between the environments is currently a manual step during installation. I am wondering if there are better "best practices" to follow, where the scheme is set programmatically as part of the build/load process for a specific environment... or any other ideas.
    We refrained from merging the authentication schemes (which is possible) for the following reasons:
    - the authentication code becomes unnecessary complex
    - some functions required in some environments are not available in all environments (LDAP integration through centrally predefined APIs), requiring dynamic execution
    Any suggestions / experience / recommendation to share are appreciated.
    Regards,
    - Thomas
    [On Apex 4.1.0]

    t-o-b wrote:
    Thanks Vikram ... I stumbled over this post; I was more interested in the "work-arounds" / "best practices" given these restrictions.
    So I take it that:
    * load & change; or
    * maintain multiple exports
    seem to be the only viable options
    ... in addition to the one referred to in my questions.
    Best,
    - Thomas
    Thomas,
    It's up to you really, and depends on many criteria +(I think it's more a matter of release process and version control)+.
    I haven't come across a similar scenario before... but I would maintain multiple exports so that the installation can be automated (no manual intervention required).
    Once the API is published +(god knows when it will be)+ you can just maintain one export with an extra script to call the API.
    I guess you can do the same thing with the load & change approach but I would recommend avoiding manual intervention.
    Cheers,
    Vikram

  • What is the best practice for connecting to different schemas?

    Hi all,
    We are porting an application from SQL Server to Oracle and would like to know what the best practices are in Oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has its own schema where it keeps all related objects (Tables, etc)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log in as the schema owner?
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    Finally, is it good practice to log in as a unique user, e.g. [userChris], and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?

    We are porting an application from SQL Server to Oracle and would like to know what the best practices are in Oracle for user connections to an Oracle instance.
    More or less the question could be put like this:
    1) The equivalent of a SQL Server Database in Oracle is a Schema. (more or less)
    2) A specific application has its own schema where it keeps all related objects (Tables, etc)
    3) In SQL Server you grant access to the Database and its objects (Tables, etc) to all users of the application.
    4) In Oracle do you grant access to the Schema and its objects (Tables, etc) to all users of the application also? Or do all users log in as the schema owner?
    There are ways to implement the same.
    Case 1.
    Create different roles, such as APP_ROLE and READONLY_ROLE. Create public synonyms for objects in the SchemaApplication user. Grant these roles to a single user, say appUser; this is different from your SchemaApplication user. Use appUser to connect to the application, and for different users like userChris and userDave provide another layer of security. Say userDave is allowed only to deal with cash-related transactions; then allow him to open only those screens which are related to cash transactions.
    Case 2.
    Create public synonyms and grant privileges on tables from SchemaApplication to different users (say userChris, userDave).
    So in Oracle if there existed [SchemaApplication].[table1], how would [userChris] and [userDave] query [SchemaApplication].[table1]?
    This is resolved by a public synonym. There are private synonyms as well; you can create those too, but in that case you have to create a private synonym for each of the users.
    Would Chris and Dave log in as [userChris] and [userDave], or would they normally log in as [userApplication]?
    I would suggest you connect either using a new user (Case 1) or as users that have their own accounts in the database (Case 2).
    finally, is it good practice to log in as a unique user eg [userChris] and then issue the
    alter session set current_schema = schemaApplication;
    command to change the way references to tables are interpreted?
    No, it is not good practice to allow users to log in to the database as the application owner. The public/private synonym can be used to resolve the schema.object value. For example, if SchemaApplication has table T, then you can create a public synonym as CREATE PUBLIC SYNONYM T FOR SchemaApplication.T; and now you can refer to this table as T from any other schema (user).
    HTH
    Virendra
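    Pulling Case 2 together as a hedged sketch (the object names are the ones used in this thread; the role name is made up, a small variation on granting to each user directly):
    CREATE PUBLIC SYNONYM table1 FOR SchemaApplication.table1;
    CREATE ROLE app_read_role;
    GRANT SELECT ON SchemaApplication.table1 TO app_read_role;
    GRANT app_read_role TO userChris;
    GRANT app_read_role TO userDave;
    -- userChris can now query: SELECT * FROM table1;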

  • New mac - what is best practice for accounts?

    I am about to get a new Mac (iMac G5), and would like it to work well (i.e. file transfer and backup to/from) with my existing PowerBook.
    Is there a best practice for account setup? Should I use the same accounts between the two, or can I set up a new account on the new Mac?
    Related to this: what will keychain sync give me? Does that only work with the same accounts on two Macs?
    thanks
    John

    With Tiger there is a Migration Assistant that will move everything over from your PowerBook to your new iMac G5. All you need is a Firewire cable; when prompted during your first start-up, select Migration Assistant and connect the two computers. You will need to boot up your PowerBook holding down the 'T' key before you connect the two together. Good luck, Jack.
