SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads on pretty much a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which
currently takes up to 30 hours to complete.
Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample size would cut the total processing time, but
it's also my understanding that doing so leaves the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
Bill Thacker

I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with a whole lot more statistics
objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them - and also produce an actual script to delete the useless ones identified) and what
the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script that executes the appropriate update statistics command for each table based on its cardinality).
That approach would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%), which is essentially what we're doing today at 100% (full scan). Something along the lines of the sketch below is what I have in mind.
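To make the idea concrete, here is a minimal sketch (not a finished tool) of generating per-table UPDATE STATISTICS commands from the metadata. The row-count bands and sample percentages are purely illustrative and would need tuning for our data, and identifying genuinely unused statistics objects would need separate evidence (e.g. plan cache analysis), which this does not attempt.

SELECT
    'UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
        + ' ' + QUOTENAME(st.name)
        + CASE
            WHEN p.row_count <  1000000 THEN ' WITH FULLSCAN;'
            WHEN p.row_count < 50000000 THEN ' WITH SAMPLE 25 PERCENT;'
            ELSE                             ' WITH SAMPLE 5 PERCENT;'
          END                               AS update_command,
    p.row_count,
    STATS_DATE(st.object_id, st.stats_id)   AS last_updated
FROM sys.stats AS st
JOIN sys.objects AS o ON o.object_id = st.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
JOIN (SELECT object_id, SUM(row_count) AS row_count
      FROM sys.dm_db_partition_stats
      WHERE index_id IN (0, 1)              -- heap or clustered index only
      GROUP BY object_id) AS p ON p.object_id = st.object_id
WHERE o.type = 'U'                          -- user tables only
ORDER BY p.row_count DESC;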
Come on SQL Server Community. Show me some love :)
Bill Thacker

Similar Messages

  • SQL Server 2008 / 2012 - Best practices document

    Hello Everyone
    Can anybody share SQL Server 2008 / 2012 best practices?
    Regards
    Prashanth
    SharePoint Administrator

    Take a look here:
    http://channel9.msdn.com/Series/Tuning-SQL-Server-2012-for-SharePoint-2013/Tuning-SQL-Server-2012-for-SharePoint-2013-01-Key-SQL-Server-and-SharePoint-Server-Integration-Conce (4 part video series)
    https://technet.microsoft.com/en-us/library/hh292622.aspx
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Best practice to define length for varchar field of table in sql server

    What is the best practice to define the length for a varchar field in a table,
    where the field is, say, Remarks By Person: varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply ...
    Dilip Patil..

    Hi Dilip,
    Varchar(n/max) is variable-length, non-Unicode character data. n defines the string length and can be a value from 1 through 8,000. Max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered
    + 2 bytes. We use varchar when the sizes of the column data entries vary considerably, and if the field data size might exceed 8,000 bytes we should use varchar(max).
    So the conclusion is, just like Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
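    As a small illustration (hypothetical table and column names), the choice simply comes down to whether a single value can exceed 8,000 bytes:

    -- varchar(4000) if a single remark always stays within 8,000 bytes;
    -- switch the column to varchar(max) only if one value can exceed that.
    CREATE TABLE dbo.PersonRemarks (
        PersonID        int           NOT NULL,
        RemarksByPerson varchar(4000) NULL
    );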
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support

  • License type of SQL Server 2005 Best Practices Analyzer

    Hi everybody.
    I need to install the software "SQL Server 2005 Best Practices Analyzer" in my organization, but I need to know whether this application's licensing is free. I have seen on several web sites that this tool is free, but not on an official Microsoft
    web page. So, where can I find the official Microsoft information about the type of licensing of "SQL Server 2005 Best Practices Analyzer"?
    Thanks for your support

    Hello Erland.
    I followed your advice and I have read the terms of use of this software. I stopped at point 3 (which I highlighted). Based on this point, I doubt whether we can use this application. Furthermore, nowhere does it say that this software is free to use.
    I would appreciate it if someone could clarify this for me.
     =============================================================
    MICROSOFT SOFTWARE LICENSE TERMS
    MICROSOFT SQL SERVER 2005 BEST PRACTICES ANALYZER:
    These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its affiliates) and you. 
    Please read them.  They apply to the software named above, which includes the media on which you received it, if any. 
    The terms also apply to any Microsoft
    *  updates,
    *  supplements,
    *  Internet-based services, and
    *  support services
    for this software, unless other terms accompany those items. 
    If so, those terms apply.
    BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS. 
    IF YOU DO NOT ACCEPT THEM, DO NOT USE THE SOFTWARE.
    If you comply with these license terms, you have the rights below.
    1. INSTALLATION AND USE RIGHTS. You may install and use any number of copies of the software on your devices.
    2. INTERNET-BASED SERVICES. Microsoft provides Internet-based services with the software. It may change or cancel them at any time.
    3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the software only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the software that only allow you to use it in certain ways. You may not
    *  work around any technical limitations in the software;
    *  reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
    *  make more copies of the software than specified in this agreement or allowed by applicable law, despite this limitation;
    *  publish the software for others to copy;
    *  rent, lease or lend the software;
    *  transfer the software or this agreement to any third party; or
    *  use the software for commercial software hosting services.
    4. BACKUP COPY. You may make one backup copy of the software. You may use it only to reinstall the software.
    5. DOCUMENTATION. Any person that has valid access to your computer or internal network may copy and use the documentation for your internal, reference purposes.
    6. EXPORT RESTRICTIONS. The software is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to the software. These laws include restrictions on destinations, end users and end use. For additional information, see www.microsoft.com/exporting.
    7. SUPPORT SERVICES. Because this software is "as is," we may not provide support services for it.
    8. ENTIRE AGREEMENT. This agreement, and the terms for supplements, updates, Internet-based services and support services that you use, are the entire agreement for the software and support services.
    9. APPLICABLE LAW.
    a. United States. If you acquired the software in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
    b. Outside the United States. If you acquired the software in any other country, the laws of that country apply.
    10. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the software. This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
    11. DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED "AS-IS." YOU BEAR THE RISK OF USING IT. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES OR CONDITIONS. YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT EXCLUDES THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
    12. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
    This limitation applies to
    *  anything related to the software, services, content (including code) on third party Internet sites, or third party programs; and
    *  claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law.
    It also applies even if Microsoft knew or should have known about the possibility of the damages. 
    The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
    Please note: As this software is distributed in Quebec, Canada, some of the clauses in this agreement are provided below in French.

  • AUTOMATIC UPDATE STATISTICS for VB* tables ON automatically

    Hello All,
    We reviewed our ECC system with SAP and they recommended that we turn OFF AUTOMATIC UPDATE STATISTICS for VBDATA, VBHDR and VBMOD.
    We executed EXEC sp_autostats <tablename>, 'OFF', but the status goes back to ON after a while.
    We checked SAP Note 771352 but did not get a proper idea from it.
    The MS SQL database used is SQL Server 2008.
    Can someone suggest and share his/her experience.
    Regards,
    Mohit

    Hi Mohit,
    Did you run the sap_z* script?
    What is your SP level? Did you check Note 1702325 - Alerts appear for VB tables?
    Also, I suggested using NORECOMPUTE - have you tried that?
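    If not, a minimal sketch of that approach (the dbo schema below is purely for illustration - use whatever schema owns the VB* tables in your system, and check the relevant SAP notes first). NORECOMPUTE refreshes the statistics and at the same time marks them so the auto-update mechanism leaves them alone, which tends to stick better than sp_autostats on its own:

    UPDATE STATISTICS dbo.VBHDR  WITH FULLSCAN, NORECOMPUTE;
    UPDATE STATISTICS dbo.VBMOD  WITH FULLSCAN, NORECOMPUTE;
    UPDATE STATISTICS dbo.VBDATA WITH FULLSCAN, NORECOMPUTE;

    -- verify that the flag stuck
    SELECT OBJECT_NAME(object_id) AS table_name, name AS stats_name, no_recompute
    FROM sys.stats
    WHERE object_id IN (OBJECT_ID('dbo.VBHDR'), OBJECT_ID('dbo.VBMOD'), OBJECT_ID('dbo.VBDATA'));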
    Regards

  • SAP ASE Best Practice latest update

    Hello experts,
    just wondering if somebody has already thoroughly reviewed the latest guide for best practices on SAP Sybase ASE?
    I am talking about the document from note 1680803 - SYB: Migration to SAP Adaptive Server Enterprise - Best Practice (former note 1722359 - SYB: Running SAP applications on SAP ASE - Best Practice).
    The guide for normal runtime operation was merged with the guide for migration, but there are some contradictory statements.
    Apart from the fact that the case study is again designed for a server with huge memory and a lot of CPU cores (not a very realistic case normally; I wonder who sets up such huge servers so often...), I have found some inconsistencies.
    E.g. in the part "Reconfigure Engines and Parallel Processing", they talk about limiting ASE engines to 16, but the command configures 32.
    alter thread pool syb_default_pool with thread count = 32, idle timeout = 2000
    No change from the previous setup for migration. Is this just a typo? I understand it should be 16, and then the number of network tasks for normal operation would also be 4 (as mentioned in the beginning of the guide, you normally set up 1 per 3-4 engines). If this is not a typo, then the number of network tasks is wrong, as it should be 8.
    Also they introduced idle timeout, but only talk about ERP and a possibly lower value for Solman - does this mean that for BW you keep the default value (which, if I am not mistaken, is 100)? As per ADM540 you should even decrease this timeout when the SAP system is sharing the server with the database - I know that document is old, but it is again contradictory; not saying that it is wrong, but it is not well explained.
    If anybody has checked the new version of the guide, please let me know. I think it is a bit messed up and it is a bit difficult to distinguish what you should set up for the migration case and what for the normal operation case.
    Thanks!
    Regards,
    Matus

    Actually, quite a few customers run with that many engines/memory. In fact, it is difficult these days to even buy a server with less than 128GB of memory and 16 cores/32 threads. Pretty much the only time we see less is when the install is in a VM. Interestingly, we had comments on the first version suggesting the numbers were not realistic given that the typical size of systems being deployed was much larger.... In addition, in my experience with customers on SAP systems, they were not aware of how much memory was necessary to really support medium to large systems, based on the configurations they were attempting.
    I am sorry that you feel some of the examples are contradictory. You are correct in pointing out that the text refers to 16 engines while the example configures 32.... So yes, for that specific example, it should have been 16.
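    In other words, for that specific case the command would presumably read (corrected from the guide's example quoted above):

    alter thread pool syb_default_pool with thread count = 16, idle timeout = 2000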
    Secondly, I have not seen ADM540, but I think there is a bit of a problem if they suggest that. In my opinion (and I have spent a lifetime tuning ASE), the idle timeout for ERP and BW should likely both be 1000+ and 2000 is not unreasonable. The comment in ADM540 likely applies to the case where ASE and a NW CI are sharing the same cores - e.g. you have a 4 core box and ASE is running on 2 cores (we will ignore threads for this discussion) and you have 30 NW worker processes, which obviously will need to bump ASE off the cpu in order to run. This may be fine in a test/dev or even a Solution Manager system, but bumping ASE off the core is NOT a good thing for a production system. In fact, I would encourage using numactl or similar to fence off the cores used for ASE from NW worker processes if at all possible. We have seen cases of overloaded NW installations with multiple CI instances, each with hundreds of worker processes, starving cpu away from ASE... so I would tend to be a bit more than firm in suggesting that 100 is a very bad starting point.
    Given the number of client side joins that SAP uses to avoid [DBMS proprietary] temp tables, it is critical that ASE's (or any DBMS's) response time be minimized as much as possible. Having ASE yield the core practically as soon as it gets done processing one task (and puts it to sleep pending an IO) just really causes things to run slow. Think of a typical query that returns 10 rows - say wide enough that each row fills 1 packet. If the packet transmit time (and client ACK) takes more than 100 microseconds on CPU (almost a given for network interactions, as clock ticks are in nanoseconds and networking is minimally milliseconds), ASE would yield the CPU every time it sent a packet. When the client wanted the next packet, the OS would have to wake up the ASE process (an interrupted sleep), which is a nasty heavyweight operation. Hence it is best for ASE to hang out on the CPU until it is reasonably sure that nothing more is going to happen very soon, and on current cpus having it run for 1-2ms (1000-2000 microseconds) shouldn't be a hardship.
    If you created a separate thread pool for batch worker processes, then I could see maybe using a lower idle timeout such as 200 or 250. 100 is just plain too low in my mind; it is like saying ASE is expecting an odd query every few seconds vs. a steady workload. Basically at that level, there had better be a task in the ASE job queue or one on the way on the network already, or that engine is going to sleep.
    While I state that with regard to ADM540 itself, I have not seen the class... one customer did show me the notebook from a class (ASE Sys Admin) they went to, and it was really targeted at non-SAP installations more than SAP installations, from a reality/experience aspect. Part of the issue with the class the customer showed me was that it borrowed liberally from the old SY classes as a starting point, but at the point the class was developed there was not a lot of experience with running SAP installations on ASE to really point out the fine tweaking areas such as idle timeout.
    However, the document was really aimed primarily at Business Suite rather than BW systems or a Solution Manager install (which are much smaller) - there are a lot of other considerations for BW that the guide doesn't get into - although some of the sizing is a better start than the defaults provided by SAPINST.
    The former runtime guide essentially was just merged into the Post-Migration Steps section.
    May do a quick refresh in the near-future (due to some recent experiences), so if you have other specific examples of the text and SQL not aligning - please let me know.

  • Best practice on Oracle VM for Sparc System

    Dear All,
    I want to test Oracle VM for SPARC, but I don't have a new model server to test it on. What is the best practice for Oracle VM for SPARC?
    I have a Dell laptop which has spec as below:
    - Intel® Core™ i7-2640M (2.8 GHz, 4 MB cache)
    - RAM: 8 GB DDR3
    - HDD: 750 GB
    - 1 GB AMD Radeon
    I want to install Oracle VM VirtualBox on my laptop and then install Oracle VM for SPARC inside VirtualBox; is that possible?
    Please kindly give advice,
    Thanks and regards,
    Heng

    Heng Horn wrote:
    How about a desktop or workstation computer whose latest-version CPU supports Oracle VM for SPARC?
    Nope. The only place you find SPARC T4 processors is in Sun servers (and some Fujitsu servers, I think).

  • Best practices to reduce downtime for Database releases(rolling changes)

    Hi,
    What are best practices to reduce downtime for database releases on 10.2.0.3? What DB changes can be rolling and what can't?
    Thanks in advance.
    Regards,
    RJiv.

    I would be very dubious about any sort of universal "best practices" here. Realistically, your practices need to be tailored to the application and the environment.
    You can invest a lot of time, energy, and resources into minimizing downtime if that is the only goal. But you'll generally pay for that goal in terms of developer and admin time and effort, environmental complexity, etc. And you generally need to architect your application with rolling upgrades in mind, which necessitates potentially large amounts of redesign to existing applications. It may be perfectly acceptable to go full-bore into minimizing downtime if you are running Amazon.com and any downtime is unacceptable. Most organizations, however, need to balance downtime against other needs.
    For example, you could radically minimize downtime by having a second active database, configuring Streams to replicate changes between the two master databases, and configuring the middle tier environment so that you can point different middle tier servers against one or the other database. When you want to upgrade, you point all the middle tier servers against database A, other than one that lives on a special URL. You upgrade database B (making sure to deal with the Streams replication environment properly, depending on requirements) and do the smoke test against the special URL. When you determine that everything works, you configure all the app servers to point at B (with the Streams replication process configured to replicate changes from the old data model to the new data model), upgrade A, repeat the smoke test, and then return the middle tier environment to the normal state of balancing between databases.
    This lets you upgrade with 0 downtime. But you've got to license another primary database. And configure Streams. And write the replication code to propagate the changes on B during the time you're smoke testing A. And you need the middle tier infrastructure in place. And you're obviously going to be involving more admins than you would for a simpler deploy where you take things down, reboot, and bring things up. The test plan becomes more complicated as well since you need to practice this sort of thing in lower environments.
    Justin

  • How to check/verify running sql in lib cache is using updated statistics of table

    How can I check/verify that a running SQL statement in the library cache is using the updated statistics of the table in its FROM clause?
    One of my application tables is highly busy, i.e. frequent update/insert/delete.
    We gather table stats every 30 minutes.

    Hello, "try dynamic sampling" = think "outside the box", maybe hit two birds with same stone.
    As a matter of fact, I was just backing up your statement: "30 minutes seems pretty extreme"
    cheers
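    For the literal question in the title, one rough cross-check (the owner and table name below are placeholders): compare when the table's statistics were last gathered with when each cached cursor touching the table was last loaded. A cursor loaded after LAST_ANALYZED was parsed against the newer statistics; older cursors keep their existing plan until they are invalidated or aged out.

    SELECT s.sql_id, s.child_number, s.last_load_time, t.last_analyzed
    FROM   v$sql_plan p
    JOIN   v$sql s
           ON s.sql_id = p.sql_id AND s.child_number = p.child_number
    JOIN   dba_tab_statistics t
           ON t.owner = p.object_owner AND t.table_name = p.object_name
    WHERE  p.object_owner = 'APP_OWNER'      -- placeholder schema
    AND    p.object_name  = 'BUSY_TABLE'     -- placeholder table
    AND    t.object_type  = 'TABLE';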

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    Ok,
    My workaround was to put an action in my TaskFlow to commit. After that I put two more actions (execute) and then went back to my page. This works, but I would like to know if there is a more efficient way to do this just when I am inserting.
    Regards

  • Best Practice setting up NICs for Hyper V 2008 r2

    I am looking for suggestions on best practice for setting up a Hyper-V 2008 R2 host at a remote location with 5 NICs: one for the management VLAN and the other 4 on the data VLAN. This server will host 2 virtual machines; one is a DC and the other
    is a member server acting as the local DHCP server. The server is set up now with one NIC on the management VLAN and the other NICs set to get their IP from the local DHCP server on the host. We have the virtual networks set up in Hyper-V to
    point to each of the NICs using the "external connection". The virtual servers (DHCP and AD) have their own IPs set within them. The issue we are seeing: when the site loses external connectivity for a while, clients cannot get IP
    addresses from the local DHCP server anymore.
    1. NIC on management Vlan -- IP Static -- Physical host
    2. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V  -- virtual server DHCP
    3. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- Virtual server domain controller
    4. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    5. NIC on the Data network Vlan -- DHCP linked as a connection "external" in Hyper V -- extra
    Thanks in advance

    Looks like you may be overcomplicating things here. More and more of the recommendations from Microsoft at this point would be to create a Logical Switch and then layer on Logical Networks for your management layers, but here is what I would do for
    your simple remote office.
    Management NIC: Looks good. (Teaming would be better, but only if you had 2 different switches to protect against link failures at the switch level. Doesn't seem relevant in this case, however.)
    NIC for Data Network VLAN: I would use one NIC in your case if you have the ability to trunk multiple VLANs at the switch level to the NIC. That way you are setting the VLAN on the VM's NIC that you want to access, and your
    virtual switch configuration is very simple. On this virtual switch, however, I would uncheck IPv4 and IPv6. There is no need to give this NIC an address, as you are just passing traffic through from the VMs that are marked with VLAN tags. Again,
    if you have multiple physical switches in the building, teaming could be an option, but it probably adds more complexity than is necessary for a small office.
    Even if you keep your virtual switches linked to separate NICs, unchecking IPv4 and IPv6 makes sense.
    Disable all the other NICs.
    Beyond that, check your routing. Can you ping between all hosts when there is no interruption? Which DHCP server are they normally getting their addresses from? Where are your name resolution servers (DNS, WINS)?
    No silver bullet here, but maybe a step in the right direction.
    Rob McShinsky (VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • Best Practices to update Cascading Picklist mapping for Account record type

    1. Most of the existing picklist value names in the parent and related picklists have been modified in the external app master list, so the same needs to be updated in CRMOD.
    2. If we need to update a picklist value, do we need to DISABLE the existing value and CREATE a new one?
    3. Is there any best practice to avoid doing the cascading picklist mapping manually for the Account record type? We have around 500 picklist values to be mapped between the parent and related picklists.
    Thanks!

    Mahesh, I would recommend disabling the existing values and creating new ones. This means manually remapping the cascading picklists.

  • Update Statistics for the database MS-SQL

    hi all ,
    I want to run statistics programmatically (update statistics), and our database is MS SQL. Can anyone tell me which function module is suitable for doing this?
    Thanks,
    Ram

    In case you did not find this - I found the function module 'update_stats'.
    Hope this helps.
    Or maybe you found something else?

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices for getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works on simple Hello World things but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: lots of marketing, but poor features. It's good mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events when you don't need and don't expect them, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as the payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create & place events either by a call from the event source app itself or on a scheduled basis; it's entirely up to you. Also, the technique of creating object views + using an INSTEAD OF trigger on the object view works pretty well.
    And finally, implement a listener on the Coherence side which will read the event, make the necessary extracts & assemble a Java object ready to be placed into the cache, based on the event ID and the set of the event's primary keys. After the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object frameworks; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features; they're much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about the database's features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager + slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to a cache cluster member departure, a reset or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread & synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of a cache member join/departure.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/work-recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
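    As a rough illustration of the AQ approach described above (all object, queue, table and column names here are made up; the filtering logic and the dequeue done by the Coherence-side listener are omitted):

    -- tiny SQL object type used as the event payload
    CREATE TYPE cache_evt_t AS OBJECT (
      entity_name VARCHAR2(30),
      entity_id   NUMBER
    );
    /
    -- queue table + queue for the cache events
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table        => 'cache_evt_qt',
                                    queue_payload_type => 'CACHE_EVT_T');
      DBMS_AQADM.CREATE_QUEUE(queue_name  => 'cache_evt_q',
                              queue_table => 'cache_evt_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'cache_evt_q');
    END;
    /
    -- trigger that enqueues an event carrying the primary key of the changed row
    CREATE OR REPLACE TRIGGER orders_cache_evt
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    DECLARE
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
    BEGIN
      DBMS_AQ.ENQUEUE(queue_name         => 'cache_evt_q',
                      enqueue_options    => l_opts,
                      message_properties => l_props,
                      payload            => cache_evt_t('ORDERS', NVL(:NEW.order_id, :OLD.order_id)),
                      msgid              => l_msgid);
    END;
    /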

  • Best practices with LDIF Development for RBAC?

    I'm currently working on enforcing RBAC (Role Based Access controls) in OID that may be subject to change every few months. What I've currently been doing is writing LDIF files to make changes to the existing RBAC once the changes have been finalized.
    Unfortunately, now we have ended up with a growing list of LDIF files that must be run in sequential order if we were to build a new environment. Any defects or development errors that slip through developer unit testing must be handled in the same manner.
    What is the best practice process for performing this type of development? Would it make more sense to have one LDIF file that removes all of the RBAC enforcement (via ldapmodify -c), and then a separate file that installs the latest and most up-to-date version? I've also considered just using one LDIF file, appending any updates to the end of it, and using the ldapmodify command with the -c parameter.

    With regard to the 29.97/30 thing, you'll find that video people are idiosyncratically imprecise about that. We say 60 when we mean 59.94, we say 30 when we mean 29.97 and we say 24 when we mean 23.976.
    We're quirky.
    Whenever somebody says one of those nice, round numbers, you can assume they're really talking about the corresponding ugly fraction.
    Unless they're film people, in which case 24 means 24, dangit.

Maybe you are looking for

  • Recognising Sony PRS-T1 in Adobe Digital Editions

    Why doesn't Adobe Digital Editions recognise my Sony PRS-T1 ereader?

  • Imac g3 Early 2001 - Indigo cd drive replace

    Hi, I have an iMac G3 (Early 2001). Its CD drive is broken. Is there a CD drive that will fit it and work without any drivers? You can read about it here: http://everymac.com/systems/apple/imac/specs/imac_400_indigo.html Thanks, DM1000

  • Home sharing across state

    Okay - so I have traveled away from my home for business - but now I want some of my music from my computer at home. I instructed my family to turn the computer on and open iTunes - but I still can't get my shared library to show up on this computer.

  • Skype for Windows 8.1 doesn't allow me to logout +...

    I have two problems here which started after I installed Windows 8.1: 1) I seem to appear online when I close the Skype window (by dragging it down). 2) I can't log out from the Skype for Windows 8.1 app. I already unlinked my Microsoft account

  • How to sum the amount

    How can I add the amounts shown below for year 1999? I would like to add all the amounts between 01/22/1999 - 12/17/1999. Thank you. Here is what my simple SQL looks like: select sum(l.contamt),b.datelet from bidlet b, letprop l where b.LETTING = l.LETTING an