Contained DB best practices

Hello!
A standard procedure for converting an ordinary database to a contained one is described here:
http://msdn.microsoft.com/en-us/library/ff929275.aspx
sp_migrate_user_to_contained
    @username = N'Barry',
    @rename = N'keep_name',
    @disablelogin = N'do_not_disable_login';
On the other hand, this article says:
http://msdn.microsoft.com/en-us/library/ff929055.aspx
"As a best practice, do not create contained database users with passwords who have the same name as SQL Server logins."
But the command quoted at the beginning of this post does create a "contained database user with a password who has the same name as a SQL Server login."
Does this mean we must NOT use @rename = N'keep_name' (or @disablelogin = N'do_not_disable_login') in order to follow best practices for contained DBs?
Thank you in advance,
Michael

Hello Michael,
What it means is that you should have either a contained database user or a normal login, but not both. For example, you would not want a SQL login called 'TestLogin' at the instance level and a same-named user with a password at the contained database level. This could cause undesired results from a permissions point of view.
If you only wanted TestLogin to be able to access the contained database, but a TestLogin with the same password also existed at the instance level, the user would in effect NOT be contained within the contained database the way a normal contained user is.
The recommendation exists mainly for security reasons (and potentially others).
I would follow the recommendation unless you specifically need the contained user to access other server-level items, in which case I would question the actual need for a contained user and database in the first place.
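To reconcile the two articles in practice: migrate the user, then make sure the same name no longer works at both levels, either by renaming the user during migration or by disabling the instance-level login. A minimal sketch using the parameter values from the migration article (disable_login is the documented counterpart of the do_not_disable_login value quoted above); the catalog-view check at the end is my own illustrative addition, not something from either article:

EXECUTE sp_migrate_user_to_contained
    @username     = N'Barry',
    @rename       = N'keep_name',
    @disablelogin = N'disable_login';  -- disable the server login, so 'Barry'
                                       -- now exists only inside the database

-- Optional check, run inside the contained database: list contained users
-- whose names still collide with instance-level SQL logins.
SELECT dp.name
FROM sys.database_principals AS dp
JOIN sys.server_principals AS sp ON sp.name = dp.name
WHERE dp.authentication_type = 2  -- database-scoped (contained) password
  AND sp.type = 'S';              -- SQL Server login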
Sean Gallardy | Blog | Microsoft Certified Master

Similar Messages

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
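    To make the MERGE suggestion concrete, here is a minimal sketch; since no DDL was posted, the table and column names (Daily_Capture, Final_Data, key_col1, key_col2, other_col) are hypothetical stand-ins:
    -- Insert only the daily-capture rows whose two-column key is not already
    -- present in the final table; existing keys are left alone.
    MERGE dbo.Final_Data AS tgt
    USING (SELECT DISTINCT key_col1, key_col2, other_col
           FROM dbo.Daily_Capture) AS src
        ON tgt.key_col1 = src.key_col1
       AND tgt.key_col2 = src.key_col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (key_col1, key_col2, other_col)
        VALUES (src.key_col1, src.key_col2, src.other_col);
    If daily capture can hold several rows with the same key but different values, de-duplicate it first (e.g. with ROW_NUMBER()), since MERGE requires at most one source row per target row. An equivalent INSERT ... SELECT ... WHERE NOT EXISTS (...) achieves the same result without MERGE.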
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • Best practice for adding text to Flex container?

    Hi,
    I'm having some trouble laying a TextFlow out properly inside a Flex container. What's the best practice for achieving this, for example adding a lot of text to a small Panel?
    Is it possible to pass anything other than a static width and height to the DisplayObjectContainerController constructor, or is this not the place to implement this? I guess what I am looking for is the layout logic I'd normally pack into a custom Flex component and implement inside measure() and so on.
    My use case: a chat application which adds multiple TextFlow
    elements to a Flex container such as Panel. Or use TextFlow as a
    substitute for UITextField.
    Some example code would help me greatly.
    I'm using Flex 3.2.
    Regards,
    Stefan

    Thanks Brian, the example helps. However, problems quickly arise if I modify it slightly to this (please compile it to see):
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
        layout="absolute" initialize="init()">
        <mx:Script>
            <![CDATA[
                import flash.display.Sprite;
                import flashx.textLayout.compose.StandardFlowComposer;
                import flashx.textLayout.container.DisplayObjectContainerController;
                import flashx.textLayout.container.IContainerController;
                import flashx.textLayout.elements.TextFlow;
                import flashx.textLayout.conversion.TextFilter;

                private var _container:Sprite;
                private var _textFlow:TextFlow;

                private function init():void
                {
                    _container = new Sprite();
                    textArea.rawChildren.addChild(_container);
                    var markup:String = "<TextFlow xmlns='http://ns.adobe.com/textLayout/2008'><p><span>Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! </span></p></TextFlow>";
                    _textFlow = TextFilter.importToFlow(markup, TextFilter.TEXT_LAYOUT_FORMAT);
                    _textFlow.flowComposer.addController(new DisplayObjectContainerController(_container, 200, 50));
                    _textFlow.flowComposer.updateAllContainers();
                }
            ]]>
        </mx:Script>
        <mx:Canvas width="100" height="100" id="textArea" x="44" y="46" backgroundColor="#F5EAEA"/>
    </mx:Application>
    What is the best way to make my TextFlow behave like a 'normal' UIComponent in Flex? Should I use UIComponent instead of Sprite as a container? Will that take care of resize behaviour?
    I have never before needed to use rawChildren.addChild, for example; maybe you can explain why that's needed here?
    I realise that the new text framework works on an ActionScript basis and is not Flex or Flash specific, but this also poses some challenges for those of us using the Flex framework primarily.
    I think it would help to have some more basic examples, such as using the new text features in a 'traditional' context. Say, for example, a TextArea that is just that, a TextArea, but with the addition of inline images. I personally feel that the provided examples largely try to teach me to run before I can walk.
    Many thanks,
    Stefan

  • Best Practice for Updating Child UIComponents in a Container?

    What is the best practice for updating child UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movie clips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movie clips.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by the docs.
    I then set the source system parameters.
    I then generate the needed parameters and assemble the copied container for ALL subject areas present in the container.
    I then build an execution plan for JUST THE 4 SUBJECT AREAS I need and set whatever is required before running it.
    QUESTION: When I copy the container, should I delete all unneeded subject areas out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake, so that it holds just a few subject areas, rather than waiting until I build the execution plan to narrow the focus to a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas and then just don't include them in the execution plan. Otherwise you have the possibility of running into the situation where you need to include another subject area in the future and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a productive environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) on the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development but also for the productive system.
    But I found some hints in some books, and those authors prefer static deployment for the p-system.
    My questions:
    Could anybody provide me with some links to whitepapers regarding best practice for deployment into a p-system?
    What is your experience with the new two-phase deployment coming with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    Thanks in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be more specific, we can provide an appropriate response.
    From my Basis experience, some of the best practices:
    1) The productive landscape should provide high availability to the business. For this you may set up DR, HA, or both.
    2) It should have backups configured, for which the restore procedure has already been tested.
    3) It should have all monitoring set up, viz. application, OS and DB.
    4) The productive client should not be modifiable.
    5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
    6) Transports to production should be tightly controlled. Any transport to production should be moved only with the appropriate Change Board approvals.
    7) Relevant database and OS security parameters should be tested before go-live and enabled.
    8) Pre-go-live and post-go-live checks should have been performed on the production system.
    9) EWA should be configured at least for the production system.
    10) Production system availability using DR should have been tested.
    Hope this helps.
    Regards,
    Deepak Kori

  • Best practices for additonal storage devices in a clustered container

    I am wondering what is considered best practice for adding more storage devices. I have Solaris Cluster 3.2, a failover resource group with an sczbt resource, and a HASP resource for the zone root path. Now I have to add more storage devices (for Oracle) to the zone. The additional storage devices are in the same metaset as the zone root. I see the following options:
    -> Add them to the zone (simple, but not managed by HASP)
    -> Add them to the existing HASP for the RG and lofs-mount them in the zone (managed by HASP, but needs a lofs mount)
    Are there other options, and what is considered best practice?
    Fritz

    Hi Fritz,
    no matter which option you take, it should result in lofs mounts.
    Personally, I would not mix the concepts: you started with HASP, so I would continue with HASP.
    To achieve this you have to take three steps:
    1. Add the file systems to the HASP resource.
    2. Create the mount points in the local zone.
    3. Add the file systems to the Mounts variable in the sczbt's parameter file.
    If you do not want to reboot the zones, you can mount the loopback file systems manually, but the safest option (so you don't make a typo in the mount points and only realize it later) would be to disable and re-enable the sczbt resource.
    Cheers
    Detlef

  • Best practice for Catalog Views? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000), and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. In my Org Model, assign at the user level all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Saludos desde Mexico,
    Diego

    Hi,
    Your approach will work, but you'll run into maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit their number, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), but with a limited number of views.
    Good luck.
    Vadim

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database originally designed in 2000 that now has a SQL back-end. The database has not yet been converted to a newer format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
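    For a rough idea of what such a design might start from, here is a hypothetical T-SQL sketch (all object names, columns, and types are illustrative assumptions, not a finished model): one fact row per product, customer, and reporting period, with period start and end dates to accommodate customers whose calendars don't align to calendar months.
    -- Hypothetical sketch: one fact row per product/customer/period.
    CREATE TABLE dbo.SalesReportFact (
        ProductID          int            NOT NULL,  -- FK to the existing product table
        CustomerID         int            NOT NULL,  -- FK to a customer table
        PeriodStartDate    date           NOT NULL,  -- supports non-calendar periods
        PeriodEndDate      date           NOT NULL,
        OnHandUnits        int            NULL,
        OnOrderUnits       int            NULL,
        SellThroughUnits   int            NULL,
        SellThroughDollars decimal(12,2)  NULL,
        CustomerCategory   nvarchar(50)   NULL,      -- customer's own categorization
        CONSTRAINT PK_SalesReportFact
            PRIMARY KEY (ProductID, CustomerID, PeriodStartDate)
    );
    Measures a given customer doesn't report simply stay NULL, and cross-customer or cross-period questions then reduce to ordinary GROUP BY or pivot queries over one table instead of many spreadsheets.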
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row-filter scenario in our reporting system. Following several recommendations already posted in the forum, we have created a security table with the following columns:
    userName  Object Id
    U1             A
    U2             B
    while our fact table is something like this:
    Object Id    Fact A
    A                23
    B                4
    Additionally, we have created a row restriction on the universe based on the following WHERE clause:
    UserName = @Variable('BOUSER')
    If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (the SELECT statement, presumably).
    The question is: what is the recommended best practice in this situation? Create a dummy column in the security table, map it into the universe, and include that object in the query?
    Thanks
    Edited by: Alfons Gonzalez on Mar 8, 2012 5:33 PM

    Hi,
    This solution also seemed the most suitable for us. The problem we have discovered: when the restriction set is not applied for a given user (the advantage of a restriction set being that it is not always applied), the query joins the fact table with the security table without applying any WHERE clause based on @Variable('BOUSER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when, as in our case, the relationship is 1:n, the query returns additional, incorrect rows.
    For the moment we have discarded the use of restriction sets. Putting a dummy column based on the security table into the query may have undesired effects when the condition is not applied.
    I don't know if anyone has found a way to work around this.
    Alfons
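    For reference, one pattern that sidesteps the 1:n row multiplication is to generate the filter as a semi-join (EXISTS) rather than a plain join, so each fact row qualifies at most once. A hedged sketch of the SQL one would aim for, using the hypothetical tables from the question (@Variable('BOUSER') is the BusinessObjects placeholder quoted above, resolved at query time):
    -- Semi-join: each fact row matches at most once, even when the security
    -- table maps one user to many objects (1:n).
    SELECT f.[Object Id], f.[Fact A]
    FROM   fact_table AS f
    WHERE  EXISTS (SELECT 1
                   FROM   security_table AS s
                   WHERE  s.[Object Id] = f.[Object Id]
                     AND  s.userName    = @Variable('BOUSER'));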

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate this. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • Data Federator XI 3.1 - best practices

    We are planning to roll out a Data Federator setup in our company and I'm looking for some best practices.
    The major question I have is: do we install the Data Federator server components on a dedicated server, or can/should we install the components on one of the machines of our BOE R3 cluster (4 nodes -> 2 mgmt and 2 processing)?
    Is there any document that contains a summary of the best practices for setting up a Data Federator environment?
    Kind regards
    Guy

    Hello,
    the advice is to have a dedicated machine for the DF server.
    DF can become memory- and CPU-intensive for large queries, so a dedicated machine helps improve DF performance and avoids negative impacts on other services (e.g. BOE).
    A lot of calculation and temporary storage is done in memory, so the advice is to add as much RAM as needed for large queries. If the RAM is not large enough you will get disk swapping and hence lower performance.
    Hope that helps
    Regards
    PPaolo

  • Oracle EPM 11.1.2.3 Hardware Requirement and best practice

    Hello,
    Could anyone help me find the minimum hardware requirements for Oracle EPM 11.1.2.3 on Windows Server 2008 R2? What's the best practice to get optimum performance after the default configuration, i.e. which entries need to be modified based on the hardware resources (CPU and RAM) and the number of users accessing the Hyperion reports/files?
    Thanks,
    Yash

    Why would you want to know the minimum requirements? Surely it would be best to have optimal server specs. The nearest you are going to get is contained in the standard deployment guide - About Standard Deployment.
    That said, it is not possible to provide specs based on nothing; you would really need to undertake a technical design review/workshop, as there are many topics to cover before coming up with server information.
    Cheers
    John

  • Best Practice question - null or empty object?

    Given a collection of objects where each object in the collection is an aggregation, is it better to leave references in the object as null or to instantiate an empty object? Now I'll clarify this a bit more.....
    I have an object, MyCollection, that extends Collection and implements Serializable (a work requirement). MyCollection is sent as the return from an EJB search method. The search method looks up data in a database and creates MyItem objects for each row in the database. If there are 10 rows, MyCollection would contain 10 MyItem objects (references, of course).
    MyItem has three attributes:
    public class MyItem implements Serializable {
        String name;
        String description;
        MyItemDetail detail;
    }
    When creating MyItem, let's say that this item didn't have any details, so there is no reason to create a MyItemDetail. Is it better to leave detail as a null reference, or should an empty MyItemDetail object be created? I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case. There are reasons for both approaches. Obviously, a bunch of empty objects going over RMI is a strain on resources, whereas a bunch of null references is not. But on the receiving end you have to account for the MyItemDetail reference being null or not - is this a hassle or not?
    I looked for this at [url http://www.javapractices.com]Java Practices but found nothing.

    > I know this sounds like a specific app requirement, but I'm looking for a best practice - what most people do in this case.
    It depends, but in general I use null.
    > Stupid.
    Thanks for that insightful comment.
    > > I do a lot of database work though. And for that, null means something specific.
    > Sure, return null if you have a context where null means something, like for example that you got no result at all. But as I said before, it's best to keep the nulls at the perimeter of your design. Don't let nulls slip through.
    As I said, I do a lot of database work, and there null does mean something specific. Thus (in conclusion) that means that, in "general", I use null most of the time.
    Exactly what part of that didn't you follow?
    And exactly what sort of value do you use for a Date when it is undefined? What non-null value do you use such that your users do not have to write exactly the same code that they would to check for null anyway?

  • Best practice for creating RFC destination entries for 3rd parties(Biztalk)

    Hi,
    We are on SAP ECC 6 and we have been creating multiple RFC destination entries for external third-party applications such as BizTalk and others, using the TCP/IP connection type and sharing the program ID.
    The RFC connections with IDoc as the data flow use synchronous mode for the (few) time-critical interfaces and asynchronous mode for the majority of others. RFC destination entries have been created for many interfaces, each with a unique RFC destination and its corresponding port defined in SAP.
    We have both inbound and outbound connectivity. With the large number of RFC destinations being added, we wanted to review this setup. We wanted to check with others who have encountered a similar situation and are keen to learn from their experiences.
    We also wanted to know if there are any best practices for optimizing the number of RFC destinations.
    Here are a few suggestions we had in mind to tackle this:
    1. Create unique RFC destinations for every port defined in SAP for external applications as Biztalk for as many connections. ( This would mean one for inbound, one for outbound)
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    I have checked the SAP best practices website, SAP OSS notes, and help pages but could not find the specific information I was after.
    I do understand we can have an unlimited number of RFC destinations and maximum connections using appropriate profile parameters for the gateway, RFC, client connections, and additional app servers.
    I would appreciate it if you could suggest the best architecture or practice for setting up RFC destinations in an optimized manner.
    Thanks in advance
    Sam

    Not easy to give a perfect answer.
    1. Create unique RFC destinations for every port defined in SAP for external applications as Biztalk for as many connections. ( This would mean one for inbound, one for outbound)
    -> Be careful if you have multiple clients (for example in acceptance): RFCs are client-independent but ports are not! You could run into trouble.
    2. Create one single RFC destination entry for the external host/application and the external application receiving the idoc control record to interpret what action to perform at its end.
    -> Could be the best solution... it's easier to create partner profiles, and the control record will contain the correct partner.
    3. Create RFC destinations based on the modules it links with such as materials management, sales and distribution, warehouse management. This would ensure we can limit the number of RFC to be created and make it simple to understand the flow of data.
    -> See option 2; similar considerations apply.
    We send to a message broker with one RFC destination, sending multiple IDoc types, different partners, and different ports.
