Best practice to lock a track?

I have an ESS track that I want to lock against any code modifications.
How do I do this?
Lock the track?
Close the buildspace?
Is there a best practice?

Hi Henrik,
The best way is to lock the track and give display-only access to all users.
Please look at the documents below.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e0f341af-e86e-2910-3e8a-d9e3c227d938
http://help.sap.com/saphelp_nw70/helpdata/en/c0/5a1b42d10b5633e10000000a155106/frameset.htm
Regards
Praveen

Similar Messages

  • CMS Tracks - best practice?

    Hi,
    we are developing our product with CVS right now and want to move over to DTR. The basic concepts are clear and I have already done a test migration, which was successful.
    But I am unclear on the change management piece:
    Let's say we develop a version 1.0
    Now this version has service packs 1.0 SP1, SP2, SP3 and so on. These service packs also have to be maintained, they might contain bugs, so you could have something like
    1.0 SP1 Patch 1, 1.0 SP1 Patch 2 and so on.
    How do I handle this with CMS tracks? What's the best practice? Do I set up a track for every major version and for every support package in that version, i.e., I would have tracks 10SP0, 10SP1, 10SP2, 10SP3 and so on? Will this work?
    Right now we have a lot of CVS tags and branches to make this work... but how do you do that in DTR? I need to be able to jump back to a specific version and SP and fix bugs in there if a customer needs it.
    In CVS the concept is that I develop in HEAD and bugfix in branches (which is all in the same repository / "workspace"). But how do I do that in DTR? Is there something analogous to this? Or do I always just use the track with the highest version number as the "HEAD"?
    Any input is appreciated.
    Thanks
    Bruno

    Hello Bruno,
    For each state of your product that you wish to maintain, you must create a track. So in your case, you will have a track structure as follows:
    Track1.0
    Track1.0_SP1
    Track1.0_SP2
    DTR does not support tags (yet), so the state that you wish to retain for possible future fixes must be isolated in a workspace of a given track. That is, "Track1.0_SP1" will contain the workspaces that represent the SP1 state, and a fix for SP1 must be done in this track.
    And you must develop on the Main Release track ("Track1.0") and do the bugfixes in the track for the appropriate SP. You should set up a transport connection of type "Repair" from each SP track to the Main Release track, so the fixes you make in the SP track are automatically back-transported to the Main Release track. (This connection can be set up in the "Track Connections" tab in the CMS Landscape Configurator.)
    Also note that the DTR version graph represents a global version history, so for any file you will be able to view the changes made in the different tracks (workspaces) from the Version Graph view (in the DTR Perspective of the SAP NetWeaver Developer Studio).
    Regards,
    Manohar

  • RD Session Host lock down best practice document

    Hello,
    I am currently working on deploying an RDS farm. My farm has several RD Session Host servers. Today I learned that you can do some bad things to the RD Session Hosts if a user presses
    CTRL + Alt + End while having an open session. I locked all of this down using different GPOs, which include disabling access to Task Manager and cmd, and to locking the server, reboot, shutdown, etc.
    However, this being said, how would I know what else to lock down, since I am new to this topic? I tried to find a Microsoft document about best practices for what should be locked down, but I wasn't
    successful, and unfortunately a search in the forum did not bring up anything else.
    With all the different features and options Windows Server 2008 R2 has, I do not even know where to start.
    Can someone please point me in the right direction?
    Thank you
    Marcus

    Hi,
    RD Session Host lock-down best practices are different for each business; every enterprise admin has to find the solution most suitable for their own IT infrastructure.
    I have collected some resources for you.
    Remote Desktop Services: Frequently Asked Questions
    http://www.microsoft.com/windowsserver2008/en/us/rds-faq.aspx
    Best Practices Analyzer for Remote Desktop Services
    http://technet.microsoft.com/en-us/library/dd391873(WS.10).aspx
    Remote Desktop Session Host Capacity Planning for 2008 R2
    http://www.microsoft.com/downloads/details.aspx?FamilyID=CA837962-4128-4680-B1C0-AD0985939063&displaylang=en   
    RDS Hardware Sizing and Capacity Planning Guidance.
    http://blogs.technet.com/iftekhar/archive/2010/02/10/rds-hardware-sizing-and-capacity-planning-guidance.aspx
    Technical Overview of Windows Server® 2008 R2 Remote Desktop Services
    http://download.microsoft.com/download/5/B/D/5BD5C253-4259-428B-A3E4-1F9C3D803074/TDM%20RDS%20Whitepaper_RC.docx
    Remote Desktop Load Simulation Tools
    http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=c3f5f040-ab7b-4ec6-9ed3-1698105510ad
    Hope this helps.
    Technology changes life……

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app:
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full "tempdb.dbo.[##" prefix is required.
    However, when I use this in the new report I am working on, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear but looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a DataSet and then set the location to the DataSet. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks the temp table so that the creating user has exclusive rights to it.
    Thank you
    Don
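    For illustration, here is a minimal C# sketch of the workaround Don describes: fill a DataSet with the rows the report needs and bind the report to that DataSet instead of to tempdb. The connection string, source query, table name and report file name are assumptions for the example, not anything from the original posts.
    // Hypothetical sketch: load the data that used to sit in the global temp table
    // into a DataSet and hand it to the report via SetDataSource.
    using System.Data;
    using System.Data.SqlClient;
    using CrystalDecisions.CrystalReports.Engine;

    class ReportLoader
    {
        public static ReportDocument LoadPaySheetReport(string connectionString)
        {
            var data = new DataSet();
            using (var conn = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(
                "SELECT * FROM dbo.PaySheetDtl", conn))      // assumed source query
            {
                // The DataTable name should match the table name used in the report design.
                adapter.Fill(data, "PaySheetDtl");
            }

            var report = new ReportDocument();
            report.Load(@"PaySheetDtlForN.rpt");             // assumed report file
            report.SetDataSource(data);                      // bind the DataSet instead of tempdb
            return report;
        }
    }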

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that does device tracking (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns and differ only in table name. Each device
    sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (I am working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining
    it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issue starts: they are pulling their data from our server over the internet while 2,000
    units are sending data to our server, and they keep getting read timeouts, since SQL Server gives priority to the inserts and holds all select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that the person who wrote the C# app used hard-coded SQL statements
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stops reports, zone reports, etc.; most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I should use stored procedures and views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance. 
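    As a small illustration of the per-connection workaround described above (and of the parameterized-command style that the code analysis is pushing toward), here is a hedged C# sketch. The connection string, the consolidated table name (dbo.Messages) and the exact query are assumptions, not the actual application code; only the column names come from the post above.
    // Hypothetical sketch: run the reporting SELECT under READ UNCOMMITTED so it is not
    // blocked behind the continuous inserts, and pass filters as parameters instead of
    // concatenating them into the SQL string.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class ReportQueries
    {
        public static DataTable GetMonthlyMessages(string connectionString, string unitId,
                                                   DateTime from, DateTime to)
        {
            var result = new DataTable();
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
                {
                    using (var cmd = new SqlCommand(
                        "SELECT MessageTime, MessageDate, MessageSpeed, MessageIO, MessageSatNumber " +
                        "FROM dbo.Messages " +                     // assumed consolidated table
                        "WHERE MessageUnit = @unit AND MessageCreationDate BETWEEN @from AND @to",
                        conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@unit", unitId);
                        cmd.Parameters.AddWithValue("@from", from);
                        cmd.Parameters.AddWithValue("@to", to);
                        using (var reader = cmd.ExecuteReader())
                        {
                            result.Load(reader);
                        }
                    }
                    tx.Commit();    // read-only transaction; committing simply releases it
                }
            }
            return result;
        }
    }
    On SQL Server 2005 and later, turning on READ_COMMITTED_SNAPSHOT for the database is the usual alternative: plain SELECTs under the default isolation level then read row versions automatically, with no change to the C# code.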

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table - let's call it the master page. One can click on the first column of the master page and it takes them to the "detail" page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (Session scope): SELECT * FROM Users
    Detail CachedRowSet (Session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call the detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() on the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some work around to resolve this problem, but I think this should be a fairly common usage (two page CRUD with master-detail). If we can discuss and document some best practices of doing this, it will help all the developers.
    Discussion:
    1.     What is the best practice for setting the scope of the Data Providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the Data Provider but session scope for the associated CachedRowSet.
    2.     What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3.     How to keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on?
    Thanks

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master page to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.

  • What is the best practice of deleting large amount of records?

    Hi,
    I need your suggestions on the best practice for regularly deleting a large amount of records from SQL Azure.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all the records older than 3 days, every day.
    For on-premises SQL Server I can use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting too many records at once, in my web job code I limit the number of deleted records to
    5000 and the batch delete count to 1000 each time the delete stored procedure is called:
    1. Get the total count of old records (older than 3 days)
    2. Get the total number of iterations: iterations = (total count / 5000)
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this:
    CREATE PROCEDURE PurgeRecords
        @BatchCount INT = 1000,
        @MaxCount   INT = 5000
    AS
    BEGIN
     -- Collect the keys of up to @MaxCount records older than 3 days
     -- (RecordId is assumed to be an INT key here)
     DECLARE @table TABLE ([RecordId] INT PRIMARY KEY)
     INSERT INTO @table
     SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE())
     -- Delete the collected keys from the base table in batches of @BatchCount
     DECLARE @RowsDeleted INTEGER
     SET @RowsDeleted = 1
     WHILE(@RowsDeleted > 0)
     BEGIN
      WAITFOR DELAY '00:00:01'
      DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table)
      SET @RowsDeleted = @@ROWCOUNT
     END
    END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long.
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around
    11 hours.
    Any suggestion to improve the deleting records performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times the average (avg). E.g., suppose the average per hour is 10,000 and peak time is 5 times more, which gives 50,000 per hour. This doesn't have to be precise.
    3. The desired maximum number of records to delete per batch is 5,000; this doesn't have to be exact.
    Steps:
    1. Find count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. But since they are not even and peak inserts can be 5 times the average per period, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that deletes records with creation time < today - (3 days - 4.32 * I minutes), where I is the iteration number from 1 to 1,000.
    In this way the number of records deleted in each batch is uneven and not known in advance, but it should mostly stay within 5,000; even though you run a lot more batches, each batch will be very fast.
    Frank
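    As a hedged illustration, here is a minimal C# sketch of one way to read Frank's steps in a web job: the delete cutoff is walked forward in 4.32-minute slices and stops at the 3-day retention boundary, so each DELETE covers roughly one slice of inserts. The table and column names come from the post above; the connection string, the stopping point and the exact slice arithmetic are assumptions.
    // Hypothetical sketch: time-sliced purge, assuming an index on [CreateTime] and that
    // [CreateTime] is recorded in server local time (GETDATE()), as in the original procedure.
    using System;
    using System.Data.SqlClient;

    class TimeSlicedPurge
    {
        public static void Run(string connectionString)
        {
            const int iterations = 1000;                     // step 2: 200 even batches * 5 for peak load
            const double sliceMinutes = 4320.0 / iterations; // step 3: 3 days / 1,000 = 4.32 minutes
            DateTime retentionBoundary = DateTime.Now.AddDays(-3);

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                for (int i = 1; i <= iterations; i++)
                {
                    // Step 4: advance the cutoff one slice per iteration, ending exactly at the
                    // 3-day boundary so records newer than 3 days are never touched.
                    DateTime cutoff = retentionBoundary.AddMinutes(-sliceMinutes * (iterations - i));
                    using (var cmd = new SqlCommand(
                        "DELETE FROM [MyTable] WHERE [CreateTime] < @cutoff", conn))
                    {
                        cmd.Parameters.AddWithValue("@cutoff", cutoff);
                        cmd.CommandTimeout = 300;            // each slice should finish well within this
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }
    Note that the first iteration also removes anything older than the sweep's starting point, so any accumulated backlog is cleared there; if that backlog is very large, the first DELETE will be the slow one.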

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects. For example, what are Tracks, Software Components, Development Components, etc., and when do you define them? All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see). We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model, we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as “sources” in a repository.
    A development component can be defined as a frame shared by a number of objects, which are part of the software.
    Software components combine components (DCs) to larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of the deliverables used by subsequent tracks.
    The Design Time Repository (DTR) provides versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find lot of support in SDN for the above concepts with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further
    Working with the NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • SAP Best Practice for Document Type./Item category/Acc assignment cat.

    What is the best practice for the document type and item category?
    I want to use NB with item categories B & K (blanket PO), D (service) and T (text).
    Does SAP recommend using FO only for blanket purchase orders?
    We want to use service contracts (with/without service entry sheet) for all our services.
    We want to buy assets for our office equipment.
    Which is the best one to use, NB or FO?
    Please give me any OSS notes or references for this.
    Thanks
    Nick

    Thank you very much for your response. 
    I hope I can provide some clarity on how the accounting needs to be handled per FERC regulations. The G/L balance on the utility that is selling the assets will be in the following accounts (standard accounts across all FERC-regulated utilities):
    101 - Acquisition Value for the assets
    108 - Accumulated Depreciation Value for the assets
    For example, there is a debit of $60,000,000 in FERC Account 101 and a credit of $30,000,000 in FERC Account 108. When the purchase occurs, the net book value of the assets will be on our G/L in FERC Account 102. Once we have FERC approval to acquire the plant assets, we will need to enter the Acquisition Value and associated Accumulated Depreciation onto our G/L to FERC Account 101 and FERC Account 108 respectively, with an offset to FERC Account 102.
    The method that I came up with is to purchase the NBV of the assets to a clearing account. I then set up account assignments that will track the Acquisition Value and respective Accumulated Depreciation for each asset that is being purchased. I load the respective asset values using t-code AS91 and then make an entry to the 2 respective accounts with the offset against the clearing account using t-code OASV. Once my company receives FERC approval, I will transfer the asset to new assets that have the account assignments for FERC Account 101 and FERC Account 108, using t-code ABUMN or FB01.

  • Best Practice for Managing Cookies in an Enterprise Environment

    We are upgrading to IE11 for our enterprise. One member of the team wants to set a group policy that will delete all cookies every time the user exits IE11. We have some websites that users access that use cookies to track progress in training,
    but these are deleted when the user closes the browser. What is the business best practice regarding deleting all history, temporary internet files and, especially, cookies when closing the browser?
    If you can point me to a white paper on this topic, that would be helpful.
    Thanks
    Bill

    Hi,
    Regarding cookie settings, we could manage IE privacy settings using Administrative templates for IE 11:
    Administrative templates and Internet Explorer 11
    Delete and manage cookies
    The Administrative templates for IE 11, we could download from here:
    Administrative Templates for Internet Explorer 11
    Hope this helps.
    Best regards
    Michael Shao
    TechNet Community Support

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
    As a best practice, I need to know the recommendation or guideline concerning these two database parameters.
    I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 sec (30 min).
    Maybe someone can guide me on these two parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track the full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and completion times in the alert log.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written to the datafile headers. For just a tiny amount of time the database can be consistent even though it is open and in read/write mode.
    The ARCH_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you want to set this parameter, Oracle recommends 1800, as you have said. This can vary from database to database, and it is better for you to verify it by testing.
    Regards.
    Ogan

  • Best Practice for Portable Home Directories

    What are the 'best practice' directories to sync for Portable Homes - at login and in the background? I want to make my user experience a little better than it is now.
    Login and logout take about 2 minutes - even over 100Mb Ethernet, and longer using AirPort - and 'background' home directory syncing always seems to suck up all of my network bandwidth, making apps like Safari unusable, even though I have barely changed anything in the folders I am syncing.
    My personal home directory is 1.5GB, and I keep my Music, Pictures and Movies on the network, as Apple suggests.

    I generally recommend the following for the least impact on user experience:
    1. Put your server and clients that will use mobile accounts and portable homes on a Gigabit Ethernet switch. It's a small price to pay for much more customer satisfaction.
    2. Put more RAM in the server, especially if you're dealing with a few users with large homes or several users with moderately-sized (less than 1.0GB) ones. This will also let you employ server-side tracking (for 10.5 server).
    3. Only sync at login/logout. Use Workgroup Manager to define all portable preferences. Choose to manage the login/logout sync, and specify the items to sync; for the whole home, use "~". Omit things like ~/.Trash. Choose to manage the background sync, but remove all items from the "sync these items" list. Choose to manage the background sync interval by setting it to manual. This way, the user doesn't accidentally configure a background sync: we've told it to sync nothing unless we say it can.
    --Gerrit

  • Best Practice to use a single root Application Module?

    I was reading in another thread that it may be a good idea to have all application modules nested within a single root application module (AM), so that there is only one session maintained for the root AM, versus an individual session for each AM. Is this a best practice? If yes, should the root AM be a skeleton AM (minimal custom service methods), or should you select the most heavily used AM and nest the other AMs underneath it?
    In my case, I currently have 2 AMs (and will have 3 AMs in the future), each representing a different set of use cases within the application (i.e., one supports user searches / shopping-cart-like functionality, and the second supports an enrollment process). It could be the case that a user only accesses pages on the web site to do searches (first AM), or only to do enrollment (second AM), or they may access pages of the site that use both AMs. Right now I have 2 separate AMs that are not nested. Should I nest the AMs and define a root AM?
    thanks

    Hi javaX
    The main physical effect of having 2 separate AMs is that they have their own transactions with the database, and presumably sit in the application module pool as their own instances consuming connections from the connection pool. Alternatively a single root AM with 2 nested AMs share a single transaction through the root AM; only the root AM controls the transaction in this scenario.
    As such it's a question of do you need separate transactions or will one suffice?
    How you group your EOs/VOs etc within the AMs is up to you, but usually falls into logical groups such as you have done. If a single transaction is fine, instead of creating multiple AMs, you could instead just create logical package structures instead. Neither method is right or wrong, they're just different ways of structuring your application.
    When you create a nested AM structure, within your ViewController project in the Data Control Palette you'll actually see 3 data controls, one mapped to each AM. In addition, expanding the root AM data control, you'll see the nested AMs again. Create a dummy project with a nested AM structure and you'll see what I mean.
    If you base your page definitions on anything from the root AM and it's children in the Data Control Palette, this will work on the root AM's transaction.
    If you base your page definitions on something from one of the other AM data controls that isn't inside the main root AM in the Data Control Palette, instead of using the root AM's transaction, the separate child AM will be treated as root AM and will have its own transaction.
    The thing to take care of when developing web pages is to consistently use either the root AM and its nested AMs, or the child AMs directly with their separate transactions; otherwise it might cause a bit of a nightmare debugging situation later on when the same application is locking and blocking on the same records from 2 separate AM transactions.
    Hope this helps.
    CM.

  • Best practice for Tags

    Hello,
    In packaged applications, tags are used in most of the apps. E.g., in the Customer Tracker app, we can add tags to a customer, and these tags are stored in a varchar2 column in the Customers table.
    In my case, I have predefined tags for properties (real estate) in a lookup table called TAGS, e.g. Full floor, Furnished, Fitted, Duplex, Attached... What is the best practice to tag the properties:
    1- To store these tags in a varchar column in PROPERTIES table using Shuttle box.
    OR
    2- To store them in a third table Eg, PROPERTIES_TAGS (ID PK, PROPERTY_ID FK , TAG_ID FK ), Then use LISTAGG function to show the tags in one line in the Properties Report.
    OR
    Do you have a better option ??
    Regards,
    Fateh

    Fateh wrote:
    Hello,
    In packaged applications Tags are used in most of the Apps. Eg. in Customer Tracker App, we can add tags to a customer where these tags are stored in a varchr2 column in the Customers Table.
    In my case, I have predefined tags for Properties (Real Estate) in a lookup table called TAGS . Eg, Full floor, Furnished, Fitted, Duplex, Attached...
    These appear to me to be two different use cases. In the packaged applications the tags allow end users to attach free-form metadata to data for their own purposes (these are sometimes called "folk taxonomies"). Users may use tags for different purposes, or different tags for the same purpose. For example, I might add "Monday", "Thursday" or "Friday" tags to customers because those are the days they receive their deliveries. For the same purpose you might tag the same customers "1", "8", and "15" using the route numbers of the trucks making the deliveries. You might use "Monday" to indicate that the customer is closed on Mondays...
    In your application you are assigning known, predefined attributes to the properties. This is a standard 1:M attribute model. Displaying them using the tag metaphor does not make them equivalent to free-form user tags.
    What is the best Practice to tag the properties:
    1- To store these tags in a varchar column in PROPERTIES table using Shuttle box.
    If you do this, how do you:
    - Efficiently search for furnished duplex properties?
    - Globally change "fitted" to "built-in"?
    - Report the number of properties, broken down by full floor, duplex, fitted...?
    OR
    2- To store them in a third table Eg, PROPERTIES_TAGS (ID PK, PROPERTY_ID FK , TAG_ID FK ), Then use LISTAGG function to show the tags in one line in the Properties Report.
    As in "Why to use Look up Table", this is the correct way to do it. It enables the data to be indexed for efficient retrieval, and questions like those above can be handled simply using joins and grouping.
    You might want to investigate the possibility of eliminating the ID PK and using an index organised table for this.
    OR
    Do you have a better option ??
    I'd also look carefully at your data model. Ensure you're not flirting with the EAV anti-pattern. Should some/all of these values not simply be attributes on the property?

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers that have many binaries on them to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this
    not considered best practice? 
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use ANY UNC as a source location for content, you should be aware of where that content exists in relation
    to your primary site server. If your source content is in Montana but your primary server is in California... there's going to be a WAN hit, even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it.  This will prevent side-loading of content  as well as any "accidental changing" of folder structure that could cause
    your applications/packages to go crazy.
    Put the two together and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications.  You can then safely create batch installers, manage content versions,
    and other things without fear of someone running something out of context.
