Architecture question regarding record management

We are designing a contract management system in SharePoint 2013. The contracts are separated by department and by location, and users in a particular department/location should only have access to contracts in their own department/location.
We are thinking of creating one site collection per department, which keeps us within the 200 GB content database recommendation. In each department site collection, we would have one document library with one folder per location; permission inheritance is broken on each folder to restrict access by location. We chose a single document library so that we can use out-of-the-box SharePoint views to give executives reports across all locations.
The problem with this design is that there are about 250 different locations. That means that, in each department site collection, we would have to create 250 folders in the document library and break inheritance on all 250 of them.
Another approach would be to give no one direct access to the document library and use a custom drop-off form to add documents. We could then create a web part that elevates permissions and displays only the documents each user is supposed to see. The advantage of this approach is far less broken permission inheritance. The disadvantage is that we won't be able to use the out-of-the-box views, and we'll have to implement our own search.
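For the elevated web part, something like this minimal sketch is what we have in mind (the library and field names are hypothetical, and userLocation would be resolved beforehand from a profile property or group membership):

    // Runs the query under the elevated identity; objects taken from
    // SPContext keep the original user's token, so the site is re-opened.
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite site = new SPSite(SPContext.Current.Site.ID))
        using (SPWeb web = site.OpenWeb())
        {
            SPList contracts = web.Lists["Contracts"]; // hypothetical library
            SPQuery query = new SPQuery();
            // userLocation must be validated/escaped before embedding in CAML.
            query.Query = "<Where><Eq><FieldRef Name='Location'/>" +
                          "<Value Type='Text'>" + userLocation + "</Value></Eq></Where>";
            SPListItemCollection items = contracts.GetItems(query);
            // Bind 'items' to the web part's grid or view control here.
        }
    });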
Thoughts appreciated!

You could consider "Audience Targeting". It works well with the Content Query Web Part. You can enable it on a document library, and it provides a Target Audiences column that lets you assign groups/roles to the content. This could avoid creating separate site collections and breaking permission inheritance. Be aware, though, that audience targeting only filters what is displayed; it is not a security boundary, so the underlying items remain accessible to anyone with permissions on the library.
http://technet.microsoft.com/en-us/library/cc261958(v=office.14).aspx#Section4

Similar Messages

  • Architecture Question regarding RPD (Best design practice)

Hello, I need some design help regarding the best way to build an RPD for a bank. Your help/guidance is greatly appreciated, as always.
Here is some example data:
    Revenue ALL (1 million record)
    Revenue by filter A ONLY (250k)
    Revenue by filter B ONLY (50k)
    Revenue by filter C ONLY (150k)
    Revenue by filter D ONLY (25k)
    Requirement:
    Report Revenue ALL
    Report Revenue % A of ALL = (250k / 1 million) * 100
    Report Revenue % B of ALL = (50k / 1 million) * 100
    Report Revenue % C of ALL = (150k / 1 million) * 100
    Report Revenue % D of ALL = (25k / 1 million) * 100
Should I build this from a single fact source, or should I have something like the following:
    Source one: Revenue ALL
    Source two: Revenue by filter A ONLY (250k)
    Source three: Revenue by filter B ONLY (50k)
    Source four: Revenue by filter C ONLY (150k)
    Source five: Revenue by filter D ONLY (25k)
    All of these will have ONE common Bank dimension allowing me to join across if needed.
Essentially, the question is: should I use a single source table containing ALL the data, or multiple sources, each providing exactly what I am looking for?
    Thanks,
    Jes

I would use a single source at the ALL level and then filter it as needed. Use
100.00 * COUNT(column filtered by ...) / COUNT(column)
to get your percentages.
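If it helps, the same calculation can be written directly in OBIEE logical SQL with FILTER, so a single ALL-level fact source is enough (the presentation names below are made up):

    -- Revenue % A of ALL from one fact source; FILTER restricts a measure
    -- to a condition without needing a separate source per filter.
    SELECT
       "Facts"."Revenue" AS revenue_all,
       100.0 * FILTER("Facts"."Revenue" USING "Dim"."Segment" = 'A')
             / "Facts"."Revenue" AS pct_a_of_all
    FROM "Bank Subject Area"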

  • Basic question regarding recording of sounds..

So I have this idea that I would like to use some voice in my music making, and since I don't own a dedicated microphone but do have one on my headset and one built into my Mac, I would like to use one of those.
But I can't seem to find out how to do it in Logic.
I know it's quite a simple question, but I still hope one of you can help me.
    Best Regards.

    Hi Simon,
simply set the audio device to "Built-in" and select the microphone in the OS X audio preferences. That should work. Don't forget Logic can only use one device at a time, so if you are using an audio interface, you'll have to build an aggregate device or switch back and forth.
    Fox

  • Have a Question regarding RTP Manager?

Please, somebody tell me!
I am creating an RTP player for each new receive stream using RTPManager.
I have some questions:
What is the role of SourceDescription in initialize(SessionAddress[] local, SourceDescription[] source, double d, double d, EncryptionInfo encry)?
Is there any role for addTarget(SessionAddress dest) needed in my application?

What is the role of SourceDescription in initialize(SessionAddress[] local, SourceDescription[] source, double d, double d, EncryptionInfo encry)?
From the API:
sourceDescription - An array of SourceDescription objects containing information to send in RTCP SDES packets for the local participant. This information can be changed by calling setSourceDescription() on the local Participant object.
So, the source description is used to tell the remote participants things about the sender.
Is there any role for addTarget(SessionAddress dest) in my application?
addTarget tells the RTPManager where to send the RTP streams, so if you expect anyone to receive your streams, you damned well better add them as a target.
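Putting both answers together, a minimal sketch of the setup (the address, port and bandwidth fractions are made-up or default values, and encryption is left off):

    import java.net.InetAddress;
    import javax.media.rtp.RTPManager;
    import javax.media.rtp.SessionAddress;
    import javax.media.rtp.rtcp.SourceDescription;

    public class RtpSessionSetup {
        public static void main(String[] args) throws Exception {
            RTPManager mgr = RTPManager.newInstance();

            // Local endpoint; RTCP conventionally uses the data port + 1.
            SessionAddress local =
                new SessionAddress(InetAddress.getLocalHost(), 42050);

            // SDES items sent about the local participant in RTCP packets;
            // the CNAME is the one item every participant must provide.
            SourceDescription[] sdes = new SourceDescription[] {
                new SourceDescription(SourceDescription.SOURCE_DESC_CNAME,
                        SourceDescription.generateCNAME(), 1, false),
                new SourceDescription(SourceDescription.SOURCE_DESC_NAME,
                        "Example sender", 1, false)
            };

            // 0.05 / 0.25 are the customary RTCP bandwidth fractions.
            mgr.initialize(new SessionAddress[] { local }, sdes, 0.05, 0.25, null);

            // Without addTarget() no packets are ever sent anywhere.
            SessionAddress dest =
                new SessionAddress(InetAddress.getByName("192.168.1.20"), 42050);
            mgr.addTarget(dest);
        }
    }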

  • Question regarding recording vocals?

Hi, I'm considering purchasing an iMac real soon. Will GarageBand allow me to record vocals while playing an MP3, WAV, CD, etc. from an internal player on the Mac? For instance, I currently have a Windows PC on which I use Winamp to play the audio and CoolEditPro to record that same audio and vocals simultaneously. Will I be able to do this using GarageBand and an internal audio player that comes with the Mac?
    Thanks
    Wil

They are recorded as WAV files on your player, so you should be able to transfer them to your computer using MediaSource.

  • Question regarding Records and Collection

Hi all,
I have a record/collection, say rec_t:
FOR i IN rec_t.FIRST .. rec_t.LAST
LOOP
    -- I need a condition with which I can find out if the loop is at the last record,
    -- e.g. "IF rec_t.LAST ..." -- I know this is not working, but do we have something to get this?
END LOOP;

padders wrote:
Well, at least we're all consistent :-D
Yeah, I saw that... :-)
But I would have posted:
IF i = rec_t.COUNT THEN ...
I cannot recall ever using the .LAST and .FIRST methods for normal array processing. Using
FOR i IN 1 .. array.COUNT LOOP ...
is a lot safer, as it deals with array variables with no value (null) just fine. No need for additional code to check for a null state prior to entering the FOR loop.
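A minimal sketch of that pattern, assuming rec_t is a densely filled collection (indexes 1..COUNT with no gaps):

    FOR i IN 1 .. rec_t.COUNT LOOP
        -- normal per-row processing here
        IF i = rec_t.COUNT THEN
            -- this iteration is the last record; special-case work goes here
            NULL;
        END IF;
    END LOOP;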

  • Where to post Architecture question regarding Hyp Plan/Essbase installation

*1st installation (uses servers HypDB1 and Hypapp1) ----- PRODUCTION*
Server HypDB1: Essbase1, Hyperion Shared Services1, EAS1, ODI, Oracle 11g
Server Hypapp1: Hyperion Planning1, Studio1, Apache, Workspace1
*2nd installation (uses servers HypDB2 and Hypapp2, BUT shares the Oracle 11g of the 1st installation) ----- PROPOSED ADD-ON TO PRODUCTION*
Server HypDB2: Essbase2, Hyperion Shared Services2, EAS2, ODI
Server Hypapp2: Hyperion Planning2, Studio2, Apache, Workspace2
Is the above possible?

Another idea I found comes from this app note:
http://www.xjtag.com/app-note-16.php
But there is a note in the following app note that speaks to the DONE signal:
http://www.xjtag.com/app-note-14.php
    "When the programming operation of a Xilinx FPGA completes it will toggle its DONE signal; if this occurs when it is not expected then the PROM or processor that configures the FPGA can automatically re-start the process of programming the FPGA with its functional image (undoing the clearing that has just been done through XJTAG)."
    I couldn't find any info about this in the 7 Series Config User Guide. Does this apply to the 7 Series FPGAs?

  • Design/Architecture questions regarding streams, triggers, JMS, XML

    I need to devise a way to enable an application that I am currently developing to act upon a change to a database table and to generate a message to notify an external system of the change.
    The scenario I envisage is:
    - A 3rd party (a user or a batch job) modifies (INSERT, DELETE, UPDATE) a row in a given table.
    - There is a trigger that is fired by the modification which would put a message onto a Stream (thus the notification would be persistent if the database were to go down).
- A Java server process would be running that continually checks the Stream (referenced above) for a message. When a message has been dropped onto the queue by the trigger, the Java server process would read it, determine what had changed, and then generate an XML message to pass on to the external system to notify it of the change. NOTE: the external system would not have access to the database, so the outbound XML message would have to contain all of the information that describes what has changed.
    This sounds simple enough to me, but there is a fair bit of "hand waving" around how this would actually work in practice!
Can anyone suggest any Oracle infrastructure that I might be able to use to ease any of this? My main area of concern is how the trigger should indicate what has changed when a modification occurs. I could have the trigger write a message in a simple (but proprietary, or potentially XML) format to the Stream queue; the Java server process could then read the message (via OJMS), parse it, and use JDBC to determine what modification actually occurred, but this could be quite a bit of work... Is there a smarter way to do this?
    Perhaps I could use XMLDB to allow the trigger to immediately render the change into XML format which would slightly ease the parsing that the Java server process has to do.
    Any help would be greatly appreciated!
    Thanks,
    James
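To make the trigger-to-queue step concrete, here is a minimal sketch of the enqueue side using Oracle AQ with a JMS text payload, which the Java process can then consume via OJMS (the table, column and queue names are made up):

    CREATE OR REPLACE TRIGGER orders_change_trg
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    DECLARE
        l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
        l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
        l_msgid RAW(16);
        l_msg   SYS.AQ$_JMS_TEXT_MESSAGE;
        l_op    VARCHAR2(1);
    BEGIN
        l_op  := CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END;
        l_msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
        -- Render the change as XML here, so the consumer needs no DB access.
        l_msg.set_text('<change table="ORDERS" op="' || l_op || '" id="' ||
                       NVL(TO_CHAR(:NEW.id), TO_CHAR(:OLD.id)) || '"/>');
        DBMS_AQ.ENQUEUE(queue_name         => 'change_queue',
                        enqueue_options    => l_opts,
                        message_properties => l_props,
                        payload            => l_msg,
                        msgid              => l_msgid);
    END;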


  • Records Management

    Hello!
I've got a question about Records Management.
When I create a record model and want to set the visibility for a node, this option is not available: the Roles dialog opens and lists the roles, but I can't choose any of them. "Enter" does not close the dialog, and the green tick in the dialog is unavailable.
Why does this happen?
    Regards,
    Anthony

Keerthika,
this does not work for either field, neither for "visible in role" nor for "ID".
The difference is:
when I choose F4 help for "visible in role", a choice dialog for single or composite roles appears, but when I choose any of them it disappears with the message
"No role was selected
Message no. SRM_BR052"
If I call F4 for "ID", the list of single or composite roles appears, but the green tick is disabled and I can't choose any of them.
Note: this option does not work for any type of node, neither model nodes nor structure nodes.
Note 2: the other options, such as relationships, attributes and element types, work fine!

  • Architecture question, global VDI deployment

I have an architecture question regarding the use of VDI in a global organization.
We have a pilot VDI Core with a remote MySQL setup and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from a single pane of glass, with multiple Desktop Provider groups representing the geographical locations.
Is it possible to just set up additional VDI Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
We want to host several applications which will be accessed as
www.oursite.com/app1 and www.oursite.com/app2 (all using port 80 or 443).
Is it possible to have a separate WebLogic domain for each application, all listening on ports 80 and 443?
    Thanks,
    Paul

  • A question regarding Management pack dependency.

    Hi All,
I am new to SCOM, and I have a question regarding management pack dependency.
My question is: is a dependency required when new alerts are created in an unsealed MP and the object class selected during alert creation (e.g. Windows Server 2012 Full Operating System) is defined in a sealed management pack?
For example, I have a sealed Windows Server 2012 monitoring management pack.
I have made a custom one for Windows Server 2012. So, if the custom MP is not dependent on the sealed Windows Server 2012 monitoring management pack, can I still create alerts in the custom management pack targeting the class Windows Server 2012 Full Operating System?

Hi CyrAz,
Thank you for the reply. Now, if your understanding and mine are the same, then look at what happened below.
I created an alert monitor targeting a Windows Server 2012 class in my custom management pack, which is not dependent on the Windows Server 2012 management pack. How was I able to create it when the dependency is not there at all? If our understanding is correct, an error should have been thrown while creating the monitor itself, right? So how was SCOM able to create it?
Look at the screenshot below.
I was able to create a monitor targeting Windows Server 2012 Full Operating System, with an alert, in the custom management pack, which is not at all dependent on the Windows Server 2012 sealed MP.
Look at the dependencies of the management pack: the Windows Server 2012 management pack is not listed, as my custom management pack does not depend on it.
So how is this possible?
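For what it's worth, a "dependency" is just a <Reference> entry in the unsealed MP's manifest, and the console normally adds one for you when you pick a target class from a sealed MP, which would explain why the creation succeeded. A rough sketch (the alias, ID and version below are illustrative, not taken from an actual MP):

    <References>
        <!-- The alias is then used as the class prefix elsewhere in the MP,
             e.g. Target="Win2012!Microsoft.Windows.Server.2012.FullOperatingSystem" -->
        <Reference Alias="Win2012">
            <ID>Microsoft.Windows.Server.2012.Monitoring</ID>
            <Version>6.0.7061.0</Version>
            <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
        </Reference>
    </References>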

  • Few questions regarding Oracle Scorecard and strategy management.

    Hi,
I have the following questions regarding Oracle Scorecard and Strategy Management:
1. What are the ways in which I can show the status of a KPI? We have colors and symbols; what are the others?
2. Can we keep a log of KPIs, store them, and keep a report of the feedback/actions taken on them?
3. Does Scorecard and Strategy Management have the ability to retain a history of feedback and a log of entries, i.e. date/time and user name?
4. Does Scorecard and Strategy Management have the ability to use common mathematical formulas, e.g. median, average, percentiles?
    Thanks in advance for your help.

    bump.

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

Hi guys,
I have some questions regarding the DocumentDB Public Preview capacity and performance quotas.
My use case is the following:
I need to store about 200,000,000 documents per day with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure. This would result in needing at least 5 CUs just to handle the inserts.
Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
When I look at the actual RU consumption, I get values I don't really understand:
Batch insert of 50 documents: about 770 RUs
Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example c# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
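For reference, the batch path I am measuring looks roughly like this sketch (the sproc comes from the sample project; variable names are placeholders, and RequestCharge is where I read the RU numbers):

    // Execute the bulk-import sproc with one batch of documents; the
    // response reports how many were inserted and the RUs consumed.
    StoredProcedureResponse<int> result =
        await client.ExecuteStoredProcedureAsync<int>(
            sproc.SelfLink,
            new object[] { recordBatch }); // e.g. an array of 50 documents
    Console.WriteLine("Inserted: {0}, RUs: {1}",
        result.Response, result.RequestCharge);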
Is there any flaw in my assumptions (ok... obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
I have another question regarding document retention:
Since I need to store a lot of data per day, I also need to delete as much data per day as I insert.
The data is valid for at least 7 days (it should actually be 30 days, depending on my options with DocumentDB).
I guess there is nothing like a retention policy for documents ("this document is valid for X days and will automatically be deleted after that period")?
Since deleting data on a single-document basis is no option at all, I would like to create one document collection per day and delete each collection after the specified retention period.
Those historic collections would never change and would only receive queries. The only problem I see with creating collections per day is the missing throughput:
As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection I actually insert documents into).
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
Needed throughput for the hot collection (values from the documentation): 20,000
=> 70 CUs (20,000 / 286)
vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so we can switch to DocumentDB when it is publicly available.
Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and get horrible query execution times there because of full table scans.
    Once again my desired setup:
200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention... but I guess this won't be an option at all?
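To make the per-day idea concrete, the housekeeping I have in mind is roughly this sketch (the collection naming and the 7-day window are my assumptions):

    // Create today's collection and drop the one that fell out of the
    // retention window; requires Microsoft.Azure.Documents.Linq and System.Linq.
    DocumentCollection today = await client.CreateDocumentCollectionAsync(
        database.SelfLink,
        new DocumentCollection { Id = "events-" + DateTime.UtcNow.ToString("yyyyMMdd") });

    string expiredId = "events-" + DateTime.UtcNow.AddDays(-7).ToString("yyyyMMdd");
    DocumentCollection expired = client.CreateDocumentCollectionQuery(database.SelfLink)
        .Where(c => c.Id == expiredId)
        .AsEnumerable()
        .FirstOrDefault();
    if (expired != null)
        await client.DeleteDocumentCollectionAsync(expired.SelfLink);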
I hope you understand my problem, and I would appreciate any advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
Best regards and thanks for your help

    Hi Aravind,
    first of all thanks for your reply regarding my questions.
I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents per second and CU as expected.
According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/), I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
As described in my post, the actual usage is multiple (actually 6-7) times higher than expected... even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
Here again my findings regarding RU consumption for batch inserts:
Automatic indexing on: 777 RUs for 50 documents
Automatic indexing off & mandatory path only: 655 RUs for 50 documents
Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
Since DocumentDB is still in preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and available CUs, and I am fine with that.
If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options... which would also be "fine". ;-)
Is there actually any working example code that manages 500 single inserts per second with one CU's 2,000 RUs, or is this a purely theoretical value? Or is it just because this is a preview, and the stated values are planned to work later?
Regarding your feedback:
"...another thing to consider is if you can amortize the request rate over the average of 200M requests/day = 2,000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching RequestRateTooLargeExceptions and retrying after the server-specified retry interval..."
Sadly this is not possible for me, because I have to query the data in near real time for my use case... so queueing is not an option.
"We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases, each with one or multiple collections (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to scale down the databases holding historic data manually. I also added a feature request, as proposed by you.
Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
So if you are able to answer just one question, it would be:
How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
Best regards and thanks again

  • TREX index for Records management in CRM

    Hi All,
Hopefully this is the correct forum for this question; if not, please let me know the correct one.
Does anyone have any information on linking TREX to RMPS?
This is to resolve an issue where the case search on the business partner first and last name fields is case-sensitive in the electronic desk. In the DMWB we have ticked the Index-Relevant option, but have been advised that we also need the link to TREX.
I have set up a TREX search index for the Solutions Database in the past, but setting this up for Records Management doesn't seem to be as easy.
    Many thanks in advance.
    Gary Hawkins

Hi Vishnu,
Try these BAPIs: BAPI_BUSPROCESSND_CREATEMULTI
BAPI_BUSPROCESSND_SAVE
Regards,
Arun Kumar
Reward points if it helps.

  • Path processing in Records Management

Hi, all.
I am a beginner in Records Management and, I hope, my problem will be solved easily by the professionals.
I want to add some custom activities to a disposition path. For this purpose, I declared, and am trying to implement, my own class using the interface IF_SRM_DIS_WFPATH_ACTIVITY.
But I have a problem implementing the method EXECUTE_FUNCTION of this interface. The import parameters of the method contain the parameter IM_DP_SEL_OBJECTS (a list of POID IDs for the selected objects), but I can't translate the POID IDs into object references.
The standard beginning
service_object = me->if_srm~get_srm_service( ).
does not work: the interface IF_SRM is not declared in my class, and nothing with this interface is passed in from the calling class.
I would be very pleased if anybody could help me.
Best regards,
Konstantin

    Dear Poster,
As no response has been provided to this thread in some time, I must assume the issue is resolved. If the question is still valid, please create a new thread rephrasing the query and providing as much data as possible, to promote a response from the community.
    Best Regards,
    SDN SRM Moderation Team
