Best practice to populate metadata of the content based on the folder

Hi,
What is the best practice for automatically populating the metadata of content being checked in, based on the folder it arrives in?
The folder I have may be a contribution folder or a collab project folder.
But I would like to populate the metadata of the content automatically when the content is dropped into a folder using the desktop integrator.
Thanks,
Leo

Yes Leo, that's correct: all documents inheriting the metadata of the folder, and the option to propagate changes to documents and sub-folders, is out-of-the-box functionality.
Just create a folder and set the metadata fields you want, then add some documents via the desktop integration, or simply via WebDAV (you can map UCM as a web folder in Windows Explorer without having to install the UCM desktop integration). All the documents should have the folder's metadata by default.
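If you ever need to set the folder defaults programmatically rather than through the UI, a rough RIDC sketch follows (hedged: COLLECTION_UPDATE should be the Folders_g folder-update service, but verify it against your version; the host, port, folder ID, and the xDepartment field below are placeholders for illustration):

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;

public class FolderDefaultMetadata {
    public static void main(String[] args) throws Exception {
        IdcClientManager manager = new IdcClientManager();
        IdcClient client = manager.createClient("idc://ucmhost:4444"); // placeholder host/port
        IdcContext ctx = new IdcContext("sysadmin");

        // Set default metadata on the folder; documents dropped in via the
        // Desktop Integration or WebDAV should then inherit these values.
        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "COLLECTION_UPDATE");
        binder.putLocal("hasCollectionID", "true");
        binder.putLocal("dCollectionID", "12345");   // placeholder folder ID
        binder.putLocal("xDepartment", "Finance");   // hypothetical custom field
        client.sendRequest(ctx, binder);
    }
}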
Give it a try and let me know how you go.
Regards,
Juan

Similar Messages

  • Best practice for multiple instances of the same BEX query

    Hi there,
    I'm wondering what's the best way to use multiple instances of the same BEX query. Let me explain what I mean:
    I have a dashboard with different queries covering different periods of time, such as week to date, month to date, and so on. One query for each, since they are based on a user exit.
    For each query I want to show different data in different sections of my dashboard, for example: sales per director, sales per customer group, sales per day, sales per week, and the like. I tried to connect a simple bar chart via a direct connection, but with no success due to the multiple lines generated by the addition of the sales director, customer group, week number, and so on.
    My question is about the way to connect the different queries efficiently in order to show the different data while avoiding multiple useless lines.
    The image above shows the query browser where, for example, for a month-to-date query there will be multiple lines for each week as well as one line for each director. If, for two different components, I want to show data per week and data per director or another representation, what is the best practice:
    Add another instance of the same query and only put the week information in it, and another with only the director info?
    Should I bind those to the Excel file and use formulas to make the final calculations?
    Will there be performance issues from adding different instances of the same query?
    I have 6 different queries (read: 6 user exits that filter time).
    Depending on the best practice there might be 4 instances of each, for a total of 24 instances in the query browser.
    I hope my question is clear enough, if not please do not hesitate I'll clarify as much as possible.
    Regards,
    Steve

    Hi Steve,
    You might have been trying to find a solution for a long time; if I understood your question correctly, let me clarify a few points for you.
    You are trying to access the BEx query, which is designed with the exits in the background based on the logic, and you are trying to call all the dimensions and key figures in a single connection, then map that data into the charts.
    Steve, try to make more connections based upon the logic and split them: use the same query, but split it into sales per customer group, sales per day, and sales per week by making three different connections, and try that. You can merge the prompts from all the connections.
    Hope this Helps!!!
    Sorry if i misunderstood your question.
    --SumanT

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and the Viewer look and feel.
    If any of you have been able to add additional lines into the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
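    For what it's worth, disabling the predictor is a one-line change in pref.txt. Here is a sketch from memory (treat the exact preference name as an assumption and verify it against your own pref.txt, and remember to run the applypreferences script afterwards so the change takes effect):
    QPPEnable = 0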
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com

  • Best practice: store images outside the WAR file?

    I have an EAR project with several thousand images that are constantly changing. I do not want to store the images in the WAR project since it will take an extremely long time to redeploy with every image change. What is the best practice for storing images? Is it proper to put them in the WAR and re-deploy? Or is there a better solution?

    Perryier wrote:
    Can you expand on this? Where do they get deployed and in what format? How do I point to them on a jsp? I am using Sun Application Server 9.0, and I don't really think this has a "stand alone" web server. How will this impact it?
    You could install any web server you want (Apache?). The request comes in, and if the request matches something like .jpg or .gif or whatever, you serve up the file. If you have a request for a JSP or whatnot, you forward the request to the app server (Sun App Server in your case), i.e. your web server acts as a content-aware proxy.
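    Alternatively, if you'd rather keep everything inside the app server, a small servlet can stream images from a directory outside the WAR, so image changes never require a redeploy. A minimal sketch, assuming an external directory /var/app/images and a web.xml mapping of /images/* (both hypothetical):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ImageServlet extends HttpServlet {
        private static final String IMAGE_DIR = "/var/app/images"; // hypothetical external directory

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String path = req.getPathInfo();
            if (path == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            // Keep only the file name, which rejects path-traversal attempts.
            File file = new File(IMAGE_DIR, new File(path).getName());
            if (!file.isFile()) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            resp.setContentType(getServletContext().getMimeType(file.getName()));
            resp.setContentLength((int) file.length());
            InputStream in = new FileInputStream(file);
            try {
                OutputStream out = resp.getOutputStream();
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) > 0; ) {
                    out.write(buf, 0, n);
                }
            } finally {
                in.close();
            }
        }
    }

    A JSP can then just reference <img src="images/whatever.jpg">, and redeploying the WAR never touches the image files.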

  • Best practices when it comes to online content creation

    I'm trying my luck since Google is not really providing me the results I'm looking for, or I'm using the wrong word searches. Is there any information available that lets a novice like me know, when developing online content, what some of the best practices are to adopt from a user perspective or human-factors engineering? Examples of some scenarios:
    - When developing a text caption, what should the appropriate font size be?
    - When using a light-colored background, what should the text caption and highlight box look like?
    - In software simulation and adopting the TTS agent, should I have only one voice or should I have multiple voices?
    Any advice or point of reference is appreciated.
    Regards
    AJ

    Definitely no internet-wide standards for any kind of presentation, but most mid-to-large sized companies have a standards document somewhere.
    That said, if you're like me and lacking in visual design sensibilities, the answer is simple - theft!  Find a video or training presentation you like, and adapt it.  The rules say you can't re-use their actual content, but nothing is stopping you from ethically borrowing a few aspects of a visual style - font, background colors, etc.
    Also, your approach will evolve over time, as you gain experience, and the specifics will change quite a bit depending on your subject matter, and your intended audience.  A marketing video is distinct from an informational video in some fundamental ways, and aiming at customers vs. prospects will also significantly affect how you present your message.
    If all else fails, go to Fiverr.com and ask someone to build you a title slide based on your company logo and the title of your piece.  That should give you a great starting point, for $5!

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table; let's call it the master page. One can click on the first column of the master page and it takes them to the "detail" page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (Session scope): SELECT * FROM Users
    Detail CachedRowSet (Session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call the detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() on the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail, and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find a workaround to resolve this problem, but I think this should be a fairly common usage (two-page CRUD with master-detail). If we can discuss and document some best practices for doing this, it will help all developers.
    Discussion:
    1.     What is the best practice on setting the scope of the Data Providers and CachedRowSet? I noticed that in the tutorial examples, they used page/request scope for the Data Provider but session scope for the associated CachedRowSet.
    2.     What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3.     How to keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on.
    Thanks
    Message was edited by:
    Sabir

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.
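    For reference, this is the kind of row-key hand-off I have in mind, in Creator-style backing beans (a rough sketch: the bean properties and method names are hypothetical, and the data provider calls need checking against the actual API):

    // Master page: remember the selected row key and the table's first displayed
    // row before navigating, so the pagination position can be restored later.
    public String usersTable_linkAction() {
        com.sun.data.provider.RowKey rowKey = getTableRowGroup1().getRowKey();
        getSessionBean1().setSelectedUserRowKey(rowKey);
        getSessionBean1().setMasterFirstRow(getTableRowGroup1().getFirst());
        return "detail"; // navigation case defined in faces-config.xml
    }

    // Detail page: commit the change, then refresh the master provider once so
    // the master page reflects the edit when this user returns to it.
    public String saveButton_action() {
        detailDataProvider.commitChanges();
        getSessionBean1().getMasterDataProvider().refresh();
        return "master";
    }

    // Master page again: restore the remembered pagination position.
    public void prerender() {
        getTableRowGroup1().setFirst(getSessionBean1().getMasterFirstRow());
    }

    This would still only refresh the master for the session that made the edit; cross-session freshness would need the refresh in the master page's prerender() or an application-scoped provider.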
    Message was edited by:
    Sabir

  • Best practice implementation. What are the steps for this?

    We had an upgrade from CRM 4 to 7 undertaken by some outfit (name deleted). After the upgrade was complete, it looks as though the company carried on using the GUI interface rather than the WebUI interface, mainly because that's how CRM 4 worked. Now they would like to use the WebUI interface of CRM 7, as the GUI interface is no longer supported, but are receiving a number of errors when using certain features of the WebUI. It looks as though a lot of config is missing, especially in the UI Framework area. I can only assume that whichever company performed the upgrade simply skipped this section when upgrading.
    I assume that I could download the best practice install/upgrade (?) and then just execute the section regarding the UI Framework, if such a section exists. Bearing in mind that there seems to be a lot of config missing in the UI Framework section, would you recommend the course of action that I have mentioned?
    Our WebUI Interaction centre is giving errors when we go in and I have been informed that I need to complete the config for:
    Customer Relationship Management->UI Framework->UI Framework Definition->Maintain Runtime Framework Profile
    But as I mentioned, there are lots of other sections in the UI Framework area that are empty, hence the suggestion I made above. However, I would specifically be interested to hear from anyone who can tell me what the entries are in the view table BSPWDV_RF_PROF_C, and possibly the tables BSPWDV_CTRL_REPL and BSPWDV_DEF_CC.
    I know this only completes part of the config, but it might be enough so that the WebUI IC can be viewed.
    On another subject: I have just come into this company, and if I wanted to see what had been installed, how do I go about that? For example, if I wanted to know whether there had been an upgrade from 4 to 7 for a particular industry solution, where do I check this?
    Jason

    I have been through the following steps:
    Entered this URL http://help.sap.com/bp/initial/index.htm
    Clicked on 'Cross-industry Packages'
    Clicked on 'CRM'
    Clicked on 'English'
    Then the following page is displayed:
    http://help.sap.com/bp_crm70/CRM_DE/HTML/index.htm
    But now what? How do I get the best practice instructions for a CRM implementation?
    Jason

  • What are some best practices for Effective Sequences on the PS job record?

    Hello all,
    I am currently working on an implementation of PeopleSoft 9.0, and our team has come up against a debate about how to handle effective sequences on the job record. We want to fully grasp and figure out what is the best way to leverage this feature from a functional point of view. I consider it to be a process-related topic, and that we should establish rules for the sequence in which multiple actions are inserted into the job record with a same effective date. I think we then have to train our HR and Payroll staff on how to correctly sequence these transactions.
    My questions therefore are as follows:
    1. Do you agree with how I see it? If not, why, and what is a better way to look at it?
    2. Is there any way PeopleSoft can be leveraged to automate the sequencing of actions if we establish a rule base?
    3. Are there best practice examples or default behavior in PeopleSoft for how we ought to set up our rules about effective sequencing?
    All input is appreciated. Thanks!

    As you probably know by now, many PeopleSoft configuration/data (not transaction) tables are effective dated. This allows you to associate a dated transaction on one day with a specific configuration description, etc. for that date, and a different configuration description on a different transaction with a different date. Effective dates are part of the key structure of effective dated configuration data. Because the effective date is usually the last part of the key structure, it is not possible to maintain history for effective dated values when data for those configuration values changes multiple times in the same day. This is where effective sequences enter the scene: effective sequences allow you to maintain history regarding changes in configuration data when there are multiple changes in a single day.
    You don't really choose how to handle effective sequencing. If you have multiple changes to a single setup/configuration record on a single day and that record has an effective sequence, then your only decision is whether or not to maintain that history by adding a new effective sequenced row or updating the existing row. Logic within the PeopleSoft delivered application will either use the last effective sequence for a given day, or the sequence that is stored on the transaction. The value used by the transaction depends on whether the transaction also stores the effective sequence.
    You don't have to make any implementation design decisions to make this happen. You also don't determine what values to use or how to sequence transactions. Sequencing is automatic: each new row for a given effective date gets the next available sequence number. If there is only one row for an effective date, then that transaction will have a sequence number of 0 (zero).
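    To make the "latest date, then highest sequence" selection concrete, here is a sketch of the classic current-row query against the job record (EMPLID, EMPL_RCD, EFFDT, and EFFSEQ are the standard PS_JOB keys; the JDBC wrapper and connection details are just placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CurrentJobRow {
        public static void main(String[] args) throws Exception {
            // For each employee/record, take the latest effective date not in the
            // future, then the highest effective sequence within that date.
            String sql =
                "SELECT EMPLID, EFFDT, EFFSEQ, ACTION FROM PS_JOB J" +
                " WHERE J.EFFDT = (SELECT MAX(J2.EFFDT) FROM PS_JOB J2" +
                "   WHERE J2.EMPLID = J.EMPLID AND J2.EMPL_RCD = J.EMPL_RCD" +
                "     AND J2.EFFDT <= CURRENT_DATE)" +
                "   AND J.EFFSEQ = (SELECT MAX(J3.EFFSEQ) FROM PS_JOB J3" +
                "   WHERE J3.EMPLID = J.EMPLID AND J3.EMPL_RCD = J.EMPL_RCD" +
                "     AND J3.EFFDT = J.EFFDT)";
            Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/HR", "user", "password"); // placeholders
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery(sql);
                while (rs.next()) {
                    System.out.printf("%s %s seq %d %s%n", rs.getString(1),
                        rs.getDate(2), rs.getInt(3), rs.getString(4));
                }
            } finally {
                conn.close();
            }
        }
    }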

  • Best Practices for configuring ICMP from the outside

    Question,
    Are there any best practices or recommendations on how ICMP should be configured from the outside? I have been cleaning up the rules on our ASA, as a lot were simply ported over years ago when we retired our PIX. I noticed that there is a rule to allow ICMP any any, and began to wonder how this works when the rules above it are for specific IP addresses and specific ports. This in turn started me looking to see if there was any documentation or anything to help me determine a best practice. Anyone know of anything?
    As a second part, how does this flow on a firewall if all the addresses are NATted? Is the ICMP traffic simply passed through the NAT, and the destination simply responds?
    Brent

    Here you go, bro!
    http://checkthenetwork.com/networksecurity%20Cisco%20ASA%20Firewall%20Best%20Practices%20for%20Firewall%20Deployment%201.asp#_Toc218778855
    access-list inside permit icmp any any echo
    access-list inside permit icmp any any echo-reply
    access-list inside permit icmp any any unreachable
    access-list inside permit icmp any any time-exceeded
    access-list inside permit icmp any any packets-too-big
    access-list inside permit udp any any range 33434 33464
    access-list inside deny icmp any any log
    P.S.: if you think this comment is useful, please do rate it nicely :-)

  • Two localizations of the same Best Practice on one instance on the same client

    Hi Gurus -
    I have a situation where I need to install the Food & Beverage Best Practice for more than one localization on the same instance.  This company has locations in more than one country and needs Best Practices for those countries.  Is it possible, for example, to install Best Practice F&B for US and Best Practice F&B for Germany on the same client on the same instance?  If so, how?  Also is there any documentation on this?
    Regards,
    Jim McCollum

    No one knows anything about this?

  • Lync 2013 Best Practices Analyzer cannot scan the Edge server details

    Hi All,
    I've encountered one strange issue: the Lync 2013 Best Practices Analyzer tool can find that there is one Edge server in the Lync infrastructure when scanning, but the scan result does not display the Edge server details the way it does for the Front End server (the Front End server scan shows all details such as hardware, CPU, FQDN, and so on, but the Edge server scan does not).
    Anyone can help, much appreciated.
    Elva

    It seems to be a network issue.
    You should check that you have the proper network access to the Lync Edge Server, as the Lync Edge Server is not in the same subnet as the Lync Front End Server.
    Lisa Zheng
    TechNet Community Support

  • Defining metadata at the folder level.

    We are migrating from Content DB to Webcenter Content 11.1.1.7, using FrameworkFolders.
    Our old Content Management solution (Content DB), allowed us to define metadata on a folder level. For example a group of metadata (a category in Content DB) could be defined and applied to a specific folder. When documents are uploaded, the server enforces the metadata to be populated for all documents uploaded to that folder and all child folders.
    WebCenter Content seems to behave quite differently. From what I can tell, this is set up in the "Configuration Manager" admin applet. Metadata fields are defined in the "Information Fields" tab. Sets of fields are defined as a "Rule". Rules are enforced on uploaded content through "Profiles". A given profile is enforced by the use of a "Trigger". The trigger is specified to look at a specific field, which is global to everything on the server. This is all documented here: http://docs.oracle.com/cd/E23943_01/doc.1111/e10978/c04_metadata.htm#DAFJDCEH
    Having the trigger field be global seems to be very restrictive. We will have several different systems storing content on the server. Isn't this a big limitation of the product?
    How can I create a rule to enforce metadata to be populated for all content that is uploaded to a folder?
    Thanks!

    I'd say the answer is: create a folder and assign it a set of metadata (and inherit, if you will).
    I don't think WebCenter Content, or UCM, has a concept equal to the mentioned category as described; i.e. an object that represents metadata settings that can be assigned to 'something'. There are, however, many concepts that are close:
    - rules/profiles mentioned in the initial post - a profile (consisting of rules) may correlate to a "content type". It defines what fields make sense for this particular content item, what is required, default, etc. (the relationship between fields). The caveat is that profiles actually define just how content items behave in the GUI; (global) rules can also enhance back-end processing (e.g. fill in an automated contentID identifier if missing), but in reality they don't define the data model
    - folders - folders help to define a hierarchy on items. As Srinath explained, we can also benefit from propagation of metadata, or inheritance. This is probably the closest concept to the Content DB category, but the holder is not the category, but the folder itself. Keep in mind also that the folder is an optional parameter, and metadata inherited from a folder can be overridden for both items and folders
    - in URM, where the categorization is more strict, there is another hierarchical structure: the retention category (which can be assigned other settings, esp. disposition rules, and which can contain retention folders)
    - yet another concept that might be useful in some use cases is folios; a folio is an item (an XML file) that has its own metadata and can reference other content items. However, operations on a folio item OOTB do not affect the contained content items, and a content item can be referenced by several folios.
    Finally, if no standard concept helps, you may always create your own customization via filters. A filter is a Java piece of code, which can be "hooked" to standard events (like "uploading a file", or check-in as it is called in UCM) and which can do whatever validation you like. For instance, I created a component which checks that an assigned quota on a folder to which an item is uploaded is not exceeded.
    I assume that for your use case, you might be good to go with a combination of folders and profiles (a profile setting can also be assigned to a folder as default metadata). Let us know if you get stuck.
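    To illustrate the filter idea, here is a minimal sketch of a check-in validation filter (hedged: the class name and the xDepartment field are illustrative, and the exact filter event you register it against in the component definition is assumed; FilterImplementor itself is the standard Content Server extension point):

    import intradoc.common.ExecutionContext;
    import intradoc.common.ServiceException;
    import intradoc.data.DataBinder;
    import intradoc.data.DataException;
    import intradoc.data.Workspace;
    import intradoc.shared.FilterImplementor;

    public class RequireDepartmentFilter implements FilterImplementor {
        // Registered against a pre-check-in filter event via the component's
        // .hda definition (assumed wiring); rejects check-ins with no department.
        public int doFilter(Workspace ws, DataBinder binder, ExecutionContext ctx)
                throws DataException, ServiceException {
            String dept = binder.getLocal("xDepartment"); // hypothetical custom field
            if (dept == null || dept.trim().length() == 0) {
                throw new ServiceException("xDepartment must be set on check-in.");
            }
            return CONTINUE; // let the rest of the filter chain run
        }
    }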

  • Best practices for Refreshing Unv Structure for SAP based universes

    This may sound basic, and I apologize for asking, but the refresh structure in SAP-based universes is somewhat different from normal universes.
    Here is what i'd like to do:
    - Hide all the L00 objects.
    - Rename all the L01 objects and move them to a new Class.
    - Change some of the detail (attribute) objects to dimension objects.
    - Hide the format and Unit for key figures.
    - Hide all of the classes/subclasses  that get automatically generated when a SAP based universe is created.
    I have noticed that when I do the above and refresh the universe, it assumes that all these objects have gone missing from the original classes and adds them back to the universe.
    I also want to make sure that if the SELECT of an object gets updated and the object is renamed, then it automatically picks up the change.
    Lastly, I have some reports which were built prior to this renaming. I want to make sure that the reports do not break.
    Thanks,
    Kashif

    Hi,
    This thread is really old.
    Yes, this was a common problem back in the earlier XI 3.x days; a lot of bugs in this area were eliminated by the time of XI 3.1 SP03 FP3.x (you don't quote your version).
    Actually, you need to be aware that a refresh structure is often not needed, and can corrupt the OLAP universe. Please check out Note 1278216 - What are the best practices for OLAP Universe Change Management when using SAP Integration Kit?
    in essence :
    Only use 'Refresh Structure' functionality if:
    - A new Object (Dimension/Characteristic) has been added to the BEx query (Rows/Columns/Free Characteristics)
    - A new Variable Restriction has been added to the Bex query Filters
    Do not use 'Refresh Structure' functionality after:
    - Having modified a STRUCTURE in BEx, i.e. 'Detail view of Formula' or 'Details of Selection', or changing the General Description of structure members.
    - Doing manual actions on objects/classes in the OLAP Universe, like: Move; Cut/Paste; Drag/Drop; Hide; Delete.
    (because these workflows can lead to corruption)
    regards,
    H

  • Best practice for moving portal solution using content db from UAT to PROD

    Hi,
     Would like to know: can we back up the database from the UAT env. and restore the same to PROD, if all of my functionality is working fine in the UAT env.?
    I have event receivers [web-level features], site collection-level features, custom web parts, custom permissions, saved site templates, custom discussion forums, etc.
    Assuming that I have my custom solution deployed on PROD, which will activate features for those web parts and my custom application page features.
    Are there any issues I can anticipate in the PROD env. if I perform this activity?
    or
    Is this approach not recommended by Microsoft? If yes, what's the best approach for deploying a portal solution to PROD?
    Should I create the web application, site collections, everything in PROD from scratch?
    Any links regarding this approach and the best practices / helpful info are appreciated.

    Thanks Trevor for the reply.
    So, I can go ahead and create the web applications and site collections, deploy my web parts, item event receivers, application pages and my timer jobs in UAT, take the backup of the same, and restore it in the PROD env.
    But I have a doubt here, as I have a few site pages created in my site template, and when I take the backup of this web application's content DB [I think I can take the backup of the web application content DB through PowerShell],
    will the site pages also be part of this backup?
    I had some experience in a previous version of SP, wherein I had a few site pages and a saved site template. I took the backup of the web application and restored it in another farm, and associated the restored content DB to the
    newly created web application in the targeted farm.
    But when I navigated to those restored site pages, it gave me a "resource not found / file not found" error.
    I had deployed the custom web parts as a custom WSP and added them to those site pages,
    and it failed to load those web parts' UI.
    I was not sure whether this happened because of the backup or the restore from the source SP farm to the targeted SP farm.

  • Best practice for implementing META tags for content items?

    Hello,
    The portal site I'm responsible for managing our content (www.sers.state.pa.us) runs on the following WebCenter products:
    WebCenter Interaction 10.3.0.1
    WebCenter Publisher 6.5
    WebCenter Studio 2.2 MP1
    Content Service 10gR3
    The agency I work for is one of many for the commonwealth of PA which use this product suite, and I'm encountering some confusion on how to apply META tags to the content items for our site, so we can have effective search results. According to the [W3C site's explanation of META tag standards|http://www.w3schools.com/tags/tag_meta.asp], the tags for description, keywords, etc. should be within the head region of the HTML document. However, with how the WebCenter suite's configuration is set up, the head section of the HTML is closed off by the end of the template code for a common header portlet. I was advised to add fields to our presentation and data entry templates for content, to add these meta fields; however, since they are then placed within the body section of the HTML, these tags fail to have any positive impact on the search results. Instead, for many of our content items, when searched for, the description in the search results only shows text that is displayed in the header and left navigation of our template, which come early in the body section of the HTML.
    Please advise on possible method(s) that would be best for implementing META tags, so our pages containing content come up in search results with this relevant data.
    Thanks in advance,
    Brian

    If I remember right, the index server will capture meta tags even if they are not in the <head> section. It is not well-formed HTML, but I think I remember that we created meta tags down in the body section and the index server still picked them up. You might try this and see if it still works; I believe it worked in 10gR3. Let me know your results.
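    In other words, something along these lines emitted by the presentation template, right after the header portlet has closed the head section (the tag values are placeholders to fill from your data entry template):

    <body>
      <!-- placeholder meta tags; reportedly still indexed even outside <head> -->
      <meta name="description" content="Short page summary for search results">
      <meta name="keywords" content="keyword1, keyword2, keyword3">
      ... rest of the page markup ...
    </body>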
