List Item Best Practice

I have created an unordered list.  Instead of one of the standard CSS list markers available, I just used a simple hyphen.  Given that it's text and it's part of the text, will that affect search results in Google or any other search engine?  Should I make the hyphen a picture and use that instead of the simple text?
Thanks for the input!

I agree with Joe, and think that a background image might be best for you:
ul {
    list-style: none;
}
li {
    background: url(dash.gif) no-repeat left center; /* no-repeat keeps the dash from tiling */
    padding-left: 10px;
}
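For context, a minimal sketch of the markup this CSS would apply to (dash.gif stands in for whatever small dash image you create; the HTML itself needs nothing special):
<ul>
  <li>First list item</li>
  <li>Second list item</li>
</ul>
With this approach the hyphen disappears from the page text entirely, so the question of how search engines treat it goes away; the dash becomes purely presentational.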
E. Michael Brandt
www.divahtml.com
www.divahtml.com/products/scripts_dreamweaver_extensions.php
Standards-compliant scripts and Dreamweaver Extensions
www.valleywebdesigns.com/vwd_Vdw.asp
JustSo PictureWindow
JustSo PhotoAlbum, et alia

Similar Messages

  • Is there a list of best practices for Azure Cloud Services?

    Hi all;
    I was talking with a Sql Server expert today and learned that Azure Sql Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting as
    opposed to when we go live.
    Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    We will be placing the cloud services in multiple datacenters and using traffic manager to point people to the right one. The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client.
    Mostly it will pass all requests & responses across from one to the other.
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    hi dave,
    >>Cloud Services are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Cloud Services? If so, what are they?
    For this issue, I have collected some blogs and documentation about best practices for Azure cloud services. You can review them, though I am not sure they cover exactly what you need.
    http://msdn.microsoft.com/en-us/library/azure/xx130451.aspx
    http://gauravmantri.com/2013/01/11/some-best-practices-for-building-windows-azure-cloud-applications/
    http://www.hanselman.com/blog/CloudPowerHowToScaleAzureWebsitesGloballyWithTrafficManager.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj717232.aspx
    http://azure.microsoft.com/en-us/documentation/articles/best-practices-performance/
    >>The cloud service will sit between an IMAP client & server, pretending to be the mail client to the server, and the server to the client. Mostly it will pass all requests & responses across from one to the other.
    For your scenario, if you'd like to communicate between instances, I recommend you refer to this document (http://msdn.microsoft.com/en-us/library/azure/hh180158.aspx). And generally, if you want to connect a client to a server on Azure, Service Bus is a good choice (http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-using-service-bus-queues/).
    If I misunderstood, please let me know.
    Regards,
    Will

  • Is there a list of best practices for Azure websites?

    Hi all;
    I was talking with a Sql Server expert today and learned that Azure Sql Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting as
    opposed to when we go live.
    Websites are not Sql Server (obviously) but that led to the question - Is there a list of best practices for Azure Websites? If so, what are they?
    We will be placing the website in multiple datacenters and using traffic manager to point people to the right one. The website will run as a REST server using Web API 2, mostly for license checks from our app running on corporate Exchange servers. And a small part will be for JavaScript-based web pages used for account CRUD.
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    Sorry, I was not sure whether you were using a web server or what.
    One other idea out of the slow cooker is to use a dedicated web server with SQL Server running beside IIS.
    If you are using the standard web sites, then the SQL option is about as slow as FTP uploads - very slow.
    I have considered time of day too, and it seems Azure is less loaded early in the AM.
    Corsair Carbide 300R with TX850V2 / Asus M5A99FX PRO R2.0 CFX/SLI / AMD Phenom II 965 C3 Black Edition @ 4.0 GHz / G.SKILL RipjawsX DDR3-2133 8 GB / EVGA GTX 660 Ti FTW Signature 2 (GK104 Kepler) / Asus PA238QR IPS LED HDMI DP 1080p / ST2000DM001 & Windows 8.1 Professional x64 / Microsoft Wireless Desktop 2000 & Wacom Bamboo CHT470M
    Place your rig specifics into your signature like I have, makes it 100x easier to understand!
    Hardcore Games (http://hardcore-games.azurewebsites.net/) - Legendary is the Only Way to Play!

  • Is there a list of best practices for Azure Sql Server?

    I was talking with a Sql Server expert today and learned that Azure Sql Server can take up to a minute to respond to a query that normally takes a fraction of a second. This is one of those things where it's really valuable to learn it when architecting as
    opposed to when we go live.
    Is there a list of best practices for Azure Sql Server? If so, what are they?
    We will be calling the database from multiple instances, both Azure websites and Azure cloud services. The calls will be about 90% selects, 9% updates, and 1% inserts.
    thanks - dave
    What we did for the last 6 months -
    Made the world's coolest reporting & docgen system even more amazing

    Hi,
    Here is some documentation on Azure SQL Database to get you started.
    http://msdn.microsoft.com/en-us/library/azure/ee336279.aspx
    http://msdn.microsoft.com/en-us/library/azure/ff394102.aspx
    http://msdn.microsoft.com/en-us/library/azure/jj156164.aspx
    http://msdn.microsoft.com/en-us/library/azure/ee336282.aspx
    http://msdn.microsoft.com/en-us/library/azure/ee336256.aspx
    Regards,
    Mekh.

  • Templates & Library items - best practice

    I've been having a bit of a heated discussion about how templates and library items should be used on medium to large websites.
    I'm of the opinion that you should have one template (or as few as possible) and manage any differences with Library items.
    My colleague's argument is that you should have a different template for each section, for example one for the news and one for the features section (for magazine sites), so that if you have a section-wide banner you change just that template. The way I'm used to working is one template with multiple library items in the editable regions of the template. My reasoning for this is that the large amount of HTML used in each template (like rollovers) can become inconsistent when there's more than one template: if you change something site-wide you have to remember to change all of the templates, and if you do that with a simple copy-paste, DW can still produce corrupt code when you use multiple templates.
    So basically I was wondering if there was a standard or best-practice way of doing this.
    Thanks

    Your colleague is mistaken IN MY OPINION. I think you are on the right track *unless* your section pages are *SO* different in layout that you cannot conveniently include all of the alterations/permutations in a single template page. Further, my current workflow is to rely heavily on server-side includes within template pages, as described here.
    Templates do make your life much easier. I use the following scheme....
    First, I mentally separate the page layout into three sections:
    1. Stuff that will not change for the life of the site (i.e., the basic structural elements)
    2. Stuff that *could* change from time to time (e.g., navigation elements, burst advertisements, section-specific navigation, etc.)
    3. Stuff that *will* change from one page to the next
    Then I create a template containing all class 1 elements. Next I create server-side include files containing all class 2 elements and place them on the template as needed. Note - some of the class 2 elements may be "section-specific elements", and their placement on the template will be subject to the next item. Finally, I insert editable regions to cover the class 3 items, INCLUDING the section-specific navigation.
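    To make the class 2 step concrete, here is a minimal sketch of how one of those server-side includes might sit in a template (the include path and file name are placeholders, not taken from the workflow described above):
    <body>
      <!-- Class 1: structural markup lives directly in the template -->
      <div id="header">
        <!-- Class 2: shared navigation maintained in a single include file -->
        <!--#include virtual="/includes/main-nav.html" -->
      </div>
      <!-- Class 3: page-specific content goes into an editable region here -->
    </body>
    Editing /includes/main-nav.html then updates every page built from the template without touching the template itself, which is exactly the point of treating class 2 elements this way.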
    This allows me to just cookie-cut the rest of the site. I estimate that even for fairly large sites, about 80% of my work goes into planning and creating this template file.
    For what it's worth, I never use Library items....
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.dreamweavermx-templates.com - Template Triage!
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    http://www.macromedia.com/support/search/ - Macromedia (MM) Technotes
    ==================
    "tefnut" <[email protected]> wrote in
    message
    news:[email protected]...
    > I've been having a bit of a heated discussion about how
    templates and
    > library
    > items should be used on medium to large websites.
    > I'm of the opinion that you should have 1 template (or
    as few as possible)
    > and
    > any differences be managed with Library items.
    > My colleague's argument is you should have a different
    template for each
    > section, so for example 1 for the news, 1 for the
    features section (for
    > magazine sites) so that if you have a banner section
    wide you change that
    > template, as opposed to the way I'm used to which would
    be 1 template with
    > multiple library items in the editable regions of the
    template, my
    > reasoning
    > for this is the large amount of html used for each
    template (like
    > rollovers)
    > can become inconsistent when there's more than one
    template. Like if you
    > change
    > something site wide you have to remember to change all
    of the templates,
    > if you
    > do this with a simple copy past DW can still make
    corrupt code if you use
    > multiple templates.
    >
    > So basically I was wondering if there was a standard or
    best practice way
    > of
    > doing this.
    >
    > Thanks
    >

  • "Source used" item best practice

    Hello,
    How can I configure an item with source used "Always, replacing any..." and source type "Database Column" and STILL not lose the entered value every time the page comes back (items are by default repopulated with DB column values, which makes me lose all entered values)?
    The reason the page comes back before a submit is that I have some "Select List with Redirect" items which dynamically populate other select lists (dynamic LOVs).
    How can I solve or work around this issue? Remember, I want all entered values NOT to be reset by DB column values.
    Thanks for your help!
    Sam

    Sam,
    I've begun to use AJAX for this type of select list. Solves the caching issues nicely and doesn't cause an entire page refresh as the page isn't submitted until all the data is entered. Carl Backstrom has an example in his samples application - http://apex.oracle.com/pls/otn/f?p=11933:37
    Make that Denes Kubicek's - http://deneskubicek.blogspot.com/2008/04/cascading-select-list-in-tabular-form.html
    Dave
    Message was edited by:
    dsteinich

  • List immutability best practice

    Hello,
    I have a singleton bean in my app that basically maintains Lists of all my codes tables in the system. Other objects as well as JSPs can call getter methods on this object to get the list of codes. However, I obviously want to make this object immutable so that a calling object cannot modify a list it gets and thereby affect the entire application and any other object using said list.
    What is the best way to implement immutability? The way I have it now is to instantiate a new object in the getter like this:
    public List<SelectItem> getJobStatusList() {
        return new ArrayList<SelectItem>(this.jobStatusList);
    }
    However, a friend of mine suggested this instead:
    public List<SelectItem> getJobStatusList() {
        return Collections.unmodifiableList(this.jobStatusList);
    }
    Which is the better approach, or are they interchangeable? In the second approach, it leaves it up to the calling object to instantiate a new list for modification, whereas the first one does this automatically. I'm not a fan of the UnsupportedOperationException being thrown from the list returned by the getter, but if that is the preferred approach I will deal with it through javadoc.
    Any suggestions or is this a matter of preference?

    Both options could be used, but each has downsides.
    In the first case you are taking a mutable copy. This is not the same as an immutable copy, but it does mean that any changes made to the original don't change your copy.
    In the second case you are returning an immutable facade. This means the collection cannot be changed through it; however, if the original is changed, your facade will see those changes.
    The safest option (which may be overkill for your needs) is to do both:
    return Collections.unmodifiableList(new ArrayList<SelectItem>(this.jobStatusList));
    This returns an independent copy which is immutable.

  • Best Practices for Setting up a Windows 2012 R2 STD Domain Controller in a Remote Site

    So I'm looking for an article or writeup similar to the "Adding Domain Controllers in Remote Sites" TechNet article but for Windows Server 2012 STD R2.  Here is my scenario:
    1.  I want to set up the domain controller at Site A where the primary domain controller is located.  The primary domain controller is Windows Server 2008 R2.
    2.  Once the DC is set up, I plan on leaving it on our network for a few days before shipping it to remote Site B for installation.
    Other key items:
    1.  The remote Site B will have a different IP range than Site A but will be connected to Site A via a single VPN tunnel.  All the DCs that replicate with each other are on the same domain. 
    2.  The 2012 DC that I setup for Site B (same domain in same forest) will be a DHCP, DNS, and WSUS server all replicating to the primary DC at Site A
    Questions:
    1.  What items can I set up while it's at Site A without affecting or conflicting with the existing network and domain controller?  Can I set up a scope once the DHCP role is added?
    2.  All of our DCs replicate through Sites and Services; do I have to manually add this to our primary DC for the new DC going to remote Site B?  Or does this happen automatically when I promote the DC?
    All in all, I'm just looking for a list of best practices for 2012 or a step-by-step guide.  Any help would be appreciated.

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • The best practice for creating reports and dashboard

    Hello guys
    I am trying to put together a list of best practices on how to create reports and dashboards using the OBIEE presentation service. I know a lot of those dos and don'ts are just corporate words that don't apply consistently in real-world environments, but I'd still like to know whether Oracle has any officially defined best practices.
    The only best practice I can think of when it comes to building reports and dashboards is:
    Each subject area should contain only one star schema that holds data for a specific area of business information.
    Is there anything else?
    Please advise.
    Thanks

    Read this book to understand what a dashboard is and what it should do and look like to be used by the end users. Very enlightening:
    Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few. (There are a couple of other books by Stephen and although I haven't read them yet, I anticipate them to be equally helpful.)
    This book was also helpful to me:
    http://www.amazon.com/Performance-Dashboards-Measuring-Monitoring-Managing/dp/0471724173
    I also found this book helpful in Best Practices...
    http://www.biconsultinggroup.com/knowledgebase.asp?CategoryID=337

  • Highest Quality Live Video Streaming | Best Practice Recommendations?

    Hi,
    When using FlashCollab Server, how can we achieve the best quality when publishing a live stream?
    Can you provide a bullet list for best practice recommendations?
    Our requirement is publishing a single presenter to many viewers (generally around 5 to 50 viewers).
    Also, can it make any difference if the publisher is running Flash Player 10 vs Flash Player 9?
              Thanks,
              g

    Hi Greg,
    For achieving best quality
    a) You should use an RTMFP connection instead of RTMPS. RTMFP has lower latency.
    b) You should use the Player 10 swc.
    c) If bandwidth is not a restriction for you, you can use the highest quality values. The WebcamPublisher class has a property for setting quality.
    d) You can use a lower keyframeInterval value, which will send full video frames more often rather than relying on inter-frame compression.
    e) You should use the Speex codec. The Speex codec is again provided with the Player 10 swc.
    These are some suggestions that can improve your quality depending on your requirements.
    Thanks
    Hironmay Basu

  • IVR (Inter-VSAN Routing) - Best practices questions

    Hi there,
    We have a situation where we will have multiple customers hosted on a 9513 and sharing a single storage array.
    We want to keep them logically separated in their own VSANs, but of course the storage array will need to be zoned to all the customers' hosts.
    Now IVR should be the thing to use, but I'm getting resistance from the local team (screams of "Nooooo!!! They're EVILLLLL!!!") ... so I want to find out whether there are some best practices around IVR use.
    Should they be used only for light-duty stuff? (Though at present we use them with tape backup, which isn't exactly "light".)
    Do they impact performance to a measurable degree?
    Are they stable?
    What can go wrong with them? And does it happen often?
    Thanks!

    IVR does not impact application I/O performance because all the vsan rewrite and fcid rewrite actions are done in hardware asics. The IVR process on supervisor is responsible for managing the configuration and ensuring the rewrite tables are programmed in the linecards. The process is stable.
    Most of the issues I have seen are in environments with multiple IVR enabled MDS switches ISL'd together or an MDS IVR enabled switch connected to a McData/Brocade in an interop mode.
    Like any feature there have been bugs and it pays to check the SAN-OS release notes when planning installs. For example, a config change on one switch does not get properly pushed to another IVR switch or a forwarding table for an ISL interface does not get correctly programmed. There have also been a fair share of user misconfigurations which could have been avoided if Cisco Fabric Services (CFS) was enabled for IVR. This is done with the 'ivr distribute' command. Without this it is very difficult in large topologies of multiple IVR switches to ensure they have a consistent IVR config. In other cases there have been problems from a mix of IVR enabled switches running different releases of SAN-OS, e.g. mixing 3.0 with 3.2.
    Best practice is to have dual physical fabrics, upgrade one fabric at a time ensuring all IVR switches in a fabric run same SAN-OS release.
    A single IVR switch is much easier to implement. The MDS Configuration Guide has a list of best practices for IVR and one of those is to use the NAT option. Personally I would avoid the NAT option where you can, as NAT makes any troubleshooting harder when trying to figure out the domain ID translations. You would also minimize the risk of hitting some NAT-related bugs, though you could avoid most of these by checking the workarounds documented in the Release Notes. And with NAT you also need to configure persistent virtual domains and FCIDs to cater for AIX and HP-UX systems that cannot handle the FCID of the target changing whenever the exported virtual domain ID changes.
    To give NAT credit, each vsan is represented by a single virtual domain. In regular non-NAT mode, each switch in a vsan is represented by a virtual domain, meaning you eat up more virtual domain IDs. So in large topologies with many domain IDs there are scalability advantages to using NAT, and the IVR updates between switches are more efficient with fewer virtual domain IDs to advertise. Of course, NAT must be used if merging physical fabrics with the same domain ID and you cannot afford the downtime to change one of the switch domain IDs.
    However if it is just a single IVR switch I would avoid NAT. To do this all your domain IDs should be statically defined and there must be no overlapping domain IDs between IVR'd vsans. If it is a brand new install you can easily achieve this by specifying unique allowed domain ID ranges per vsan.
    For example, each customer can have their own vsan with say 10 domain IDs and the storage can be in vsan 2. You will only use one domain ID per vsan on day 1. Allowing 10 domain ids per vsan means you can add up to 9 other switches per vsan should you need to in the future. There is a maximum 239 domains per vsan so you could have up to 23 customers on your 9513 working with a range of 10 domain IDs per vsan.
    fcdomain domain 2 static vsan 2
    fcdomain domain 10 static vsan 10
    fcdomain domain 20 static vsan 20
    fcdomain domain 30 static vsan 30
    ..and so on..
    fcdomain domain 230 static vsan 230
    fcdomain allowed 01-09 vsan 2
    fcdomain allowed 10-19 vsan 10
    fcdomain allowed 20-29 vsan 20
    fcdomain allowed 30-39 vsan 30
    ..and so on..
    fcdomain allowed 230-239 vsan 230
    With or without IVR you should still run dual fabrics (e.g. 95xx in each fabric) and host based multipathing for redundancy.
    And don't forget IVR will require an Enterprise license. I have even seen a large outage because the customer forgot to install the license before the 120 day grace period expired.

  • Best Practice: Dynamically changing Item-Level permissions?

    Hi all,
    Can you share your opinion on the best practice for Dynamically changing item permissions?
    For example, given this scenario:
    The Item Creator can create an initial item.
    After the creator creates it, the item becomes read-only for him. Other users can create items, but they can only see their own entries (Created By).
    At any point in time, other users can be given Read access (or any other access) to a specific item by an Administrator.
    Edit permission on the item is then given to a Reviewer and an Approver. Reviewers can only edit, and Approvers can only approve.
    After the item has been reviewed, the item becomes read-only to everyone.
    I read that there is only a certain number of unique permissions a List / Library can have before performance issues start to set in. Given the requirements above, it looks like item-level permission is unavoidable.
    Do you have certain ideas how best to go with this?
    Thank you!

    Hi,
    According to your post, my understanding is that you want to change item-level permissions.
    There is no out-of-the-box way to accomplish this with SharePoint.
    You can create a custom permission level using Visual Studio to allow users to add & view items, but not edit them.
    Then create a group with that custom permission level. The users in this group would have permission to create & add items, but they could not edit them.
    On CodePlex there is a set of custom workflow activities, but by default it only has four permission levels:
    Full Control, Design, Contribute and Read.
    You should also customize some permission levels for your scenario.
    What's more, when using SharePoint Designer 2013, you should use the 2010 platform to create the workflow that uses these activities:
    https://spdactivities.codeplex.com/wikipage?title=Grant%20Permission%20on%20Item
    Thanks & Regards,
    Jason
    Jason Guo
    TechNet Community Support

  • Best Practice - Long Daily Record List

    Hi,
    I'm new to doing this type of thing in Numbers and I'm struggling with how best to manage a growing journal list. As in:
    - I have a table where I enter a new row every morning. In the row I store values of things like calories, sleep, performance, time (it's mainly to track my cycling) from the day before.
    However, I'm starting to have problems:
    - Now that the table is three months old, it's starting to get really long. What's the best practice here? Sort descending and add a new row at the top every morning? Or is there a way to create a "view" that, for example, only shows the last 14 days?
    - Also, averages that I calculate from a column have become all but meaningless because a single day has so little weight. So what's the best practice here? Is there a way to calculate averages based on, say, the last 14 days only? If I sort descending as above, I guess I could just average the first 14 rows of each column. But I imagine there's a more elegant way to do this that involves date ranges up to the current date (at least, that's how I'd tackle it in FileMaker Pro).
    I suppose this begs the question: is Numbers the right tool? I want to keep this record of my daily cycling stats going forward, hopefully for many years. It would be great to stay with Numbers as it's a pleasure to use and the charts are beautiful. But maybe a database tool is the way to go? Any thoughts? All I really need are a few charts and a few averages. A database does seem like overkill to me.
    Many thanks in advance,
    Pat

    Hi Badunit,
    Thanks for the pointer on extracting the last x values into a new table!  Took me a while to reason it through.
    Slightly more convenient than:
      =OFFSET(Table 1::A$1,ROWS(Table 1::A)−16+ROW(),0)
    May be:
      =INDEX(Data::$A:$B,COUNT(Data::$A)−14+ROW())
    That one doesn't require remembering how to adjust the 14 when only 14 items are wanted.
    It seems COUNT($A) only counts the body rows while ROWS() includes Header and Footer Rows too.
    SG

  • SAP Best Practice for Document Type./Item category/Acc assignment cat.

    What is the best practice for the document type and item category?
    I want to use NB with item categories B and K (blanket PO), D (service) and T (text).
    Does SAP recommend using FO only for blanket purchase orders?
    We want to use service contracts (with / without service entry sheet) for all our services.
    We want to buy assets for our office equipment.
    Which is the best one to use, NB or FO?
    Please give me any OSS notes or references for this.
    Thanks
    Nick

    Thank you very much for your response. 
    I hope I can provide some clarity on how the accounting needs to be handle per FERC  Regulations.  The G/L balance on the utility that is selling the assets will be in the following accounts (standard accounts across all FERC Regulated Utilities):
    101 - Acquisition Value for the assets
    108 - Accumulated Depreciation Value for the assets
    For example, there is a debit of $60,000,000 in FERC Account 101 and a credit of $30,000,000 in FERC Account 108.  When the purchase occurs, the net book value for the asset will be on our G/L in FERC Account 102.  Once we have FERC approval to acquire the plant assets, we will need to enter the Acquisition Value and associated Accumulated Depreciation onto our G/L to FERC Account 101 and FERC Account 108 respectively, with an offset to FERC Account 102.
    The method that I came up with is to purchase the NBV of the assets to a clearing account.  I then set up account assignments that will track the Acquisition Value and respective Accumulated Depreciation for each asset that is being purchased.  I load the respective asset values using t-code AS91 and then make an entry to the 2 respective accounts with the offset against the clearing account using t-code OASV.  Once my company receives FERC approval, I will transfer the asset to new assets that has the account assignments for FERC Account 101 and FERC Account 108 using t-code ABUMN or FB01.

  • Best Practice Regarding Maintaining Business Views/List of Values

    Hello all,
    I'm still in the learning process of using BOXI to run our Crystal Reports.  I was never familiar with the BO environment before, but I have recently learned that for every dynamic parameter we create for a report, the Business View/Data Connectors/LOV are created in the Enterprise Repository the moment the Crystal Report is uploaded.
    All of our reports are authored from a SQL Command statement, and various reports will often use the same field name from the database.  For example, we have several reports that use the field name "LOCATION", which exists on a good number of tables in the database.
    When looking at the Repository, I've noticed there are several variations of LOCATION, each of which I'm assuming belongs to one specific report.  Having said that, I can see it starting to become a nightmare to figure out which variation of LOCATION belongs to which report.  Sooner or later the Repository will need to be maintained a bit more cleanly, and with the rate we author reports, I foresee a huge amount of headache down the road.
    With that being said, what's the best practice in a nutshell when trying to maintain these repository items?  Is it done indirectly on the Crystal Report authoring side, where you name your parameter field so it is identifiable to a specific report?  Or is it done directly on the Repository side?
    Thank you.

    Eric, you'll get a faster qualified response if you post to the Business Objects Enterprise Administration forum, as that forum is monitored by qualified support for BOE.

Maybe you are looking for

  • Windows Phone 8.1 Update 2

    Microsoft is on a roll.  According to WPCentral it looks like members of the developer preview program will see Update 2 released to their devices around 10/8/2014. List of unconfirmed features: The ability to create groups of applications in the app

  • Time Machine disk and Airport Extreme issues

    I have a portable USB drive I use as my Time Machine disk. It works great, but only when I manually mount the disk. I recently got an Airport Extreme, which allows me to mount a disk off of it. So, I figure I'll mount my USB Time Machine disk there s

  • CD drive stopped working on my Satellite L

    Hi MY CD drive stopped working, but then it recognised a disc which I had inserted, but then stopped working again. I have deleted upper and low filters from device manager. I have also downloaded the dvd ram driver from toshiba and my cd drive then

  • Metadata set, how did I do it?

    It's two years since I created a Metadata Set called Feb 07 and now I can't remember how I did it. Library module, Metadata panel, just to the left of the title word "Metadata" is the small panel containing Default, All, EXIF and so on and there is m

  • Updates for CS5 bridge and raw

    Been using CS5 and have tried to add updates to bridge and/or raw and each time it says to close bridge, to proceed. I closed bridge and still un able to proceed to add updates. What an I missing? Thanks for any help, Rick