WebHelp Distribution - Best Practices

Good Morning (or Afternoon as the case may be)!
This question is less about RoboHelp itself and more about documentation in general.
Our CEO recently "came up with the idea" that we should begin hosting our application help on our website instead of distributing it with each release version of our software. Doing so will allow us the flexibility of making corrections and updating specific topics "on-the-fly" when necessary.
However, in discussions, we realized that we may need to parameterize the path to the hosted help based on the release version of the software that a client is using. So, for example, a client using release 3.0 will need a path to "help3", whereas a client using release 3.1 will need a path to "help3_1".
Our reasoning is that since we add new features/programs with every release, we need to keep separate help libraries so as to not confuse clients who have not upgraded to the latest release. Does that make sense?
Does our basic premise make sense?  Do we need to maintain separate libraries, or is there another way of handling this?
For anyone that centrally hosts their Help System, how do you handle library management for multiple "versions"?
Our current help is merged WebHelp, which works great (thanks Peter G!), and if it matters I am using RoboHelp 8 HTML.
Thanks for any insight that may be offered...
jim

You will need a version of the help for each version of the software, each with its own URL.
www.yoursite.com/help1
www.yoursite.com/help2
and so on.
If you were never going to change the Help 1 version, you could carry on updating your project, as you would never need to regenerate Help 1 again. Obviously you would keep a backup of it.
If you are going to make changes to Help 1 as well as Help 2, when you are about to start on Help 2 you take a copy of the project. One copy is just for Help 1 changes while the other is where you rewrite to suit Help 2.
You could maintain just one project and use conditional build tags to produce different outputs but I think that would get very messy and that the two project approach is easier.
See www.grainge.org for RoboHelp and Authoring tips
@petergrainge
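To make the per-version URL scheme above concrete, here is a minimal sketch (Java, purely for illustration) of how the application could derive the hosted help path from its own release number. The base URL and the "help3"/"help3_1" folder naming come from the examples earlier in the thread; the class and method names are invented for the sketch.

    // Minimal sketch: derive the hosted help URL from the running release version.
    // The folder convention ("3.0" -> help3, "3.1" -> help3_1) mirrors the examples
    // in the question; www.yoursite.com is the placeholder used in the reply above.
    public final class HelpUrlResolver {

        private static final String HELP_BASE_URL = "https://www.yoursite.com/";

        /** Maps a release version such as "3.1" to a help folder such as "help3_1". */
        public static String helpUrlFor(String releaseVersion) {
            String version = releaseVersion.endsWith(".0")
                    ? releaseVersion.substring(0, releaseVersion.length() - 2)
                    : releaseVersion;
            return HELP_BASE_URL + "help" + version.replace('.', '_') + "/";
        }

        public static void main(String[] args) {
            System.out.println(helpUrlFor("3.0")); // https://www.yoursite.com/help3/
            System.out.println(helpUrlFor("3.1")); // https://www.yoursite.com/help3_1/
        }
    }

The point is simply that the version-to-folder mapping lives in one place in the application, so publishing a new library later (say "help3_2") only means deploying a new folder and releasing a build that knows its own version.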

Similar Messages

  • Best Practices for Connecting to WebHelp via an application?

    Greetings,
    My first post on these forums, so I apologize if this has already been covered (I've done some limited searching w/o success). I'm developing a .Net application which is accessing my organization's RoboHelp-generated webhelp. My organization's RoboHelp documentation team is still new with the software, so it's been up to me to chart the course for establishing the workflow for connecting to the help from the application. I've read up on Peter Grainge's 'calling webhelp' section on his site, but I'm still a bit unclear about what the best-practices approach for connecting to webhelp might be.
    To date, my org. has been delayed in letting me know their TopicIDs or MapIDs for their various documented topics. However, I have been able to acquire the relative paths to those topics (I achieved this by manually browsing their online help and extracting the paths). And I've been able to create the links by constructing a URL (using the syntax "<root URL>?#<relative URI path>", alternating with "<root URL>??#<relative URI path>"). It strikes me, however, that this approach is somewhat of a hack, since RoboHelp provides other approaches to linking to the documentation via TopicID and MapID.
    What is the recommended/best-practices approach here? Are they all equally valid, or are there pitfalls I'm missing? I'm inclined to use the URI methodology that I've established above since it works for my needs so far, but I'm worried that I'm not seeing the forest for the trees...
    Regards,
    Brett
    contractor to the USGS
    Lakewood, CO
    PS: we're using RoboHelp 9.0

    I've been giving this some thought over the weekend and this is the best answer I've come up with from a developer's perspective:
    (1) Connecting via URL is convenient if you have an established naming convention that works for everyone (as Peter mentioned in his reply above)
    (2) Connecting via URL has the disadvantage that changes to the file names and/or folder structure by the author will break connectivity
    (3) Connecting via TopicID/MapID has the advantage that if there is no naming convention or if it's fluid or under construction, the author can maintain that ID after making changes to his/her file or folder structure and still maintain the application connectivity.  Another approach to solving this problem if you're working with URLs would be to set up a web service that would match file addresses to some identifier utilized by the developer (basically a TopicID/MapID coming from the other direction).
    (4) Connecting via TopicID has an aesthetic appeal in the code since it's easy to provide a more English-readable identifier. As a .Net developer, I find it easy and convenient to construct an enum that matches my TopicIDs and to use that enum to construct my identifier when it comes time to make the documentation call (a minimal sketch of this pattern appears at the end of this reply).
    (5) Connecting via URL is more convenient for the author, since he/she doesn't have to worry about maintaining IDs
    (6) Connecting via TopicIDs/MapIDs forces the author to maintain those IDs and allows the documentation to be more easily used in the future by other applications built by developers who might have their own preference as to how they make their connection.
    Hope that helps for posterity.  I'd be interested if anyone else had thoughts to add.
    -Brett
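    To illustrate points (3) and (4) above, here is a small sketch of the enum-plus-URL idea. The original poster works in .NET; Java is used here purely for illustration, the topic names and relative paths are hypothetical, and the "<root URL>?#<relative URI path>" syntax is the one described earlier in this thread.

    // Sketch only: hypothetical topics mapped to relative topic paths, combined
    // with a root URL using the "<root URL>?#<relative URI path>" syntax above.
    public enum HelpTopic {
        GETTING_STARTED("getting_started/overview.htm"),  // hypothetical path
        USER_ACCOUNTS("admin/user_accounts.htm");         // hypothetical path

        private final String relativePath;

        HelpTopic(String relativePath) {
            this.relativePath = relativePath;
        }

        /** Builds the full help URL for this topic. */
        public String urlUnder(String rootUrl) {
            return rootUrl + "?#" + relativePath;
        }
    }

    Calling code would then use something like HelpTopic.USER_ACCOUNTS.urlUnder("https://help.example.org/webhelp/index.htm"), and if the authors later settle on TopicIDs/MapIDs, only the values inside the enum need to change.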

  • Best practice for Active Directory User Templates regarding Distribution Lists

    Hello All
    I am looking to implement Active Directory user templates for each department in the company to make the process of creating user accounts for new employees easier. Currently, when a user is created, an existing user's Active Directory account is copied, but this has led to problems with new employees being added to groups which they should not be a part of.
    I have attempted to implement this in the past but ran into an issue regarding distribution lists. I would like to set up template users with all the group memberships that are needed for the department, including distribution lists. Previously I set this up but received complaints from users who sent e-mail to distribution lists that the template accounts were members of.
    When sending an e-mail to a distribution list that included a template user as a member, users received an error because the template account does not have an e-mail address.
    What is the best practice regarding template user accounts as it pertains to distribution lists? It seems like I will have to create a mailbox for each template user, but I can't help but feel there is a better way to avoid this problem. If a mailbox is created for each template user, it will prevent the error messages users were receiving, but messages will simply build up in these mailboxes. I could set a rule for each one that deletes messages, but again I feel like there is a better way which I haven't thought of.
    Has anyone come up with a better method of doing this?
    Thank you

    You can just add an arbitrary e-mail address (not a mailbox) to all your templates, and that should solve the problem with errors when sending e-mail to distribution lists.
    If you want to further simplify your user creation process, you can have a look at Adaxes (note that it's a third-party app). If you want to use templates, it gives you a slightly better way to do that (http://www.adaxes.com/tutorials_WebInterfaceCustomization_AllowUsingTemplatesForUserCreation.htm)
    and it can also automatically perform tasks such as mailbox creation for newly created users (http://www.adaxes.com/tutorials_AutomatingDailyTasks_AutomateExchangeMailboxesCreationForNewUsers.htm).
    Alternatively, you can abandon templates altogether and use customizable condition-based rules to automatically perform all the needed tasks on user creation, such as OU allocation, group membership assignment, mailbox creation, home folder creation, etc., based on the factors you predefine for them.

  • Best practice for partnership distributions

    I am looking for some assistance on best practice.
    We currently have no process for partnership distributions within HFM. For our partnerships we have an equity pickup rule in HFM, and our source accounts are set up to be at a cost basis.
    The issue arises when we attempt to record the distribution from a 50%-owned sub into its parent. Currently we are decreasing the parent's investment in the partnership, but this is becoming messy.
    Does anyone have any recommendations on how this has been handled elsewhere?

    Hello,
    I'd ask in the Windows forum on Microsoft Community.
    Karl

  • TestStand best practices for distribution

    We're looking for best practices for distributing TestStand
    systems.  I've found the TestStand
    Style Guide but it's a little sparse on how to set up the
    distributed systems.  We're looking for guidelines on where to put
    configuration data, where to put sequence files, how to manage users,
    and similar. 
    We'll be distributing systems to various contract manufacturers in
    China as well as using the systems in multiple locations in-house.
    What have you done with distributions and what problems have you seen?
    Right now we're planning to separate the Deployment Engine from our sequence files and put all our configuration into our distribution kit for the Deployment Engine.

    The TestStand Reference Manual provides good information on system distribution. Chapter 14, "Deploying TestStand Systems", covers the necessary information for distributing your TestStand application. I do have a few suggestions and caveats:
    (1) Make sure you use a workspace when distributing your files. A workspace makes it easy to package all of your files and dependencies. Moreover, the distribution wizard provides a feature that displays all the files that will be included in the installation package in an easy-to-use tree view.
    (2) TestStand currently does not look for embedded DLL dependencies. So, if your code is calling a DLL module that itself calls another DLL, be sure to include that embedded dependency DLL in your workspace.
    (3) StationGlobals and custom data types are usually missed in installations. Be sure that you include your StationGlobals.ini file and MyTypes.ini file from the Cfg directory in your installation workspace.
    If you have more specific questions, please feel free to post them here!
    Good Luck!
    Tyler Tigue
    Applications Engineer
    National Instruments

  • Best Practice for the Service Distribution on multiple servers

    Hi,
    Could you please suggest best practices for the above?
    Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.)
    Capacity: we have 12 servers, excluding the SQL Server.
    Please do not just refer me to a URL; suggest something based on the requirements.
    Thanks 
    srabon

    How about a link to the MS guidance!
    http://go.microsoft.com/fwlink/p/?LinkId=286957

  • Best practices for making the end result web help printable

    Hi all, using TCS3 Win 7 64 bit.  All patched and up to date.
    I was wondering what the best practices are for the following scenario:
    I am authoring in Frame, link by reference into RH.
    I use Frame to generate PDFs and RH to generate webhelp.
    I have tons of conditional text which ultimately produce four separate versions of PDFs as well as online help - I handle these codes in FM and pull them into RH.
    I use a css on all pages of my RH to make it 'look' right.
    We now need to add the ability for end users to print the webhelp - outside of just CTRL+P, because a) that cuts off the larger images and b) it doesn't show the header, footer, logo, date, etc. (stuff that is in the master pages in FM).
    My thought is doing the following:
    Adding four sentences (one for each condition) in the FM book on the first page. Each one would be coded for audience A, B, C, or D (each of which require separate PDFs) as well as coded with ONLINE so that they don't show up in my printed PDFs that I generate out of Frame. Once the PDFs are generated, I would add a hyperlink in RH (manually) to each sentence and link the associated PDF (this seems to add the PDF file to the baggage files in RH). Then when I generate my RH webhelp, it would show the link, with the PDF, correctly based on the condition of the user looking at the help.
    My questions are as follows:
    1- This seems more complicated than it needs to be. Is it?
    2- I would have to manually update every single hyperlink each time I update my FM book, because I am single sourcing out of Frame and I am unable (as far as I can tell) to link a PDF within the frame doc. I update the entire book (over 1500 pages) once every 6 weeks so while this wouldn't be a common occurrence it will happen regularly, and it would be manual (as far as I can tell)?
    3- Eventually, I would have countless PDFs inside RH. I assume this will eventually impact performance. So this also doesn't seem ideal?
    If anyone has thoughts/suggestions on a simpler or better way to do this, I'd certainly appreciate it. I have watched the Adobe TV tutorial on adding a master page, but that seems to remove the ability to use a css across all my topics, and it also requires the manual addition of a hyperlink to the PDF file, which is what I am proposing above anyway (so I'm not sure of the benefit).
    Thanks in advance,
    Adriana

    Anything other than CTRL + P is going to create a lot of work so perhaps I can comment on what you see as drawbacks to that.
    a)that cuts off the larger images and b)it doesn't show header, footer,
    logo, date, etc. (stuff that is in the master pages in FM).
    Larger images.
    I simply make a point of keeping my image sizes down to a size that works. It's not a problem for me but that doesn't mean it will work for you. Here all I am doing is suggesting you review how big a problem that would be.
    Master Page Details
    I have to preface this with the statement that I don't work with FM. The details you refer to print when they are in RoboHelp master pages. Perhaps one of the FM users here can comment on how to get FM master pages to come through.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Best practices for network design on WLC 2504 and 5508

    Dear all:
    I'm looking for some recommendations on WLC 2504 and 5508 about the following:
    Maximum amount of AP per port
    The scenario when to use all ports in both WLC
    Maximum number of clients(users) per port
    Bandwidth consumption of management vs. data, in order to assign one port for management
    I've just found this:
    Cisco 5508 controllers have eight Gigabit Ethernet distribution system ports, through which the controller can manage multiple access points. The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller. Cisco 5508 controllers have no restrictions on the number of access points per port. However, Cisco recommends using link aggregation (LAG) or configuring dynamic AP-manager interfaces on each Gigabit Ethernet port to automatically balance the load. If more than 100 access points are connected to the 5500 series controller, make sure that more than one gigabit Ethernet interface is connected to the upstream switch.
    http://www.cisco.com/c/en/us/td/docs/wireless/controller/6-0/configuration/guide/Controller60CG/c60mint.html
    Thanks for your help.

    The 5508-12, 5508-25, 5508-50, 5508-100, and 5508-250 models allow a total of 12, 25, 50, 100, or 250 access points to join the controller.
    This is an old document.  5508 can now support up to 500 APs if you run firmware 7.X.  2504 can support up to 75 APs if you run firmware 7.4.X.
    I'm looking for some recommendations on WLC 2504 and 5508 about the following:
    Best practice and recommendation is to LAG all ports so you will be able to form link redundancy. If one link goes down, you have another link to push traffic.

  • Best practice for creating RFC destination entries for 3rd parties(Biztalk)

    Hi,
    We are on SAP ECC 6 and we have been creating multiple RFC destination entries for external 3rd-party applications such as Biztalk and others, using the TCP/IP connection type and sharing the program ID.
    The RFC connections with IDoc as the data flow have been made using synchronous mode for the (few) time-critical ones, and the majority use asynchronous mode. The RFC destination entries have been created for many interfaces, which have unique RFC destinations with their corresponding ports defined in SAP.
    We have both inbound and outbound connectivity. With the large number of RFC destinations being added, we wanted to review the setup. We wanted to check with others who have encountered a similar situation and were keen to learn from their experiences.
    We also wanted to know if there are any best practices to optimise the number of RFC destinations.
    Here are a few suggestions we had in mind to tackle this.
    1. Create unique RFC destinations for every port defined in SAP for external applications such as Biztalk, for as many connections as needed. (This would mean one for inbound, one for outbound.)
    2. Create one single RFC destination entry for the external host/application, with the external application receiving the IDoc control record interpreting what action to perform at its end.
    3. Create RFC destinations based on the modules they link with, such as materials management, sales and distribution, and warehouse management. This would ensure we can limit the number of RFCs to be created and make it simple to understand the flow of data.
    I have checked the SAP Best Practices website, SAP OSS notes and help pages, but could not find the specific information I was after.
    I do understand we can have an unlimited number of RFC destinations and maximum connections using appropriate profile parameters for the gateway, RFC, client connections, and additional app servers.
    I would appreciate it if you could suggest the best architecture or practice for setting up RFC destinations in an optimized manner.
    Thanks in advance
    Sam

    Not easy to give a perfect answer
    1. Create unique RFC destinations for every port defined in SAP for external applications such as Biztalk, for as many connections as needed. (This would mean one for inbound, one for outbound.)
    -> Be careful if you have multiple clients (for example in acceptance): RFCs are client-independent but ports are not, so you could run into trouble.
    2. Create one single RFC destination entry for the external host/application, with the external application receiving the IDoc control record interpreting what action to perform at its end.
    -> This could be the best solution... it's easier to create partner profiles and the control record will contain the correct partner.
    3. Create RFC destinations based on the modules they link with, such as materials management, sales and distribution, and warehouse management. This would ensure we can limit the number of RFCs to be created and make it simple to understand the flow of data.
    -> For this, consider option 2.
    We send to a message broker with one RFC destination, sending multiple IDoc types, different partners, and different ports.

  • Best Practice to generate UUIDs in a Cluster-Server Environment

    Hi all,
    I just need some input on best practices for generating UUIDs in a typical internet-scale setup where multiple servers/JVMs are involved for load balancing, traffic distribution, etc. I know Java ships with a very efficient UUID generator API,
    but that by itself doesn't obviously solve the issue in a multiple-server environment.
    For the sake of discussion, let's assume I need the IDs to be unique across the whole setup rather than merely "nearly unique".
    How do you guys approach it?
    Thanks you all in advance.

    codeNombre wrote:
    Thanks jverd,
    So adding to the theory of "distinguishing all possible servers" in addition to UUIDs on each server would be the way to go.
    jverd wrote:
    If you're unreasonably paranoid, sure.
    codeNombre wrote:
    I think it's a common problem and there are a good number of folks who might still be bugged about the "relative uniqueness" of UUIDs in the long run.
    jverd wrote:
    People who don't understand probability and scale, sure.
    codeNombre wrote:
    Again coming back to my original problem in an "internet world", shouldn't a requirement like unique IDs across different servers be dealt with by generating the UUIDs at a layer before entering the multi-server setup? Where would that be? I don't have the answer..
    jverd wrote:
    Again, that is the POINT of the UUID class--so that you can generate as many IDs as you want and still be confident that nobody anywhere in the world has ever generated any of those same IDs. However, if your requirements say UUID is not good enough, then you need to define what is, and that means having a lot of foresight as to how this system will evolve and how long it will live, AND having total control over some aspect of your servers, AND having a process that is so good that it's LESS LIKELY for a human to screw up and re-use a "unique" server ID than the probabilities I presented in my previous post.
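    For reference, this is the standard library call the thread is discussing; nothing beyond the JDK is assumed. randomUUID() returns a type 4 (random) UUID, which is why no coordination between servers or JVMs is needed for the collision probabilities described above.

    import java.util.UUID;

    // Each call draws a type 4 (random) UUID from a cryptographically strong RNG,
    // so IDs generated independently on different servers/JVMs will not collide
    // in any practical sense.
    public class UuidDemo {
        public static void main(String[] args) {
            UUID id = UUID.randomUUID();
            System.out.println(id);            // e.g. 3f2504e0-4f89-41d3-9a0c-0305e82c3301
            System.out.println(id.version());  // prints 4
        }
    }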

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers that have many binaries on them to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this
    not considered best practice? 
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use ANY UNC path as a source location for content, you should be aware of where that content lives in relation to your primary site server. If your source content is in Montana but your primary server is in California... there's going to be a WAN hit... even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it. This will prevent side-loading of content as well as any "accidental changing" of the folder structure that could cause your applications/packages to go crazy.
    Put the two together and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications.  You can then safely create batch installers, manage content versions,
    and other things without fear of someone running something out of context.

  • Best practice for distributing/releasing J2EE applications.

    Hi All,
    We are developing a J2EE application and would like some information on the best
    practices to be followed for distributing/releasing J2EE applications, in general.
    In particular, the dilemma we have is centered around the generation of stub, skeleton
    and additional classes for the application.
    Most app servers can generate the required classes while deploying the EJBs in the application, i.e. at install time, while some (BEA WebLogic and IBM WebSphere are two that we are aware of) allow these classes to be generated before installation, so that the .ear file containing the additional classes is the one that is uploaded.
    For instance, say we have assembled the application "myapp.ear". There are two ways in which the classes can be generated. The first is using 'ejbc' (assume we are using BEA WebLogic), which generates the stub, skeleton and additional classes for the application and returns the file, say, "Deployable_myapp.ear", containing all the necessary classes and files. This file is the one that is then installed. The other option is to install the file "myapp.ear" and let the WebLogic app server itself generate the required classes at installation time.
    If the first way, of 'pre-generating' the stubs, is followed, does it require us to separately generate the stubs for each version of the app server that we support? That is, if we generate a deployable file containing the required classes using the 'ejbc' of WebLogic 5.1, can the same file be installed on WebLogic 6.1, or do we have to generate a separate file?
    If the second method, of 'install-time generation' of stubs, is used, what is the nature/magnitude of the risk that we are taking in terms of the installation failing?
    Any links to useful resources as well as comments/suggestions will be appreciated.
    TIA
    Regards,
    Aasif


  • Best Practice for Master Data Reporting

    Dear SAP-Experts,
    We face a challenge at the moment and we are still trying to find the right approach to it:
    Business requirement is to analyze SAP Material-related Master Data with the BEx Analyzer (Master Data Reporting)
    Questions they want to answer here are for example:
    - How many active Materials/SKUs do we have?
    - Which country/Sales Org has adopted certain Materials?
    - How many Series do we have?
    - How many SKUs belong to a specific season?
    - How many SKUs are in a certain product lifecycle?
    - etc.
    The challenge is that the Master Data is stored in tables with different keys in R/3.
    The keys in these tables are on various levels (a selection below):
    - Material
    - Material / Sales Org / Distribution Channel
    - Material / Grid Value
    - Material / Grid Value / Sales Org / Distribution Channel
    - Material / Grid Value / Sales Org / Distribution Channel / Season
    - Material / Plant
    - Material / Plant / Category
    - Material / Sales Org / Category
    etc.
    So even though the information is available at different levels of detail, the business requirement is to have one query/report that combines all the information. We are currently struggling a bit to decide what the best approach for this requirement would be. Has anyone faced such a requirement before, and what would be the best practice? We already tried to find information online, but it seems master data reporting is not very well documented. Thanks a lot for your valuable contribution to this discussion.
    Best regards
    Lukas

    Pass a reference to the parent into the modal popup. Then you
    can reference anything in the parent scope.
    I haven't done this in 2.0 yet so I can't give you code. I'll
    post if I do.
    Oh, also, you can reference the parent using parentDocument.
    So in the popup you could do:
    parentDocument.myPublicVariable = "whatever";
    Tracy

  • Searching for Best Practice links that work

    Hi,
    over the past few years I have been able to access SAP Best Practices documents such as SAP Best Practices for CP and Wholesale Industries
    (this one still works and guides me to the building block and process overview documents!).
    Recently, any link I can find to SAP Industry or Baseline Best Practices ends up being a dead link. See, for example, trying to get from the SAP Best Practices Baseline packages page on the SAP Help Portal
    to the Localized for Netherlands V1.607 SAP Best Practices package further down on that page; it results in the screenshot attached. I have seen the same in many more examples (different countries, or Industry Best Practice packages instead of Country Baseline packages....).
    Does anyone know whether and how SAP redesigned access to their Best Practices documents (Configuration Guides, eCATTs, Scenario Process Overviews, etc.)?
    Thanks for your reply.
    Thijs

    Hi, Thijs,
    There is currently a problem with Best Practices on the Help Portal.  On the home page of the portal (http://help.sap.com/) there is a message that reads "Stay Tuned - There are temporary problems when accessing some content types, for example PDF documents or Best Practices. We are working on a solution."
    Our Wholesale Distribution industry group does not manage the Help Portal pages, so, unfortunately, I don't know the status of the problem or when it might be resolved.
    Lynn

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised to not find a best practice document for how to distribute Microsoft SQL Databases. With other database formats, it's common to distribute them as scripts. It seems that feature is rather limited with the built-in
    tools Microsoft provides. There appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, since it limits which versions
    of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrades, including the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    Even if you just want to distribute schemas, you may want to distribute the entire database and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it (a brief sketch follows this reply), but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs (though not if you already have an automated backup/maintenance routine in your environment).
    As an alternative, you can use the Copy Database functionality in SSMS, although it can prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. This creates downtime for the detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine, and while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
    With all this said, there is no "best practice" for this. There are multiple features, each offering its own advantages and drawbacks, which allows them to align with different business requirements.
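    As a rough sketch of the BACKUP/RESTORE option mentioned above, the commands can be scripted, for example from Java via JDBC. Everything here is an assumption for illustration: the connection string, credentials, database name and file paths are placeholders, the Microsoft SQL Server JDBC driver is assumed to be on the classpath, and the SQL Server service account must be able to write to the target path.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Sketch: take a full backup of the database you want to ship to a customer.
    // The matching restore on the customer side would be something like:
    //   RESTORE DATABASE [MyProductDb] FROM DISK = N'C:\dist\MyProductDb.bak' WITH REPLACE
    public class BackupForDistribution {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=master;encrypt=false";
            try (Connection con = DriverManager.getConnection(url, "distUser", "distPassword");
                 Statement stmt = con.createStatement()) {
                stmt.execute("BACKUP DATABASE [MyProductDb] "
                        + "TO DISK = N'C:\\dist\\MyProductDb.bak' WITH INIT");
            }
        }
    }

    Keep the caveat above in mind: a backup taken on a newer SQL Server version will not restore on an older one, which is exactly the downgrade limitation discussed in this thread.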
