BEST PRACTICE FOR AN EFFICIENT SEARCH FACILITY
Good Morning,
Whilst in training, our trainer said that the most efficient way to run SharePoint Search is to install the search facility on a separate, dedicated physical server.
I am not sure how to go about setting this up.
Your advice and recommendation would be greatly appreciated.
thanks a mil.
NRH
Hi,
You can create a dedicated search server that hosts all of the search components (the query, index, and crawl roles) on one physical server.
Here are some articles for your reference:
Best practices for search in SharePoint Server 2010:
http://technet.microsoft.com/en-us//library/cc850696(v=office.14).aspx
Estimate performance and capacity requirements for SharePoint Server 2010 Search:
http://technet.microsoft.com/en-us/library/gg750251(v=office.14).aspx
Below is a similar post for your reference:
http://social.technet.microsoft.com/Forums/en-US/be5fcccd-d4a3-449e-a945-542d6d917517/setting-up-dedicated-search-and-crawl-servers?forum=sharepointgeneralprevious
Best regards
Wendy Li
TechNet Community Support
Similar Messages
-
Best practices for search service in a SharePoint farm
Hi
In a SharePoint web application, many BI dashboards are deployed, and we also plan to
configure enterprise search for this application.
In our SharePoint 2010 farm we have
2 application servers
2 WFE servers
One application server runs
Central Administration + the Web Analytics service, and it is itself a domain controller.
The second application server runs only the Secure Store service + the PerformancePoint service.
1 - If we run the Search service on the second application server, could it cause any issues for BI performance?
2 - Is it best practice to run the PerformancePoint service and the Search service on one server?
3 - Also, is it best practice to run the Search service on an application server where other services are already running,
given that we have only one SharePoint web application that needs to be crawled and indexed, with the crawl schedule below?
We only run a full crawl once per week and an incremental crawl at midnight daily.
adil
Hi adil,
Based on your description, you want to know the best practices for search service in a SharePoint farm.
Different farms have different search topologies; for the best search performance, I recommend that you follow the guidance for small, medium, and large farms.
The first article below describes that guidance for the different farm sizes.
The Search service can run alongside other services on the same server. However, if conditions permit and you want better performance for the Search service and the other services (including BI), you can deploy the Search service on a dedicated server.
If conditions permit, I also recommend combining a query component with a front-end web server, to avoid putting crawl components and query components on the same server.
In your SharePoint farm, you can deploy the query components on a WFE server and the crawl components on an application server.
The articles below describe the best practices for enterprise search.
https://technet.microsoft.com/en-us/library/cc850696(v=office.14).aspx
https://technet.microsoft.com/en-us/library/cc560988(v=office.14).aspx
Best regards
Sara Fan
TechNet Community Support -
Where to find best practices for tuning data warehouse ETL queries?
Hi Everybody,
Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL
queries don't use a WHERE clause, so the vast majority of searches are table scans and index scans, whereas most index tuning sites are striving for index seeks.)
I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds and use a cached plan SOMETIMES);
partition tables that are larger than 50 GB;
use minimal logging to load data precisely where you want it as fast as possible;
often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; a rough sketch of this pattern follows this list);
rebuild statistics after every load of a table.
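To make that index hint concrete, here is a minimal sketch of the load pattern as I understand it. The table, index, and connection names are made up, and the T-SQL is issued from plain JDBC purely for illustration; in our environment the same statements would live in an "Execute SQL" task.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical example: stage a large insert into dbo.FactSales by disabling a
// non-clustered index first, then rebuilding it and refreshing statistics.
public class BulkLoadPattern {
    public static void main(String[] args) throws Exception {
        // Invented connection string for illustration only.
        String url = "jdbc:sqlserver://dwserver;databaseName=DW;integratedSecurity=true";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {

            // 1. Disable the non-clustered index so the insert does not maintain it row by row.
            stmt.execute("ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales DISABLE");

            // 2. Load the new rows (minimal logging depends on the recovery model and insert hints).
            stmt.executeUpdate(
                "INSERT INTO dbo.FactSales WITH (TABLOCK) (SalesKey, CustomerKey, Amount) " +
                "SELECT SalesKey, CustomerKey, Amount FROM stg.FactSales");

            // 3. Rebuild the index and refresh statistics after the load.
            stmt.execute("ALTER INDEX IX_FactSales_CustomerKey ON dbo.FactSales REBUILD");
            stmt.execute("UPDATE STATISTICS dbo.FactSales");
        }
    }
}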
But I still feel like I'm missing some very crucial concepts for performant ETL development.
BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute
SQL" tasks.
Thanks, and any best practices you could include here would be greatly appreciated.
-Eric
Online ETL solutions are among the most challenging to build efficiently. To do that, you can read my blog posts on online DWH solutions; at the end you will know how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and you will also pick up some important concepts that apply to any DWH solution, such as indexing, de-normalization, etc.
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
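As a rough illustration of the MERGE-based load mentioned above, here is a minimal sketch. The table names and the JDBC connection string are invented for the example; only the MERGE statement itself is standard SQL Server 2008 syntax.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical upsert from a staging table into a dimension table using MERGE.
public class MergeLoadExample {
    public static void main(String[] args) throws Exception {
        // Invented connection string for illustration only.
        String url = "jdbc:sqlserver://dwserver;databaseName=DW;integratedSecurity=true";
        String merge =
            "MERGE dbo.DimCustomer AS tgt " +
            "USING stg.Customer AS src " +
            "  ON tgt.CustomerKey = src.CustomerKey " +
            "WHEN MATCHED THEN " +
            "  UPDATE SET tgt.Name = src.Name, tgt.City = src.City " +
            "WHEN NOT MATCHED BY TARGET THEN " +
            "  INSERT (CustomerKey, Name, City) VALUES (src.CustomerKey, src.Name, src.City);";

        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // One round trip performs both the update of existing rows and the
            // insert of new rows, which is the core of the online ETL pattern.
            int affected = stmt.executeUpdate(merge);
            System.out.println("Rows merged: " + affected);
        }
    }
}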
Kindly let me know if any further help is needed
Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities -
Best practices for ARM - please help!!!
Hi all,
Can you please help with any pointers / links to documents describing best practices for "who should be creating" the GRC request in the ARM workflow below, in GRC 10.0?
Create GRC request -> role approver -> risk manager -> security team
options are : end user / Manager / Functional super users / security team.
End user and manager are not possible: we cannot train that many people. The functional team is refusing since it is a lot of work. Please help me with pointers to any best practices documents.
Thanks!!!!
In this case, I recommend proposing that the department managers create GRC Access Requests. In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables. This Role Catalog needs to be taught to the department managers, and they need to fully understand what tcodes and abilities are inside of each role. From your workflow design, it looks like Role Owners should be brought into these workshops.
You might consider a Role Catalog that the manager could filter on and make selections from. For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles. You could map business functions or tasks to specific technical roles. The design flaw here, of course, is the way your technical roles have been designed.
The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs. They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
At all costs, avoid having end-users create ARs. They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact. You should choose a Requestor who has the highest chance of requesting the correct access. This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination). I do not recommend going down this path, however. It is very confusing, time consuming, and difficult to integrate properly.
Good luck!
-Ken -
Best Practices For Household iOS Devices/Apple IDs
Greetings:
I've been searching support for best practices for sharing primarily apps, music and video among multiple iOS devices/Apple IDs. If there is a specific article please point me to it.
Here is my situation:
We currently have 3 iPads (2 for the kids, 1 for dad) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password that I kept secret. As the kids have grown older, I found myself constantly entering my password as their interest in music/apps/video increased. I liked this approach because all content was shared... I disliked it because I was constantly asked to input the password for all downloads.
So, I recently set up individual accounts for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads). Now I have 3 Apple IDs under one household.
My questions:
With the 3 Apple IDs, what is the best way to share apps, music, and videos among myself and the kids? Is it multiple accounts on the computer and some sort of sharing?
Thanks in advance...
Hi Bonesaw1962,
We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
~Joe -
I posted a few days ago re failing HDD on mid-2007 iMac. Long story short, took it into Apple store, Genius worked on it for 45 mins before decreeing it in need of new HDD. After considering the expenses of adding memory, new drive, hardware and installation costs, I got a brand new iMac entry level (21.5" screen,
2.7 GHz Intel Core i5, 8 GB 1600 MHz DDR3 memory, 1TB HDD running Mavericks). Also got a Superdrive. I am not needing to migrate anything from the old iMac.
I was surprised that a physical disc for the OS was not included. So I am looking for any Best Practices for setting up this iMac, specifically in the area of backup and recovery. Do I need to make a boot DVD? Would that be in addition to making a Time Machine full backup (using an external G-drive)? I have searched this community and the Help topics on Apple Support and have not found any "checklist" of recommended actions. I realize the value of everyone's time, so any feedback is very appreciated.
OS X has not been officially issued on physical media since OS X 10.6 (arguably 10.7 was issued on some USB drives, but this was a non-standard approach for purchasing and installing it).
To reinstall the OS, your system comes with a recovery partition that can be booted to by holding the Command-R keys immediately after hearing the boot chimes sound. This partition boots to the OS X tools window, where you can select options to restore from backup or reinstall the OS. If you choose the option to reinstall, then the OS installation files will be downloaded from Apple's servers.
If for some reason your entire hard drive is damaged and even the recovery partition is not accessible, then your system supports the ability to use Internet Recovery, which is the same thing except instead of accessing the recovery boot drive from your hard drive, the system will download it as a disk image (again from Apple's servers) and then boot from that image.
Both of these options will require you have broadband internet access, as you will ultimately need to download several gigabytes of installation data to proceed with the reinstallation.
There are some options available for creating your own boot and installation DVD or external hard drive, but for most intents and purposes this is not necessary.
The only "checklist" option I would recommend for anyone with a new Mac system, is to get a 1TB external drive (or a drive that is at least as big as your internal boot drive) and set it up as a Time Machine backup. This will ensure you have a fully restorable backup of your entire system, which you can access via the recovery partition for restoring if needed, or for migrating data to a fresh OS installation. -
Best practices for office 365 SHARED CALENDAR for whole school / organization
hi
we need guidance on best practice for setting up a SHARED CALENDAR on the Office 365 Exchange server for an entire organization (a school) of 150 staff.
Requirements
+ all staff should have read only / reviewer permissions on calendar
+ a handful of staff should have editor permissions on the calendar
+ the calendar should synchronise custom categories and colors
Current Solution
at the moment we have found that a shared mailbox is the best solution because:
- all users can add the shared mailbox in Outlook 2010 as an additional, read-only mailbox
- all the categories & colors for the calendar are automatically synchronised because the color categories are stored within this mailbox.
- you can edit calendar permissions in Outlook to allow some users to act as "editor" of the calendar.
Problem with Current Solution
the problem however is that the users also need to access this...
This topic first appeared in the Spiceworks Community
Hi Aleksei,
I think Inactive mailboxes in Exchange Online is the feature that you want. This feature makes it possible for you to preserve (store and archive) the contents of deleted mailboxes indefinitely.
A mailbox becomes inactive when an In-Place Hold or a
Litigation Hold is placed on the mailbox before the corresponding Office 365 user account is deleted.
But I'm afraid that it might be impossible to "easily share certain folders or even the whole mailbox with people in the company". As can be seen from the articles below, this feature only allows administrators, compliance officers, or records managers
to use the In-Place eDiscovery feature in Exchange Online to access and search the contents of an inactive mailbox:
http://technet.microsoft.com/en-us/library/dn144876(v=exchg.150).aspx
http://blogs.technet.com/b/exchange/archive/2013/03/21/preserve-mailbox-data-for-ediscovery-using-inactive-mailboxes-in-exchange-online.aspx
Anyway, this is the forum to discuss questions and feedback for the Microsoft Office client. For more details about your question, I would suggest you post in the dedicated forum for
Exchange Online, where you can get more experienced responses:
https://social.technet.microsoft.com/Forums/msonline/en-US/home?forum=onlineservicesexchange
The reason why we recommend posting appropriately is you will get the most qualified pool of respondents, and other partners who read the forums regularly can either share their knowledge or learn from your interaction with us. Thank you for your understanding.
Regards,
Ethan Hua
TechNet Community Support
It's recommended to download and install the Office
Configuration Analyzer Tool (OffCAT), which is developed by Microsoft Support teams. Once the tool is installed, you can run it at any time to scan for hundreds of known issues in Office
programs. -
Best practices for setting up projects
We recently adopted using Captivate for our WBT modules.
As a former Flash and Director user, I can say it’s
fast and does some great things. Doesn’t play so nice with
others on different occasions, but I’m learning. This forum
has been a great source for search and read on specific topics.
I’m trying to understand best practices for using this
product. We’ve had some problems with file size and
incorporating audio and video into our projects. Fortunately, the
forum has helped a lot with that. What I haven’t found a lot
of information on is good or better ways to set up individual
files, use multiple files and publish projects. We’ve decided
to go the route of putting standalones on our Intranet. My gut says
yuck, but for our situation I have yet to find a better way.
My question for discussion, then is: what are some best
practices for setting up individual files, using multiple files and
publishing projects? Any references or input on this would be
appreciated.
Hi,
Here are some of my suggestions:
1) Set up a style guide for all your standard slides. Eg.
Title slide, Index slide, chapter slide, end slide, screen capture,
non-screen capture, quizzes etc. This makes life a lot easier.
2) Create your own buttons and captions. The standard ones
are pretty ordinary, and it's hard to get a slick looking style
happening with the standard captions. They are pretty easy to
create (search for add print button to learn how to create
buttons). There should be instructions on how to customise captions
somewhere on this forum. Customising means that you can also use
words, symbols, colours unique to your organisation.
3) Google elearning providers. Most use captivate and will
allow you to open samples or temporarily view selected modules.
This will give you great insight on what not to do and some good
ideas on what works well.
4) Timings: Using the above research, I got others to
complete the sample modules to get a feel for timings. The results
were clear, 10 mins good, 15 mins okay, 20 mins kind of okay, 30
mins bad, bad, bad. It's truly better to have a learner complete
2-3 short modules in 30 mins than one big monster. The other
benefit is that shorter files equal smaller size.
5) Narration: It's best to narrate each slide individually
(particularly for screen capture slides). You are more likely to
get it right on the first take, it's easier to edit and you don't
have to re-record the whole thing if you need to update it in
future. To get a slicker effect, use at least two voices: one male,
one female and use slightly different accents.
6) Screen capture slides: If you are recording filling out
long window-based database pages where the compulsory fields are
marked (eg. with a red asterisk) - you don't need to show how to
fill out every field. It's much easier for the learner (and you) to
show how to fill out the first few fields, then fade the screen
capture out, fade the end of the form in with the instructions on
what to do next. This will reduce your file size. In one of my
forms, this meant the removal of about 18 slides!
7) Auto captions: they are verbose (eg. 'Click on Print
Button' instead of 'Click Print'; 'Select the Print Preview item'
instead of 'Select Print Preview'). You have to edit them.
8) PC training syntax: Buttons and hyperlinks should normally
be 'click'; selections from drop down boxes or file lists are
normally 'select': Captivate sometimes mixes them up. Instructions
should always be written in the correct order: eg. Good: Click
'File', Select 'Print Preview'; Bad: Select 'Print Preview' from
the 'File Menu'. Button names, hyperlinks, selections are normally
written in bold
9) Instruction syntax: should always be written in an active
voice: eg. 'Click Options to open the printer menu' instead of
'When the Options button is clicked on, the printer menu will open'
10) Break all modules into chapters. Frame each chapter with
a chapter slide. It's also a good idea to show the Index page
before each chapter slide with a progress indicator (I use an
animated arrow to flash next to the name of the next chapter), I
use a start button rather a 'next' button for the start of each
chapter. You should always have a module overview with the purpose
of the course and a summary slide which states what was covered and
they have completed the module.
11) Put a transparent click button somewhere on each slide.
Set the properties of the click box to take the learner back to the
start of the current chapter by pressing F2. This allows them to
jump back to the start of their chapter at any time. You can also
do a similar thing on the index pages which jumps them to another
chapter.
12) Recording video capture: best to do it at normal speed
and be conscious of where your mouse is. Minimise your clicks. Most
people (until they start working with Captivate) are sloppy with
their mouse and you end up with lots of unnecessary slides that
you have to delete out. The speed will default to how you recorded
it and this will reduce the amount of time you spend on changing
timings.
13) Captions: My rule of thumb is minimum of 4 seconds - and
longer depending on the amount of words. Eg. Click 'Print Preview'
is 4 seconds, a paragraph is longer. If you're creating knowledge
based modules, make the timing long (eg. 2-3 minutes) and put in a
next button so that the learner can click when they are ready.
Also, narration means the slides will normally be slightly longer.
14) Be creative: Captivate is desk-bound. There are some
learners that just don't respond no matter how interactive
Captivate can be. Incorporate non-captivate and desk free
activities. Eg. As part of our OHS module, there is an activity
where the learner has to print off the floor plan, and then wander
around the floor marking on the map key items such as: fire exits,
first aid kit, broom and mop cupboard, stationery cupboard, etc.
Good luck! -
Best Practice for SSL in Apache/WL6.0SP1 configuration?
What is the best practice for enabling SSL in an Apache/WL6.0SP1
configuration?
Is it:
Browser to Apache: HTTPS
Apache to WL: HTTP
or
Browser to Apache: HTTPS
Apache to WL: HTTPS
The first approach seems more efficient (assuming that Apache and WL are
both in a secure datacenter), but in that case, how does WL know that the
browser requested HTTPS to begin with?
Thanks
Alain
getScheme() should return HTTPS if the client is using HTTPS, or HTTP if it
is using HTTP.
Whether the plug-in uses HTTP or HTTPS when connecting to WebLogic
is up to you, but regardless, the scheme used by the client will be passed on to
WebLogic.
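For illustration, here is a minimal servlet sketch. The class name and the redirect policy are invented; only getScheme() and isSecure() are the standard Servlet API calls being shown.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet showing how the original scheme can be checked on WebLogic.
public class SchemeCheckServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // getScheme()/isSecure() reflect what the browser used, even if the
        // Apache plug-in forwards the request to WebLogic over plain HTTP.
        String scheme = request.getScheme();   // "http" or "https"
        boolean secure = request.isSecure();

        if (!secure) {
            // Example policy (invented for this sketch): send the client back to HTTPS.
            response.sendRedirect("https://" + request.getServerName()
                    + request.getRequestURI());
            return;
        }
        response.setContentType("text/plain");
        response.getWriter().println("Request arrived as " + scheme);
    }
}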
Eric
"Alain" <[email protected]> wrote in message
news:[email protected]..
How should we have the plug-in tell wls the client is using https?
Should we have the plugin talk to wls in HTTP or HTTPS?
Thanks
Alain
"Jong Lee" <[email protected]> wrote in message
news:3b673bab$[email protected]..
The apache plugin tells wls the client is using https and also passes on the client cert if any.
"Alain" <[email protected]> wrote:
What is the best practice for enabling SSL in an Apache/WL6.0SP1
configuration?
Is it:
Browser to Apache: HTTPS
Apache to WL: HTTP
or
Browser to Apache: HTTPS
Apache to WL: HTTPS
The first approach seems more efficient (assuming that Apache and WL
are
both in a secure datacenter), but in that case, how does WL know that
the
browser requested HTTPS to begin with?
Thanks
Alain -
Best practice for invoice posting in SRM
Dear expert,
what is the best practice for posting an invoice? Is it in the SRM system or the MM system? In SRM, the system allows one to use an approval procedure for invoice posting, whereas in MM, one cannot use the approval procedure to post an invoice. Is the case similar in the SRM-SUS and MM-SUS scenarios?
I would appreciate your feedback on the industry practice.
thanks and regards,
Ranjan
Ranjan,
As a SAP customer we use invoice entry in ERP using MIRO and FB60, as these are the most efficient for our business.
1. Invoice entry using the MIRO tcode is faster as it can be done with minimal use of the mouse. Invoice entry is slower in SRM because browser response and interactions with the backend ERP system slow the system's response to user input. Repeated use of the mouse in the SRM browser is detrimental to fast input.
2. Not all types of invoices can be handled in SRM e.g. Invoice without PO.
3. We process approx 20,000 invoices per month with multiple input operators; SRM could not handle that sort of load.
4. SRM is really a procurement application and although invoice entry is included it is probably more for users who wish to use SRM in a stand alone scenario. We use extended classic so all our financial transactions take place in the ERP backend.
Your choice also depends upon the number and quality of the invoices you plan to process, and the need for the operators to be trained on one or two systems.
Hope these personal observations assist with your decision
Regards
Allen -
Best Practice for using multiple models
Hi Buddies,
Can you tell me the best practices for using multiple models in a single WD application?
I mean: I am using 3 RFCs in a single application for my function. Each time, I import that RFC model under
WD ---> Models, and I bind each model separately to the Component Controller. Is this the right way to implement multiple models in a single application?
It very much depends on your design, but one RFC per model is definitely a no-no.
Refer to this document to understand how you should use the model in the most efficient way.
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/705f2b2e-e77d-2b10-de8a-95f37f4c7022?quicklink=events&overridelayout=true
Thanks
Prashant -
What are best practice for packaging and deploying j2EE apps to iAS?
We've been running a set of J2EE applications on a pair of iAS SP1b for about a year and it has been quite stable.
Recently, however, we have had a number of LDAP issues, particularly when registering and unregistering applications (registering ear files sometimes fails the first time but may work the second time). Also, we've noticed very occasionally that old versions of classes sometimes find their way onto our machines.
What is considered to be best practice in terms of packaging and deployment, specifically:
1) Packaging - using the deployTool that comes with iAS6 SP1b to package is a big manual task, especially when you have 200+ jsp files. Are people out there using this or are they scripting it with a build tool such as Ant?
2) Deploying an existing application to multiple iAS's. Are you guys unregistering the old application and then reregistering the new application? Are you shutting down iAS whilst doing the deployment?
3) Deploying ear files can take 5 to 10 mins, is this normal?
4) In a clustered scenario where HTTPSession is shared what are the consequences of doing deployments to data stored in session?
thanks in advance for your replies
Owen
You may want to consider upgrading your application server environment to a newer service pack. There are numerous enhancements involving the deployment tool and the run-time layout of your application that make it clear where your application is loading its files from.
If you've got a long-running application server environment, with lots of deployments under your belt, you might start to notice slowdowns in deployment and KJS start time. Generally this is due to garbage accumulating in your iAS registry.
You can do several things to resolve this. The most complete solution is to reinstall the application server. This will guarantee a clean LDAP registry. Of course, you've got to reestablish your configurations and redeploy your applications. When done, back up your application server install space with the application server and directory server off. You can use this backup to return to a known configuration at some future time.
For the second method: <B>BE CAREFUL - BACKUP FIRST</B>
There is a more exhaustive solution that involves examining your deployed components to determine the active GUIDs. You then search the NameTrans section of the registry for Applogic Servlet *, and Bean * entries that represent your previously deployed components but are no longer represented in the set of active GUIDs. Record these older GUIDs and remove them from ClassImp and ClassDef. Finally, remove the older entries from NameTrans.
Best practices for deployment depend on your particular environmental needs. Many people utilize ANT as a build tool. In later versions of the application server, complete ANT scripts are included that address compiling, assembly and deployment. Ant 1.4 includes iAS specific targets and general J2EE targets. There are iAS specific targets that can be utilized with the 1.3 version. Specialized build targets are not required however to deploy to iAS.
Newer versions of the deployment tool allow you to specify that JSPs are not to be registered automatically. This can be significant if deployment times lag. Registered JSP's however benefit more fully from the services that iAS offers.
2) In general it is better to undeploy and then redeploy. However, if you know that you're not changing GUIDs, recreating an existing application with new GUIDs, or removing registered components, you may avoid the undeploy phase.
If you shut down the KJS processes during deployment, you can eliminate some additional workload on the LDAP server, which really gets pounded during deployment. This is because the KJS processes detect changes and do registry loads to repopulate their caches. This can happen many times during a deployment and does not provide any benefit.
3) Deploying can be a lengthy process. There have been improvements in that performance from service pack to service pack, but unfortunately you won't see dramatic drops in deployment times.
One thing you can do to reduce deployment times is to understand the type of deployment. If you have not manipulated your deployment descriptors in any way, then there is no need to redeploy. Simply drop your newer bits into the run-time space of the application server. In later service packs this means exploding the package (ear, war, or jar) into the appropriate subdirectory of the APPS directory.
4) If you've changed the classes of objects that have been placed in HTTPSession, you may find that you can no longer utilize those objects. For that reason, it is suggested that objects placed in session be kept as simple as possible in order to minimize this effect. In general, however, it is not a good idea to change a web application during the lifespan of a session. -
Best Practice for Distributing Databases to Customers
I did a little searching and was surprised to not find a best practice document for how to distribute Microsoft SQL Databases. With other database formats, it's common to distribute them as scripts. It seems that feature is rather limited with the built-in
tools Microsoft provides. There appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages by limiting what versions
of SQL Server will accept the database.
What do you recommend and can you point me to some documentation that handles this practice?
Thank you.
It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE,
detach/attach, script generation, Microsoft Sync framework, and a few others.
EVEN if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs, but not if you already have an automated
backup/maintenance routine in your environment.
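As a rough, hypothetical sketch of the BACKUP/RESTORE approach: the database name, file paths, logical file names, and connection strings below are all invented, and the T-SQL is run from JDBC purely for illustration; the BACKUP and RESTORE statements themselves are standard.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch: back up a database on the source instance, then restore it
// on the customer's instance after the .bak file has been copied across.
public class DistributeDatabase {
    public static void main(String[] args) throws Exception {
        // On the source instance: write a full backup to disk.
        try (Connection src = DriverManager.getConnection(
                 "jdbc:sqlserver://sourcehost;databaseName=master;integratedSecurity=true");
             Statement stmt = src.createStatement()) {
            stmt.execute("BACKUP DATABASE [ProductDb] TO DISK = N'C:\\backups\\ProductDb.bak' WITH INIT");
        }

        // On the destination instance: restore the copied .bak file, relocating the
        // data and log files with MOVE (logical file names assumed for this example).
        try (Connection dst = DriverManager.getConnection(
                 "jdbc:sqlserver://customerhost;databaseName=master;integratedSecurity=true");
             Statement stmt = dst.createStatement()) {
            stmt.execute(
                "RESTORE DATABASE [ProductDb] FROM DISK = N'C:\\backups\\ProductDb.bak' " +
                "WITH MOVE 'ProductDb' TO N'D:\\data\\ProductDb.mdf', " +
                "MOVE 'ProductDb_log' TO N'D:\\data\\ProductDb_log.ldf', REPLACE");
        }
    }
}

Note that the restore will only succeed on an instance whose version is equal to or newer than the source, which matches the version caveat above.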
As an alternative, you can use the Copy Database functionality in SSMS, although it may prove unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
And then there is the previously mentioned method of generating scripts for the schema, and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated
executions). It works fine and, while not as practical as the other options, it is the best way to distribute databases when their version is being downgraded.
With all this said, there is no "best practice" for this. There are multiple features, each offering their own advantages and downfalls which allow them to align to different business requirements. -
Best Practice for report output of CRM Notes field data
My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM Request Notes data. Example: the business wants a report of all Service Requests where the Notes field contains the word "pay" or "payee" or "payment". As part of the report output, the business wants to freely select the output fields that accompany the notes data. Can anyone please advise on SAP's Best Practice for meeting a report requirement such as this? Is a custom ABAP application built? Does data get moved to BW for reporting (and how are notes handled)? Is data moved to a separate system?
Hi David,
I would leave your query
"Am I doing something wrong and did I miss something that would prevent this problem?"
to the experts/ gurus out here on this forum.
From my end, you can follow
TOP 10 EXCEL TIPS FOR SUCCESS
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
Please follow the Xcelsius Best Practices at
https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
In order to reduce the size of xlf and swf files follow
http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
Hope this helps to a certain extent.
Regards
Nikhil -
Best Practice for the Service Distribution on multiple servers
Hi,
Could you please advise on the best practice for the above.
Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.)
Capacity: we have 12 servers, excluding the SQL Server.
Please do not just refer to any URL; suggest as per the requirements.
Thanks
srabon
How about a link to the MS guidance!
http://go.microsoft.com/fwlink/p/?LinkId=286957