Crawler issues

I installed the Plumtree Content Service for Windows Files (latest SP1 version).
After importing the packages and setting the crawler to run on a Windows folder, the crawler runs with the error "unrecoverable error when bulk importing cards".
It does import cards into the system, though. I can see the cards/properties/files in edit mode, but when I go into browse mode I do not see any files. The crawler is set to approve files automatically, and I also manually approved the files.
Not sure where the problem is?

Hi!
It seems as though your Search service is not indexing those files, but they exist in the portal DB.
Is that "unrecoverable error when bulk importing cards" error the only message you receive in the job log?
Thanks

Similar Messages

  • Best Practice to troubleshoot a Crawler Issue after SQL Server restarts

    Hi Everyone
    I am after some general advice, or a suggested place to start, when troubleshooting crawler issues after an unplanned, out-of-sequence server restart -
    specifically, a restart of the SQL database server in the SharePoint 2010 farm. Better yet, is there a standard-practice way of resolving such issues?
    So far the articles I have found suggest options ranging from reviewing the crawl logs, to creating a new crawl component, right through to using Fiddler to monitor the crawler.
    Are these sufficient places to start and methodologies to follow, and what else should be considered?
    Any advice greatly appreciated.
    Thanks,
    Mike

    Well, as I said before, there are lots of different potential issues & resolutions for crawlers.  It really depends on the issue.  I would say that the base troubleshooting steps start the same no matter which service/feature you are looking
    at.  So, I'll try to keep this sort of generic, but beyond finding the details of the problem, the SOP or process will vary greatly based on what the error is.  I hope this helps, and sorry if it's not specific enough.
    1 - check the ULS logs
    2 - check the windows application logs
    3 - verify related services are running (get-spserviceinstance), possibly stop/start them to reprovision the instance on the affected server
    4 - clear the config cache (this alone will clear about 30% of your basic problems)
    5 - verify disk space & resource consumption on the affected server (& SQL; SQL always has the potential to be the true "affected" server)
    6 - iisreset
    7 - verify connectivity between all servers in the farm and SP
    8 - verify required features are activated
    9 - check if any changes were made to the environment recently (new hardware, updates to OS or apps, updates to GPOs, new solutions, etc)
    10 - check if the issue is reproducible in another environment (only reliable if this is a similar environment, i.e. same patch-level test farm or DR farm).  See if it occurs in other site collections within the web app, different web apps, different
    servers/browsers, etc; basically just try to nail down the scope of the problem
    There is a whole slew of things you could check, from verifying certificates & perms, to rerunning psconfig, to checking registry keys; again, it depends.  Hopefully this can get you started, though.  In the end ULS is where all
    the real info on the issue is going to be, so make sure you check there.  Don't go in with tunnel vision either: if you see other errors in ULS, check them out; they may or may not be related.  SharePoint is an intricate product with way more moving
    parts than most systems.  Fix the little quick ones that you know you can handle; this will help to keep the farm clean and healthy, as well as crossing them off the list of potential suspects for your root cause.
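    For steps 1 and 3, here is a minimal PowerShell sketch, assuming it runs in the SharePoint 2010 Management Shell on an affected server; the filters and the 30-minute window are illustrative, not prescriptive.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Step 3: list search-related service instances and their status per server.
    Get-SPServiceInstance |
        Where-Object { $_.TypeName -like "*Search*" } |
        Select-Object Server, TypeName, Status |
        Format-Table -AutoSize
    # Step 1: pull recent high-severity ULS entries for a first look (last 30 minutes).
    Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-30) |
        Where-Object { $_.Level -eq "Critical" -or $_.Level -eq "High" } |
        Select-Object Timestamp, Area, Category, Message -First 50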
    Christopher Webb | MCM: SharePoint 2010 | MCSM: SharePoint Charter | MCT | http://christophermichaelwebb.com

  • Crawling issues with Ajax Module

    Hello all
    I have a crawling issue on the homepage of my website: robots don't seem to go past the homepage (to crawl the links) because the data is pulled through an Ajax module. I am trying to figure out a way to make the links crawlable; my approach would be to show an alternate view when JavaScript is disabled. I just need validation that the code below would work:
    <!-- start of global-content -->
    <div id="global-content"><!--start of ajax content--->
      <h1>Tourisme Laval - Pleins d'affaires à faire</h1>
      <ul>
        <li><strong>Hotel Laval</strong>: Laval vous propose de nombreux hebergements tout au long de l’année.</li>
        <li><strong>Odysséo de Cavalia - À partir du 14 mai 2013</strong>: Après une année de tournée nord-américaine triomphale, Odysséo revient au Québec, à Laval!</li>
        <li><strong>Mondial Loto-Québec de Laval - 19 juin au 7 juillet 2013</strong>: Le Mondial Loto-Québec de Laval est sans contredit le festival de l'été à Laval avec des centaines de spectacles de musique pour tous les goûts.</li>
      </ul>
    <!--end of ajax content--->
    </div><!-- end of global-content -->
    By placing alternate HTML content here (to be replaced when the AJAX content loads), would that be enough for the links to get crawled?
    thanks for your help !
    Helene

    Thanks for your reply. I'm not so sure about the <noscript> tag, because I have read posts about reasons not to use the <noscript> tag: http://javascript.about.com/od/reference/a/noscriptnomore.htm. The main reason is:
    The noscript tag is a block level element and therefore can only be used to display entire blocks of content when JavaScript is disabled. It cannot be used inline.
    In other words, from this explanation, I would not be able to add HTML tags inline within the <noscript> element.

  • Crawl Issues?

    About 1 week ago I noticed that our incremental crawls were taking an extremely long time to complete. When they do complete, there are usually a ton of errors. I am seeing the following errors in the Event Viewer:
    SharePoint Server Search - 2580
    SharePoint Server Search - 2581
    I have 142k errors in SharePoint. The error message is: "Processing this item failed because of an unknown error when trying to parse its contents."
    I have three web applications: Portal (intranet), People (MySites), and Publish (publishing). I've separated the content sources according to the web application. There seems to be an issue with the crawl?
    Has anyone else experienced this issue or have any suggestions?

    What do you see in ULS log regarding this error?
    During the same time, I see the following errors:
    Microsoft.Ceres.ContentEngine.SubmitterComponent.ContentSubmissionDecorator : CreateSession() failed with unexpected exception System.ServiceModel.FaultException`1[System.String]: The creator of this fault did not specify a Reason. (Fault Detail is equal
    to Specified flow: Microsoft.CrawlerFlow could not be started. Unable to start a flow instance on any of the available nodes.).
    <CompanyName>Session::Init: m_CSSConnection.Service.CreateSession threw exception (System.ServiceModel.CommunicationException), message = Exception type is System.ServiceModel.FaultException`1[System.String] message = The creator of this fault did
    not specify a Reason.;, CSSNode = net.tcp://<computername>/5DC9BD/ContentProcessingComponent1/ContentSubmissionServices/content, Flow = Microsoft.CrawlerFlow
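    When the ULS shows "Unable to start a flow instance on any of the available nodes", it is usually worth checking the health of the content processing component first. A minimal PowerShell sketch, assuming a single Search Service Application; component names will differ per topology.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    # Lists every search component (crawl, content processing, index, etc.) and its state.
    Get-SPEnterpriseSearchStatus -SearchApplication $ssa -Detailed |
        Select-Object Name, State |
        Format-Table -AutoSize
    # Restarting the SharePoint Search Host Controller service on the affected server is a common follow-up.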

  • Major crawler issues - removal of trailing slashes from URLs and (seemingly) poor regex implementation

    I've been suffering from two issues when setting up a crawl of an intranet website hosted on a proprietary web CMS using SharePoint 2010 Search. The issues are:
    1: The crawler appears to remove trailing slashes from the end of URLs.
    In other words, it turns http://mysite.local/path/to/page/ into http://mysite.local/path/to/page. The CMS can cope with this as it automatically forwards requests for http://mysite.local/path/to/page to http://mysite.local/path/to/page/ -
    but every time it does this it generates a warning in the crawl - one of the "URL was permanently moved" warnings. The URL hasn't been moved, though, the crawler has just failed to record it properly. I've seen a few posts about this in various places,
    all of which seem to end without resolution, which is particularly disappointing given that URLs in this format (with a trailing slash) are pretty common. (Microsoft's own main website has a few in links on its homepage - http://www.microsoft.com/en-gb/mobile/
    for instance).
    The upshot of this is that a crawl of the site, which has about 50,000 pages, is generating upwards of 250,000 warnings in the index, all apparently for no reason other than the crawler changes the address of all the pages it crawls? Is there a fix for this?
    2: The Regex implementation for crawl rules does not allow you to escape a question mark
    ... despite what it says
    here: I've tried and tested escaping a question mark in a url pattern by surrounding it in square brackets in the regex, i.e.: [?] but regardless of doing so, any URL with a question mark in just falls right through the rule. As soon as you remove the 'escaped'
    (i.e. seemingly not actually escaped at all) question mark from the rule, and from the test URL pattern, the rule catches, so it's definitely not escaping it properly using the square brackets. The more typical regex escape pattern (a backslash before the
    character in question) doesn't seem to work, either. Plus neither the official documentation on regex patterns I've been able to find, nor the book I've got about SP2010 search, mention escaping characters in SP2010 crawl rule regexes at all. Could it be that
    MS have released a regex implementation for matching URL patterns that doesn't account for the fact that ? is a special character in both regex and in URLs?
    Now I could just be missing something obvious and would be delighted to be made to look stupid by someone giving me an easy answer that I've missed, but after banging my head against this for a couple of days, I really am coming to the conclusion that Microsoft
    have released a regex implementation for a crawler that doesn't work with URL patterns that contain a question mark. If this is indeed the case, then that's pretty bad, isn't it? And we ought to expect a patch of some sort? I believe MS are supporting SP2010
    until 2020? (I'd imagine these issues are fixed in 2013 Search, but my client won't be upgrading to that for at least another year or two, if at all).
    Both these issues mean that the crawl of my client's website is taking much longer, and generating much more data, than necessary. (I haven't actually managed to get to the end of a full crawl yet because of it... I left one running overnight and I just
    hope it has got to the end. Plus the size of the crawl db was 11GB and counting, most of that data being pointless warnings that the crawler appeared to be generating itself because it wasn't recording URLs properly). This generation of pointless mess
    is also going to make the index MUCH harder to maintain. 
    I'm more familiar with maintaining crawls in Google systems which have much better regex implementations - indeed (again, after two days of banging my head against this) I'd almost think that the regex implementation in 2010 search crawl rules was cobbled
    together at the last minute just because the Google Search Appliance has one. If so (and if I genuinely haven't missed something obvious - which I really hope I have) I'd say it wasn't worth Microsoft bothering because the one they have released appears to
    be entirely unfit for purpose?
    I'm sure I'm not the first person to struggle with these problems and I hope there's an answer out there somewhere (that isn't just "upgrade to 2013")?
    I should also point out that my client is an organisation with over 3000 staff, so heaven knows how much they are paying MS for their Enterprise Agreement. Plus I pay MS over a grand a year in MSDN sub fees etc, so (unless I'm just being a numpty) I would
    expect a higher standard of product than the one I'm having to wrestle with at the moment.

    Hi David,
    As far as I know, in SharePoint 2010 crawl there is a rule to include or exclude URLs that use the '?' character. May I ask whether you have implemented such a rule?
    In the Crawl Configuration section, select one of the following options:
    Exclude all items in this path. Select this option if you want to exclude all items in the specified path from crawls. If you select this option, you can refine the exclusion by selecting the following:
    Exclude complex URLs (URLs that contain question marks (?)). Select this option if you want to exclude URLs that contain parameters that use the question mark (?) notation.
    Include all items in this path. Select this option if you want all items in the path to be crawled. If you select this option, you can further refine the inclusion by selecting any combination of the following:
    Follow links on the URL without crawling the URL itself. Select this option if you want to crawl links contained within the URL, but not the starting URL itself.
    Crawl complex URLs (URLs that contain a question mark (?)). Select this option if you want to crawl URLs that contain parameters that use the question mark (?) notation.
    Crawl SharePoint content as HTTP pages. Normally, SharePoint sites are crawled by using a special protocol. Select this option if you want SharePoint sites to be crawled as HTTP pages instead. When the content is crawled by using the HTTP
    protocol, item permissions are not stored.
    For the trailing slash issue, may I ask what your latest cumulative update is, or your SharePoint database build number? As I remember there was a fix in SP1 plus the June cumulative update regarding trailing slashes, but I am not quite sure whether the symptoms are the same in your environment.
    SharePoint may use the SharePoint connector for the regex matching, but an older regex engine may not be capable of filtering out the parameters, so a modification regarding the trailing slash may happen.
    Please let us know your feedback.
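    For reference, a hedged PowerShell sketch of creating a crawl rule that is treated as a full regular expression; the Search Service Application lookup and the URL pattern are assumptions for illustration, and the behaviour of regex crawl rules can vary by cumulative update level.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    # Exclusion rule evaluated as a regular expression; the pattern below is illustrative only.
    New-SPEnterpriseSearchCrawlRule -SearchApplication $ssa `
        -Path 'http://mysite\.local/.*\?.*' `
        -Type ExclusionRule `
        -IsAdvancedRegularExpression:$true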
    Regards,
    Aries
    Microsoft Online Community Support
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Crawl issue

    Hello everyone,
    Could you help me solve the issues below?:
    1 - I can't "Start Full Crawl" in CA of SharePoint 2013. The reason I can't start it myself is simple: the menu that controls these processes (Start, Stop, Pause or Delete) is disabled. For more details see the screenshot:
    http://social.technet.microsoft.com/Forums/getfile/527186
    2 - I have created a new site "A" in SharePoint 2013 and tried to search it for some documents, but it doesn't work at all. On my home page, search works fine as expected. I re-indexed site
    "A" after creating it and waited a day, but that didn't help either.
    My suspicion is that this may be caused by issue #1.
    Any idea?
    Thank you in advance.
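    If the Central Administration menu stays disabled, a full crawl can also be started from PowerShell. A minimal sketch, assuming the default content source name; adjust it if yours was renamed.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    # "Local SharePoint sites" is the out-of-the-box content source name; change it to match your farm.
    $cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"
    $cs.StartFullCrawl()   # StartIncrementalCrawl() and StopCrawl() are also available on the same object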

    Hello,
    The TechNet Sandbox forum is designed for users to try out the new forums functionality. Please be respectful of others, and do not expect replies to questions asked here.
    As it's off-topic here, I am moving the question to the
    Where is the forum for... forum.
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: Unlock PowerShell
    My Book:
    Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C406F75746C6F6F6B2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • Sharepoint 2013 site having NTML Crawl issue while crawling Sharepoint 2010 sites having FBA authentication

    Hi,
    We have a SharePoint 2013 search center site which is claims based with NTLM authentication set. We also have a SharePoint 2010 farm running whose sites are FBA authenticated.
    When crawling the FBA-authenticated SharePoint 2010 sites from the NTLM-authenticated SP 2013 search center, it does not give proper results.
    Can you please help me what can be done here?
    Thanks,
    Prashant

    Hi Prashant,
    According to your description, my understanding is that search cannot work correctly when crawling the SharePoint site which uses forms-based authentication.
    Per my knowledge, the crawl component requires NTLM to access content. At least one zone must be configured to use NTLM authentication. If NTLM authentication is not configured on the default zone, the crawl component can use a different
    zone that is configured to use NTLM authentication.
    However, if crawling a non-default zone of the web application, URLs of results will always be relative to the non-default zone that was crawled for queries from any zone, and this can cause unexpected or problematic behavior.
    I recommend making sure that the default zone of the SharePoint 2010 web application uses NTLM authentication.
    More references:
    http://technet.microsoft.com/en-us/library/dn535606(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/cc262350.aspx#planzone
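    As a quick check, a minimal PowerShell sketch that lists the authentication mode configured for each zone of the web application; run it on the SharePoint 2010 farm, and note that the URL is a placeholder.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Placeholder URL for the 2010 web application being crawled.
    $wa = Get-SPWebApplication "http://sp2010.contoso.com"
    foreach ($zone in $wa.IisSettings.Keys) {
        "{0}: {1}" -f $zone, $wa.IisSettings[$zone].AuthenticationMode
    }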
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support

  • Crawl issue - warning: "This URL is part of a host header SharePoint deployment and the search application is not configured to crawl individual host header sites. This will be crawled as a part of ....a start address"

    Hello all,
    I have a multi-tenant environment in SharePoint 2010 server. Different tenants are hosted by one web app.
    One content source has been created to crawl the whole shared environment (the only URL added in the content source settings is the URL of the web app).
    "Crawl everything under the hostname for each start address" was selected when creating this content source.
    Now I have created a new tenant hosted by the same web app. For this new tenant I want to have a different crawl schedule. Simple: I just create a new content source and add the host URL of this tenant, with the same settings as the other content source.
    After having started a full crawl I get 0 success and 1 warning : “This URL is part of a host header SharePoint deployment and the search application is not configured to crawl
    individual host header sites. This will be crawled as a part of the host header Web application if configured as a start address.”
    The first content source is crawling the freshly created new tenant just fine. Could you tell me where I'm wrong?
    Thanks in advance
    Regards
    Baldo

    Baldo,
    In the configuration that you described you now have 2 content sources set to crawl the same content. If you have a content source with a start address of http://servername/ then it is going to crawl everything past that address. If you are now changing
    the crawl schedule based on the individual site collections, you will need to move your start addresses further down the URL.
    For Example:
    1st Content Source:
    http://servername/ as a start address would now become
    http://servername/site1
    http://servername/site2
    2nd Content Source:
    http://servername/site3
    Also remember that all crawling must happen on the default zone. If you are trying to crawl a zone that the web application has been extended into, that will not work.
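    A hedged PowerShell sketch of creating such a per-site content source; the name and start address are placeholders, and in a host-header deployment the SharePointCrawlBehavior setting is also worth reviewing.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    # Placeholder name and start address for the tenant that needs its own schedule.
    New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
        -Name "Tenant - Site3" `
        -Type SharePoint `
        -StartAddresses "http://servername/site3" `
        -SharePointCrawlBehavior CrawlSites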
    ScottC - Microsoft SharePoint Support

  • 4.5WS SP2 Crawler Issue

    We have redeployed our portal to new hardware as well as upgraded the portal from 4.0SP1 to 4.5, and then to 4.5WSSP2. We have an issue with crawlers duplicating cards in the catalog. Because we deployed on new hardware, we have a new file server that stores the files; we used SQL scripts to change this URL in the Plum DB (ptobjectproperties & ptcardproperties).
    Anyone else seen this problem??

    If I recall correctly, GetSession was a function call that was in utility.asp (which is in the common folder), but it might have previously (prior to 4.5WS) resided in one of the gadget utility files. Regardless, GetSession was an integral function call for a lot of gadgets. Your gadget code is failing because it cannot locate this function, likely because it was once included by virtue of including another file. I sure miss the good ole days.
    Anyhow, to correct it, find the GetSession function in the PTUI and move it to somewhere that your code can find it.
    Phil Orion | [email protected] | www.orionsmith.com

  • Crawling issue

    Dear friends,
    We have EP 6.0 SP 16 and TREX 6.1 SP 16 installed on a Linux box. We are using our portal as a public website. When I try to do a web crawl of our portal, I just get one result. I also tried crawling the portal using third-party software and it also returns only one result. Do I need to enable some configuration on the portal so that crawling the portal itself will work? The crawler does work fine with other web sites.
    Any help is appreciated,
    Thanks,
    Mandar

    Hi Mandar
    How is the behaviour when you use the light framework page instead? Check this weblog:
    Coming Soon: the External Facing Portal (EP6 SPS14)
    However, you will have to forego some features if you are looking at this as an option.
    Technically it is possible to build an indexable website using EP (e.g. SDN), but this involves a lot of development and moving away from a lot of standard features.
    Regards
    Pran

  • Moss 2007: Full Crawl TraceLog shows errors

    When I ran a full crawl of my MOSS 2007 farm, the trace log says "... value is not available" in every place, many many times.
    I found the errors logged at the top site, pages, subsites, lists, documents and a file share library (by DocAve).
    The errors are like this:
    DocFormat value is not available for url: ...
    CLSID value is not available for url: ...
    RedirectedURL value is not available for url: ...
    BindToStream is not available for url: ...
    FileName value is not available for url: ...
    CSTS3Accessor::GetItemSecurityDescriptor: Return error to caller, hr=80041211 ...
    There are no more chunks, ...
    Does anyone know what these errors mean, and whether they are critical errors or not?
    I tried to search for these errors but I couldn't find any information that would help me.

    I think you don't have My Site configured, as STS3 stands for My Site.
    Here is the exact issue:
    http://webcache.googleusercontent.com/search?q=cache:VMcltdpjfm0J:share-point.blogspot.com/2008/11/moss2007-crawl-issue-after-installation.html+&cd=2&hl=en&ct=clnk&gl=in
    Perform the steps mentioned in the KB article below to fix this issue:
    http://support.microsoft.com/kb/896861
    If this helped you resolve your issue, please mark it Answered. You can reach me through http://freeit-support.com/

  • Anyway to transfer maintain or transfer iWeb Link

    OK, here's my problem.  My wife runs a Junior Cotillion, www.californiajuniorcotillion.com and www.nevadajuniorcotillion.com, and
    we hosted them on MobileMe.  We also had A-Plus redirect the MM addresses such as web.me.com/grahamkent/CJC/ back onto
    www.californiajuniorcotillion.com. But they messed up with www.nevadajuniorcotillion.com (and we didn't catch it) and it points to
    web.me.com/grahamkent/NJC/. OK, the Googlebots have indexed it to that MM address, and now that it's closed down, the DANG
    Google doesn't see www.nevadajuniorcotillion.com (but it's a valid address if you're old school and just type in the address), and
    now we have a broken link.  From the A-Plus control panel I sent notifications to all web search engines that www.nevadajuniorcotillion.com
    is now up and working, but HOW LONG DOES THAT TAKE? Weeks? And I suspect there's no way to forward from my old
    dead MM link, i.e., web.me.com/grahamkent/NJC/.
    Anyhow, that's my buzz kill right now... any ways or tricks to get Da Google to see www.nevadajuniorcotillion.com more quickly?
    www.californiajuniorcotillion.com is just fine; it was linked properly from the get go!
    Thanks!!!!!

    OK ... I went through all of the steps I think, including putting the .html file on the home website, through
    Google, and checked that it all seemed kosher. It is now 3 weeks since I've done that, and alas the Google crawler has not figured it out, and it will start hurting my wife's business if it goes on much longer.  I kinda thought about 3 weeks was the max? Any other way to check to ensure that it's not a Google crawl issue (i.e., Google
    hasn't crawled it yet)...
    MANY THANKS,
    Graham

  • Random disconnects on Infinity using an HH3 type r...

    My apologies in advance if this topic has been covered before, but a search of the forums didn't come up with anything that matched.
    Is anyone seeing odd problems with HH3 Type A router disconnects/hangs?
    When I got Infinity last year, BT supplied a HH3 type B unit, and I was affected by the 'slowing to a crawl' issue that's been covered in the forums here in the past. After it happened so many times I contacted BT and they replaced the HH3 type B with an HH3 type A, which cured the slowing issue. All has been fine until about ~6 weeks ago, and now I'm seeing a problem on the type A router.
    Typically, I start to notice lengthy DNS resolution issues, then the blue broadband light turns orange on the HH3, and I lose Internet access. If I connect to the HH3 with a web browser I see a 'Connecting' message next to a spinning blue circle. Any attempt to disconnect and reconnect via the browser does nothing (as in the web browser session hangs), so I have to power cycle the HH3, and when it's re-booted everything is back to normal. I haven't touched the white OpenReach box at all since I installed the HH3 type A. Whenever I get one of these hangs the OpenReach box lights look normal.
    This HH3 behaviour seems different to the odd short disconnect and automatic re-connect that I've seen once or twice during the last 11 or so months, because once one of these hangs happens the HH3 never seems to reconnect on its own. During any outage my phone line seems fine, although of course that's no guarantee that everything's OK.
    I do have a USB flash drive plugged into the HH3, but whenever one of these router hangs occurs it wasn't being accessed, so I don't think that that's relevant. From what I can tell the HH3 has the latest firmware: 4.7.5.1.83.8.57.1.3. My throughput seems about right for an Infinity1 connection (36 down, 8 up), so I have no real idea what this might be. Like many customers, I'm loathe to contact BT Broadband Support unless I really have to.
    So, with the above in mind, could the collective (ah, Borg) offer me some advice please?
    -- Is it an HH3 issue?
    -- Is it worth replacing the HH3 with something else? A couple of my colleagues run their Infinity connections that way. I wouldn't want to spend the earth on a replacement router, but if that is an idea, what should I go for? How easy are third party devices to set-up with Infinity?
    -- Is it a line issue? TBH, I don't think so. Every time that it's happened thus far power-cycling the HH3 has fixed things.
    Don't get me wrong -- it's not a big problem generally, but if you're in the middle of a big download, streaming stuff to watch on TV or end-up with your wife bending your ear because things don't work, then it does get to be a bit of an issue.
    Any advice or suggestions would be most gratefully received and appreciated.
    Cheers
    Clem

    Hihi,
    I would completely reset the HH, then re-enter the details and see if the issue still arises. Since you haven't touched your white modem, maybe turning it off and on could help (then never touch it again :-)  ).
    If it happens again after that it might be worth a call to support as they can tell you if they can see any issues on your line. Then you will have more weight when you ask for a new HH when they say its fine :-)
    If the HH meets your needs then try for a replacement. There was a number in the HH box for replacements. You might be annoyed if you bought a new router only for it to have the same issues :-/ So a pain, yes, but for the greater good.
    Good luck,
    Shaun (1 of 4)

  • Search User Profiles without MySites

    We upgraded from 2007 to 2010 last year. In 2010 we did not set up a MySite web app or top level site collection. We did set up the User Profile Service and the profiles are syncing data from AD correctly. I have two issues that I would like help resolving.
    First, I would like to set up a search where we can search for people and get their profile information. I want to use this like a company directory. I don't want to use the site collection user list, I want the profile information instead because the site
    collection user list does not contain all of our users. If I don't have MySite, what do I set the crawl to?  Where are the profiles stored?
    Second, in a site collection, when we click on the name of the user under "created by" or "modified by", we get a page not found.  I'm not sure if that is supposed to link to the profile (which I think is person.aspx) or the site
    collection user (userdisp.aspx). 
    Are both of these issues caused by not having a MySite application set up?  Is there anyway to have and use profiles without having them in the MySite app and site collection?  I would like users to be able to modify their profile and we'd like
    to search them.  I'd also like those links under created/modified by to link to something.  I've read about 50 articles and books and can't find a definitive answer on needing/not needing MySite. Any help is much appreciated.
    Milissa Hartwell

    I was able to resolve my crawl issue. I set up a separate content source (not sure if a separate one was needed, but that's how I did it). The crawl was set for sps3://[myservername].  Once the crawl was complete, I was able to search for people in my
    search center.
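    For reference, a minimal PowerShell sketch of how such a people-profile content source can be created; the content source name and host name are placeholders, and sps3:// simply tells the crawler to use the people-profile protocol handler.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    $ssa = Get-SPEnterpriseSearchServiceApplication
    # Placeholder name; sps3://myservername points the crawler at the User Profile Service host.
    New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
        -Name "People" `
        -Type SharePoint `
        -StartAddresses "sps3://myservername"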
    My userdisp and person.aspx are still not coming up.  When I click on a user in the site collection, I'm expecting to see their info in the userdisp.aspx page but I still get a page not found.  And after I search for people, if I hover over
    their name, the URL shows
    http://[myserver]/my/person.aspx?accountname=[accountname] but I also get a page not found.
    Does anyone know how to get userdisp and/or person.aspx to render?  I looked in my 14 hive on the server and the userdisp.aspx page does exist so I'm not sure why it won't come up at least.
    Also still wondering if users need a mySite to edit their Profile.
    Milissa Hartwell

  • Content editor webpart error "cannot retrieve properties at this time"

    Hi,
    Every time I try to edit content within the Content Editor Web Part I receive the error message "cannot retrieve properties at this time".
    I can edit the content within the web part when I use HTTP. The error only seems to occur when using HTTPS and SSL.
    I've looked at the following web site and KB articles to implement a workaround, but I still get the same error.
    http://support.microsoft.com/default.aspx/kb/912060
    http://www.sharepointblogs.com/agoodwin/archive/2007/09/03/content-editor-webpart-error-quot-cannot-retrieve-properties-at-this-time-quot.aspx
    http://support.microsoft.com/kb/830342
    Can anyone suggest anything else?

    My symptoms
    1.  CEWP & Sharepoint Designer could not retrieve data.
    2.  Event id 5000 "ulsexception12,p1 w3wp.exe ..."
    Problem
    1. Upgraded Small Farm WSS and SPS 2007 to SP2.
    2. Unhandled exceptions and access denied errors in the ULS logs, with no specifics to run down.
    3. Crawl issues and had to reset the crawl index.
    Solution
    1. The ASPNET user account had certain rights stripped by GPO. Check your permissions against the TechNet article "IIS and Built-in Accounts 6.0", or use a much easier tool, Microsoft's "Authentication & Access Control Diagnostics 1.0".
    2. Ran "stsadm -o sync -listolddatabases 0" to check for unsynced databases, then ran "stsadm -o sync -deleteolddatabases 0" to clear those databases.
    Haven't seen any further issues; this was a tough nut to crack. Hope this helps someone else. jellywood
