4.5WS SP2 Crawler Issue

We have redeployed our portal to new hardware and upgraded it from 4.0 SP1 to 4.5, then to 4.5WS SP2. We have an issue with crawlers duplicating cards in the catalog. Because we deployed on new hardware, we have a new file server that stores the files, and we used SQL scripts to change this URL in Plumdb (ptobjectproperties & ptcardproperties).
Has anyone else seen this problem?
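For illustration, here is a minimal sketch of the kind of URL rewrite described above, run through Invoke-Sqlcmd from PowerShell. The server, database, and column names are placeholders (hypothetical); check the real PTCARDPROPERTIES / PTOBJECTPROPERTIES schema in your Plumtree database, and back it up, before running anything like this.

# Hypothetical sketch - verify the table/column names against your Plumdb schema first
$old = "http://oldfileserver"
$new = "http://newfileserver"

# 'Value' is a placeholder column name, not a confirmed Plumtree column
$query = "UPDATE PTCARDPROPERTIES SET Value = REPLACE(Value, '$old', '$new') WHERE Value LIKE '$old%'"
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database "Plumdb" -Query $query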

If I recall correctly, GetSession was a function that lived in utility.asp (in the common folder), but it might previously (prior to 4.5WS) have resided in one of the gadget utility files. Regardless, GetSession was an integral function call for a lot of gadgets. Your gadget code is failing because it cannot locate this function, most likely because it used to be pulled in by virtue of including another file. I sure miss the good ole days.
Anyhow, to correct this, find the GetSession function in the PTUI and move it somewhere your code can find it.
Phil Orion | [email protected] | www.orionsmith.com

Similar Messages

  • Best Practice to troubleshoot a Crawler Issue after SQL Server restarts

    Hi Everyone
    I am after some general advice or a suggested place to start when troubleshooting crawler issues after an unplanned, out-of-sequence server restart.
    Specifically, the SQL database in the SharePoint 2010 farm. Better yet, is there a standard practice for resolving such issues?
    So far the articles I have found suggest options ranging from reviewing the crawl logs, to creating a new crawl component, right through to using Fiddler to monitor the crawler.
    Are these sufficient places to start and methodologies to follow, and what else should be considered?
    Any advice greatly appreciated.
    Thanks,
    Mike

    Well, as I said before, there are lots of different potential issues & resolutions for crawlers.  It really depends on the issue.  I would say that the base troubleshooting steps start the same no matter which service/feature you are looking
    at.  So, I'll try to keep this sort of generic, but beyond finding the details of the problem, the SOP or process will vary greatly based on what the error is.  I hope this helps, and sorry if it's not specific enough.
    1 - Check the ULS logs.
    2 - Check the Windows application logs.
    3 - Verify related services are running (Get-SPServiceInstance), possibly stop/start them to reprovision the instance on the affected server (see the PowerShell sketch below).
    4 - Clear the config cache (this alone will clear about 30% of your basic problems).
    5 - Verify disk space & resource consumption on the affected server (& SQL; SQL always has the potential to be the true "affected" server).
    6 - iisreset.
    7 - Verify connectivity between all servers in the farm and SP.
    8 - Verify required features are activated.
    9 - Check if any changes were made to the environment recently (new hardware, updates to OS or apps, updates to GPOs, new solutions, etc.).
    10 - Check if the issue is reproducible in another environment (only reliable if this is a similar environment, i.e. same patch level test farm or DR farm). See if it occurs in other site collections within the web app, different web apps, different servers/browsers, etc. Basically, just try to nail down the scope of the problem.
    There is a whole slew of things you could check, from verifying certificates & perms, to rerunning psconfig, to checking registry keys; again, it depends. Hopefully this can get you started though. In the end, ULS is where all the real info on the issue is going to be, so make sure you check there. Don't go in with tunnel vision either: if you see other errors in ULS, check them out; they may or may not be related. SharePoint is an intricate product with way more moving parts than most systems. Fix the little quick ones that you know you can handle; this will help keep the farm clean and healthy, as well as crossing them off the list of potential suspects for your root cause.
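    To illustrate step 3, here is a minimal PowerShell sketch, assuming the SharePoint snap-in is available; the TypeName filter is only an example.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # List search-related service instances across the farm and start any that
    # are not Online, which provisions the instance on that server again.
    Get-SPServiceInstance | Where-Object { $_.TypeName -like "*Search*" } | ForEach-Object {
        "{0} on {1}: {2}" -f $_.TypeName, $_.Server.Name, $_.Status
        if ($_.Status -ne "Online") {
            Start-SPServiceInstance -Identity $_ | Out-Null
        }
    }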
    Christopher Webb | MCM: SharePoint 2010 | MCSM: SharePoint Charter | MCT | http://christophermichaelwebb.com

  • Crawling issues with Ajax Module

    Hello all
    I have a crawling issue on the homepage of my website: robots don't seem to get past the homepage (for crawling the links) because the data is pulled in through an Ajax module. I am trying to figure out a way to make the links crawlable; my approach would be to show an alternate view when JavaScript is disabled. I just need validation that the code below would work:
    <!-- start of global-content -->
    <div id="global-content"><!--start of ajax content--->
      <h1>Tourisme Laval - Pleins d'affaires à faire</h1>
      <ul>
        <li><strong>Hotel Laval</strong>: Laval vous propose de nombreux hebergements tout au long de l’année.</li>
        <li><strong>Odysséo de Cavalia - À partir du 14 mai 2013</strong>: Après une année de tournée nord-américaine triomphale, Odysséo revient au Québec, à Laval!</li>
        <li><strong>Mondial Loto-Québec de Laval - 19 juin au 7 juillet 2013</strong>: Le Mondial Loto-Québec de Laval est sans contredit le festival de l'été à Laval avec des centaines de spectacles de musique pour tous les goûts.</li>
      </ul>
    <!-- end of ajax content -->
    </div><!-- end of global-content -->
    By placing alternate HTML content here (which would be replaced when the AJAX content loads), would that be enough for the links to get crawled?
    Thanks for your help!
    Helene
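    One simple way to sanity-check this, as a sketch: Invoke-WebRequest in PowerShell fetches the raw markup without executing any JavaScript, which roughly approximates what a non-JavaScript crawler receives. If the fallback links show up in the response, they are at least present in the crawlable HTML (the URL below is a placeholder).

    # Fetch the page without running JavaScript and list the anchors present in
    # the static markup - links injected later by AJAX will not appear here.
    $resp = Invoke-WebRequest -Uri "http://www.example.com/" -UseBasicParsing
    $resp.Links | Select-Object -ExpandProperty href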

    Thanks for your reply. I'm not so sure about the <noscript> tag, because I have read posts about reasons not to use it: http://javascript.about.com/od/reference/a/noscriptnomore.htm. The main reason is:
    The noscript tag is a block level element and therefore can only be used to display entire blocks of content when JavaScript is disabled. It cannot be used inline.
    In other words, from this explanation, I would not be able to add HTML tags inline within the <noscript> element.

  • Crystal Reports Visual Studio 2010 SP2 Fixed issues

    Hi All,
    Here is the list of issues fixed in Crystal Reports for Visual Studio 2010 SP2:
    http://www.crystaladvice.com/crystalreports/crystal-reports-2010-sp2
    The following issues are fixed:
    1566763 - CRVS2010 WPF Viewer error; "System.NullReferenceException was unhandled" PageControl.OnMouseMove
    1540637 - Error: External component has thrown an exception. Launching the Database Expert in the embedded Crystal Reports designer in Visual Studio
    1544675 - Error; 'Object reference not set to an instance of an object' when using the CR WPF viewer in VS2010
    1578823 - CRVS2010; "Load Report Failed" error when Report is open in any full version of the CR Designer
    1638191 - Using RTL language (Arabic) the CR viewer Group Tree Panel does not display RTL
    1631283 - Error; 'Failed to load database information' when reporting off of file system data
    1553469 - How to enable Database logging in Crystal Reports for Visual Studio 2010
    1299185 - Error: Operation not yet implemented or Failed to Export, when exporting a Crystal Reports to Text format
    1451960 - Null or empty values are not surrounded with delimiter when exported to CSV format
    1659185 - The special Crystal Reports field, 'File Name and Path' shows temp path and temp name when viewing a report
    1452648 - Dynamic Cascading Parameter prompts two times when using the Crystal Reports in VS .NET
    1580338 - When refreshing a report that contains a linked subreport, the report takes a long time to execute
    1659111 - GCHandle left in memory for each open and close of a Crystal Report in VS2010 application
    1356672 - Crystal Reports special field "File Path and Name" displays incorrect information in a Visual Studio .NET application
    1593658 - Impersonation of database Log On credentials failure in report generation when set at runtime in a .NET application
    1661239 - Summary fields are converted to String fields
    1661276 - Using the Crystal Reports SetTableLocation method in VS .NET causes long report processing delays
    1661200 - Not able to copy text from report objects using the Crystal Reports WinForm viewer for VS .NET
    1631722 - Date function in a Selection Formula is removed when running a report in a VS .NET application
    1525822 - Exception "Object reference not set to an instance of an object." thrown when clicking on Parameter Panel button in Crystal Report .NET Windows Form Viewer
    1603082 - Cross-tab background colors not retained when exporting a report to PDF format
    1603154 - Shared variable display the incorrect value in Crystal Reports
    1427747 - Why does a CR .NET SDK web app have problems running on IIS 7 in integrated pipeline mode?
    1545536 - Alignment set to Justify causes broken underline
    Source: Resolved Issues in Service Pack 2 for Crystal Reports for Visual Studio 2010
    With kind regards,
    Pieter Jong
    Crystal Advice
    http://www.crystaladvice.com

    Many thanks for the link Pieter.
    I recently created a wiki that lists all of the fixes, their tracking numbers, associated Kbase numbers, Kbase titles and links to the kbases:
    http://wiki.sdn.sap.com/wiki/x/tgK3Dw
    It's 90% complete. I think I have a few Kbases to do to complete it.
    Now that I think about it, I'll also add the link to the sticky thread "SP 2 for Crystal Reports for Visual Studio 2010 released!" re. the SP2 release.
    - Ludek

  • App-V 5.0 SP2 RunVirtual Issue

    Hello,
    In App-V 5.0 SP2, when using the RunVirtual feature, I added iexplore.exe as a registry key entry with the corresponding package and version IDs for a sequenced version of Silverlight, but when launching Internet Explorer it does not open in the virtual environment. However, if I launch iexplore.exe from the command line and specify the /appvve:<GUID>_<GUID> switch, iexplore.exe does open within the virtual environment and Silverlight pages load correctly. I was wondering if anyone has experienced this issue before? Thanks.
    Regards.
    http://www.b4z.co.uk
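    For reference, a minimal sketch of the RunVirtual registration being described; the GUIDs are placeholders for the PackageId and VersionId of the sequenced Silverlight package.

    # Register iexplore.exe under the App-V 5 RunVirtual key so that IE is
    # launched inside the specified package's virtual environment.
    $packageId = "00000000-0000-0000-0000-000000000000"   # placeholder PackageId
    $versionId = "11111111-1111-1111-1111-111111111111"   # placeholder VersionId

    $key = "HKLM:\SOFTWARE\Microsoft\AppV\Client\RunVirtual\iexplore.exe"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name "(Default)" -Value ("{0}_{1}" -f $packageId, $versionId)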

    That looks OK to me, not sure why it wouldn't be working.
    Try a trace with ProcMon when launching IE with the RunVirtual key enabled. That may indicate whether the key is being read or not.
    Please remember to click "Mark as Answer" or "Vote as Helpful" on the post that answers your question (or click "Unmark as Answer" if a marked post does not actually
    answer your question). This can be beneficial to other community members reading the thread.
    This forum post is my own opinion and does not necessarily reflect the opinion or view of my employer, Microsoft, its employees, or other MVPs.
    Twitter: @stealthpuppy | Blog: stealthpuppy.com | The Definitive Guide to Delivering Microsoft Office with App-V

  • Crawl Issues?

    About 1 week ago I noticed that our incremental crawls were taking an extremely long time to complete. When they do complete, there are usually a ton of errors. I am seeing the following errors in the Event Viewer:
    SharePoint Server Search - 2580
    SharePoint Server Search - 2581
    I have 142k errors in SharePoint. The error message is: "Processing this item failed because of an unknown error when trying to parse its contents."
    I have three web applications: Portal (intranet), People (MySites), and Publish (publishing). I've separated the content sources according to the web application. There seems to be an issue with the crawl?
    Has anyone else experienced this issue or have any suggestions?

    What do you see in the ULS log regarding this error?
    During the same time, I see the following errors:
    Microsoft.Ceres.ContentEngine.SubmitterComponent.ContentSubmissionDecorator : CreateSession() failed with unexpected exception System.ServiceModel.FaultException`1[System.String]: The creator of this fault did not specify a Reason. (Fault Detail is equal to Specified flow: Microsoft.CrawlerFlow could not be started. Unable to start a flow instance on any of the available nodes.).
    <CompanyName>Session::Init: m_CSSConnection.Service.CreateSession threw exception (System.ServiceModel.CommunicationException), message = Exception type is System.ServiceModel.FaultException`1[System.String] message = The creator of this fault did not specify a Reason.;, CSSNode = net.tcp://<computername>/5DC9BD/ContentProcessingComponent1/ContentSubmissionServices/content, Flow = Microsoft.CrawlerFlow
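    As a starting point for digging into those entries, a small PowerShell sketch run on the affected search server (assumes the SharePoint snap-in is loaded; the time window and filters are only illustrative):

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Pull recent ULS entries that mention the crawler flow or crawl categories
    Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) |
        Where-Object { $_.Message -like "*CrawlerFlow*" -or $_.Category -like "*Crawl*" } |
        Select-Object Timestamp, Area, Category, Level, Message |
        Format-Table -AutoSize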

  • App-V 5 SP2 install issue with Intel NIC Driver 11.16.96.0

    When deploying App-V 5 SP2 using SCCM 2007, it installs with no problem on the machine. However, when the user reboots after the install, the machine blue screens. The user then reboots again and the machine comes up to the desktop; however, the App-V service is not running even though it is still set to Automatic.
    Looking at the blue screen dump file, the common theme looks to be a conflict with an Intel NIC driver, version 11.16.96.0. Machines running an older version of the driver do not experience the blue screen after reboot, and the App-V service starts with no issues.
    Has anyone seen this issue before?
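    As a quick way to confirm which machines are on the suspect driver version, a small WMI sketch (the version string matched is the one from the post above):

    # List Intel network adapter drivers and flag the implicated version
    Get-WmiObject Win32_PnPSignedDriver |
        Where-Object { $_.DeviceClass -eq "NET" -and $_.DeviceName -like "*Intel*" } |
        Select-Object DeviceName, DriverVersion, @{ n = "Suspect"; e = { $_.DriverVersion -eq "11.16.96.0" } }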

    Version 19 appears to be the latest: https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=18713&lang=eng&ProdId=3025
    Have you tested with this version instead?
    Please remember to click "Mark as Answer" or "Vote as Helpful" on the post that answers your question (or click "Unmark as Answer" if a marked post does not actually
    answer your question). This can be beneficial to other community members reading the thread.
    This forum post is my own opinion and does not necessarily reflect the opinion or view of my employer, Microsoft, its employees, or other MVPs.
    Twitter: @stealthpuppy | Blog: stealthpuppy.com | The Definitive Guide to Delivering Microsoft Office with App-V

  • Major crawler issues - removal of trailing slashes from URLs and (seemingly) poor regex implementation

    I've been suffering from two issues when setting up a crawl of an intranet website hosted on a proprietary web CMS using SharePoint 2010 Search. The issues are:
    1: The crawler appears to remove trailing slashes from the end of URLs.
    In other words, it turns http://mysite.local/path/to/page/ into http://mysite.local/path/to/page. The CMS can cope with this as it automatically forwards requests for http://mysite.local/path/to/page to http://mysite.local/path/to/page/ -
    but every time it does this it generates a warning in the crawl - one of the "URL was permanently moved" warnings. The URL hasn't been moved, though, the crawler has just failed to record it properly. I've seen a few posts about this in various places,
    all of which seem to end without resolution, which is particularly disappointing given that URLs in this format (with a trailing slash) are pretty common. (Microsoft's own main website has a few in links on its homepage - http://www.microsoft.com/en-gb/mobile/
    for instance).
    The upshot of this is that a crawl of the site, which has about 50,000 pages, is generating upwards of 250,000 warnings in the index, all apparently for no reason other than the crawler changes the address of all the pages it crawls? Is there a fix for this?
    2: The Regex implementation for crawl rules does not allow you to escape a question mark
    ... despite what it says
    here: I've tried and tested escaping a question mark in a URL pattern by surrounding it in square brackets in the regex, i.e. [?], but regardless of doing so, any URL with a question mark in it just falls right through the rule. As soon as you remove the 'escaped' (i.e. seemingly not actually escaped at all) question mark from the rule, and from the test URL pattern, the rule catches, so it's definitely not escaping it properly using the square brackets. The more typical regex escape pattern (a backslash before the character in question) doesn't seem to work, either. Plus neither the official documentation on regex patterns I've been able to find, nor the book I've got about SP2010 search, mention escaping characters in SP2010 crawl rule regexes at all. Could it be that
    MS have released a regex implementation for matching URL patterns that doesn't account for the fact that ? is a special character in both regex and in URLs?
    Now I could just be missing something obvious and would be delighted to be made to look stupid by someone giving me an easy answer that I've missed, but after banging my head against this for a couple of days, I really am coming to the conclusion that Microsoft
    have released a regex implementation for a crawler that doesn't work with URL patterns that contain a question mark. If this is indeed the case, then that's pretty bad, isn't it? And we ought to expect a patch of some sort? I believe MS are supporting SP2010
    until 2020? (I'd imagine these issues are fixed in 2013 Search, but my client won't be upgrading to that for at least another year or two, if at all).
    Both these issues mean that the crawl of my client's website is taking much longer, and generating much more data, than necessary. (I haven't actually managed to get to the end of a full crawl yet because of it... I left one running overnight and I just
    hope it has got to the end. Plus the size of the crawl db was 11GB and counting, most of that data being pointless warnings that the crawler appeared to be generating itself because it wasn't recording URLs properly). This generation of pointless mess
    is also going to make the index MUCH harder to maintain. 
    I'm more familiar with maintaining crawls in Google systems which have much better regex implementations - indeed (again, after two days of banging my head against this) I'd almost think that the regex implementation in 2010 search crawl rules was cobbled
    together at the last minute just because the Google Search Appliance has one. If so (and if I genuinely haven't missed something obvious - which I really hope I have) I'd say it wasn't worth Microsoft bothering because the one they have released appears to
    be entirely unfit for purpose?
    I'm sure I'm not the first person to struggle with these problems and I hope there's an answer out there somewhere (that isn't just "upgrade to 2013")?
    I should also point out that my client is an organisation with over 3000 staff, so heaven knows how much they are paying MS for their Enterprise Agreement. Plus I pay MS over a grand a year in MSDN sub fees etc, so (unless I'm just being a numpty) I would
    expect a higher standard of product than the one I'm having to wrestle with at the moment.
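    For anyone who wants to experiment with this outside the UI, crawl rules can also be created from PowerShell, which makes it quicker to test different patterns. A minimal sketch for the SharePoint 2010 Management Shell; the pattern is only an illustration of escaping the question mark, not a confirmed workaround for the behaviour described above.

    $ssa = Get-SPEnterpriseSearchServiceApplication

    # -IsAdvancedRegularExpression treats the path as a full regular expression
    New-SPEnterpriseSearchCrawlRule -SearchApplication $ssa `
        -Path "http://mysite\.local/.*\?.*" `
        -Type ExclusionRule `
        -IsAdvancedRegularExpression:$true

    # List the rules to confirm how the pattern was stored
    Get-SPEnterpriseSearchCrawlRule -SearchApplication $ssa | Select-Object Path, Type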

    Hi David,
    As far as I know, in the SharePoint 2010 crawl there is a rule to include or exclude URLs that use the '?' character. May I ask whether you have implemented that rule?
    In the Crawl Configuration section, select one of the following options:
    Exclude all items in this path. Select this option if you want to exclude all items in the specified path from crawls. If you select this option, you can refine the exclusion by selecting the following:
    Exclude complex URLs (URLs that contain question marks (?)). Select this option if you want to exclude URLs that contain parameters that use the question mark (?) notation.
    Include all items in this path. Select this option if you want all items in the path to be crawled. If you select this option, you can further refine the inclusion by selecting any combination of the following:
    Follow links on the URL without crawling the URL itself. Select this option if you want to crawl links contained within the URL, but not the starting URL itself.
    Crawl complex URLs (URLs that contain a question mark (?)). Select this option if you want to crawl URLs that contain parameters that use the question mark (?) notation.
    Crawl SharePoint content as HTTP pages. Normally, SharePoint sites are crawled by using a special protocol. Select this option if you want SharePoint sites to be crawled as HTTP pages instead. When the content is crawled by using the HTTP
    protocol, item permissions are not stored.
    For the trailing slash issue, may I ask for your latest cumulative update level or your SharePoint database build number? As I remember, there was a fix in SP1 + the June cumulative update regarding trailing slashes, but I'm not quite sure whether the symptoms are the same as in your environment.
    SharePoint may use the SharePoint connector for the regex handling, but an older regex engine may not be capable of filtering out the parameters, so a modification of the trailing slash may happen.
    Please let us know your feedback.
    Regards,
    Aries
    Microsoft Online Community Support
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Crawl issue

    Hello everyone,
    Could you help me solve the issues below?
    1 - I can't "Start Full Crawl" in Central Administration of SharePoint 2013. The reason I can't start it myself is simple: the menu that controls the Start, Stop, Pause, and Delete actions is disabled. For more details see the screenshot:
    http://social.technet.microsoft.com/Forums/getfile/527186
    2 - I have created a new site "A" in SharePoint 2013 and tried to search for some documents on it, but it doesn't work at all. On my home page, search works fine as expected. I re-indexed site "A" after creating it and waited a day, but that didn't help either.
    My suspicion is that this may be caused by issue #1.
    Any ideas?
    Thank you in advance.
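    While issue #1 is being investigated, a full crawl can also be started from PowerShell. A rough sketch of that workaround (the content source name below is the default one and may differ in your farm):

    $ssa = Get-SPEnterpriseSearchServiceApplication
    $cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"

    # Only start a full crawl if nothing is currently running for this source
    if ($cs.CrawlStatus -eq "Idle") { $cs.StartFullCrawl() }
    $cs.CrawlStatus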

    Hello,
    The TechNet Sandbox forum is designed for users to try out the new forums functionality. Please be respectful of others, and do not expect replies to questions asked here.
    As it's off-topic here, I am moving the question to the
    Where is the forum for... forum.
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: Unlock PowerShell
    My Book:
    Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C406F75746C6F6F6B2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • BES 5.0.1 & GW SP2 HP1 issues?

    I keep hearing that there are "issues" relating to the newer BES and the SP2 HP1.
    Not wanting to jinx myself, but I'm running such a setup and at this time I don't believe I'm having any major issues. What "issues" are there? My users are a little slow at times in reporting problems!

    This also happens on our BES 5.0.1 box.
    On 11/29/2010 at 11:34 AM, Al Hidalgo<[email protected]> wrote:
    The main issue we have is that appointments deleted in the GW client do not delete on our BlackBerry devices until the trash is emptied.
    test that....
    We are on GW 8SP2 HP1 and BES 4.1.7.
    Thanks,
    Al
    On 11/29/2010 at 8:10 AM, Michael Rae<[email protected]> wrote:
    I keep hearing that there are "issues" relating to the newer BES and the SP2 HP1.
    Not wanting to jinx myself, but I'm running such a setup and at this time I don't believe I'm having any major issues. What "issues" are there? My users are a little slow at times in reporting problems!

  • SharePoint 2013 site having NTLM crawl issue while crawling SharePoint 2010 sites with FBA authentication

    Hi,
    We have a SharePoint 2013 search center site which is claims based with NTLM authentication set. We also have a SharePoint 2010 farm running whose sites are FBA authenticated.
    When crawling the FBA-authenticated SharePoint 2010 sites from the SP 2013 search center (NTLM auth), the crawl does not give proper results.
    Can you please tell me what can be done here?
    Thanks,
    Prashant

    Hi Prashant,
    According to your description, my understanding is that search cannot work correctly when crawling a SharePoint site that uses forms-based authentication.
    Per my knowledge, the crawl component requires NTLM to access content. At least one zone must be configured to use NTLM authentication. If NTLM authentication is not configured on the default zone, the crawl component can use a different
    zone that is configured to use NTLM authentication.
    However, if crawling a non-default zone of the web application, URLs of results will always be relative to the non-default zone that was crawled for queries from any zone, and this can cause unexpected or problematic behavior.
    I recommend making sure that the default zone of the SharePoint 2010 web application uses NTLM authentication.
    More references:
    http://technet.microsoft.com/en-us/library/dn535606(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/cc262350.aspx#planzone
    Best regards.
    Thanks
    Victoria Xia
    TechNet Community Support
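    As a rough sketch of the first option Victoria mentions (providing an NTLM zone for the crawler when the default zone has to stay FBA), the SharePoint 2010 web application can be extended into another zone that uses Windows (NTLM) claims. The URLs and zone below are placeholders, and note the caveat above about crawling a non-default zone; making the default zone NTLM remains the preferred route.

    # Run on the SharePoint 2010 farm: extend the FBA web application into an
    # Intranet zone that uses Windows claims (NTLM) for the crawler to access.
    $webApp = Get-SPWebApplication "http://fba.contoso.com"
    $ntlm   = New-SPAuthenticationProvider   # defaults to Windows claims with NTLM

    New-SPWebApplicationExtension -Identity $webApp -Name "Crawl" `
        -Zone "Intranet" -URL "http://crawl.contoso.com" `
        -AuthenticationProvider $ntlm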

  • Crawler issues

    I installed the Plumtree Content Service for Windows Files (latest SP1 version).
    When I imported the packages and set the crawler to run on a Windows folder, the crawler runs with the error "unrecoverable error when bulk importing cards".
    But it does import cards into the system. I can see the cards/properties/files in edit mode, but when I go into browse mode I do not see any files. The crawler is set to approve files automatically, and I also manually approved the files.
    Not sure where the problem is?

    Hi!
    It seems as though your Search service is not indexing those files, but they exist in the portal DB.
    Is that "unrecoverable error when bulk importing cards" error the only message you receive in the job log?
    Thanks

  • Exchange server 2010 sp2 slow issue

    My Exchange Server 2010 SP2 performance is very slow in the morning (9 to 10 AM) and evening (4 to 5 PM) only; the rest of the time it works fine.

    Hello,
    Forums for Exchange versions prior to Exchange 2013 are over here:
    http://social.technet.microsoft.com/Forums/exchange/en-US/home?category=exchangeserverlegacy&filter=alltypes&sort=lastpostdesc
    Karl
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or Mark As Answer.
    My Blog: Unlock PowerShell
    My Book: Windows PowerShell 2.0 Bible
    My E-mail: -join ('6F6C646B61726C40686F746D61696C2E636F6D'-split'(?<=\G.{2})'|%{if($_){[char][int]"0x$_"}})

  • Crawl issue - warning: "This URL is part of a host header SharePoint deployment and the search application is not configured to crawl individual host header sites. This will be crawled as a part of ....a start address"

    Hello all,
    I have a multi-tenant environment on SharePoint 2010 Server. Different tenants are hosted by one web app.
    One content source has been created to crawl the whole shared environment (the only URL added in the content source settings is the URL of the web app).
    "Crawl everything under the hostname for each start address" was selected when creating this content source.
    Now I have created a new tenant hosted by the same web app. For this new tenant I want to have a different crawl schedule. Simple: I just created a new content source and added the host URL of this tenant, with the same settings as the other content source.
    After starting a full crawl I get 0 successes and 1 warning: “This URL is part of a host header SharePoint deployment and the search application is not configured to crawl individual host header sites. This will be crawled as a part of the host header Web application if configured as a start address.”
    The first content source does crawl the freshly created new tenant. Could you tell me where I'm going wrong?
    Thanks in advance
    Regards
    Baldo

    Baldo,
    In the configuration that you described you now have 2 content sources set to crawl the same content. If you have a content source with a start address of http://servername/ then it is going to crawl everything past that address. If you are now changing
    the crawl schedule based on the individual site collections, you will need to move your start addresses further down the URL.
    For Example:
    1st Content Source:
    http://servername/ as a start address would now become
    http://servername/site1
    http://servername/site2
    2nd Content Source:
    http://servername/site3
    Also remember that all crawling must happen on the default zone. If you are trying to crawl a zone that the web application has been extended into, that will not work.
    ScottC - Microsoft SharePoint Support
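    A small PowerShell sketch of the layout Scott describes, with start addresses sitting below the web application root; server and site names are placeholders. The -SharePointCrawlBehavior parameter is also worth a look for host header deployments, since CrawlSites is meant to limit each start address to that site collection rather than the whole host.

    $ssa = Get-SPEnterpriseSearchServiceApplication

    # First content source: the existing tenants, on their own schedule
    New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
        -Name "Tenants - existing" -Type SharePoint `
        -StartAddresses "http://servername/site1,http://servername/site2"

    # Second content source: the new tenant, with its own schedule
    New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa `
        -Name "Tenant - new" -Type SharePoint `
        -StartAddresses "http://servername/site3" `
        -SharePointCrawlBehavior CrawlSites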

  • Crawling issue

    Dear friends,
    We have EP 6.0 SP 16 and TREX 6.1 SP 16 installed on a Linux box. We are using our portal as a public website. When I try to do a web crawl of our portal, I get just one result. I also tried crawling the portal using third-party software and it also returns only one result. Do I need to enable some configuration on the portal so that crawling the portal itself will work? The crawler works fine with other websites.
    Any help is appreciated,
    Thanks,
    Mandar

    Hi Mandar,
    How is the behaviour when you use the light framework page instead? Check this weblog:
    Coming Soon: the External Facing Portal (EP6 SPS14)
    However, you will have to forego some features if you are looking at this as an option.
    Technically it is possible to build an indexable website using EP (e.g. SDN), but this involves a lot of development and moving away from a lot of standard features.
    Regards
    Pran
