Database Crawler - Follow URL

Hi,
I am trying to crawl an external SQL Server database using the SharePoint 2013 search engine.
The database has a table that holds the URLs of documents published on a SharePoint 2010 site. I would like to know whether the SharePoint 2013 crawler will be able to follow the document URLs in the DB table columns and crawl the contents of the documents stored in the SharePoint document library?
Thanks,
Bivsworld

Hi Bivsworld,
According to your description, I did a test as follows:
1. Create a table in a SQL database, with a column that stores the URLs of some documents stored in SharePoint libraries (a minimal sketch of such a table is shown below).
2. Create an External Content Type based on the table.
3. Create a content source for the external content type.
4. Crawl the content source.
I used the following article as a reference:
http://www.sharepointinspiration.com/Lists/Posts/Post.aspx?ID=5
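For reference, the test table could look roughly like this (a minimal sketch; table and column names are illustrative, not taken from the original test):

-- Minimal sketch of a table whose column holds document URLs; all names are illustrative.
CREATE TABLE dbo.DocumentLinks (
    Id          INT IDENTITY(1,1) PRIMARY KEY,
    Title       NVARCHAR(255)  NOT NULL,
    DocumentUrl NVARCHAR(2083) NOT NULL  -- URL of a document stored in a SharePoint library
);

INSERT INTO dbo.DocumentLinks (Title, DocumentUrl)
VALUES (N'Spec', N'http://sp2010/sites/team/Shared%20Documents/spec.docx');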
After crawling, I checked the crawl log of the content source: only the items in the SQL database table were crawled; the content of the documents was not crawled.
So, to answer your question: the SharePoint 2013 crawler does not follow the URLs of documents stored in DB table columns to crawl the content of those documents.
Best Regards,
Wendy Li
TechNet Community Support

Similar Messages

  • Database Crawler Setup

    Does anyone have step-by-step instructions for setting up the .NET database crawler sample? I don't have admin access to our portal, so I can't import the .pte file. So far I just get an error when I try to create the data source using the web service:
    "There was an error communicating with the Service Configuration ....."
    Thanks,

    Here are the settings from the .pte file:
    Container URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/ContainerProviderSoapBinding.asmx
    Document URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/DocumentProviderSoapBinding.asmx
    Upload URL: http://@REMOTE_SERVER@/
    Gateway URL Prefixes: http://@REMOTE_SERVER@/.
    Service Configuration URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/XUIService.asmx
    Administration Configuration URL: http://@REMOTE_SERVER@/
    User Configuration URL: http://@REMOTE_SERVER@/
    Basic Authentication info sent to Web Service: Use Remote Server Basic Authentication Information
    Settings: None
    SOAP Encoding Style: Document/Literal
    It looks like the important parts match up with your settings. Do you see the service endpoints when you go to http://@REMOTE_SERVER@/DatabaseRecordViewer/XUIService.asmx ?
    Have you tried tracing the crawl to see what comes back from the remote server when you create the data source, e.g. with something like TcpTrace?

  • Database Crawler - what constitutes a modification?

    Hi
    It appears that the database crawler determines that an object has changed when the last modification date changes. In our schema we have a parent-child relationship situation such that we need to have a child object re-indexed when a particular column of the parent object changes. So the last modified date of the parent object changes, but not that of the child object. The SQL for our crawl of the child object references the parent object column and stores it as a search attribute of the child object. So I was expecting the crawl of the child object to notice the change in the parent column and re-index the child object.
    Is there some way I can cause the child objects to be re-indexed other than forcing the last modified date to change?
    Thank you

    Not sure if this helps, but we have a stored procedure that runs every night and refreshes a staging table that SES indexes (see the sketch after the next paragraph). The procedure compares the "current" live system data with the "indexed" SES data, deletes any SES staging rows where the data has changed, and then inserts the "fresh" rows into the SES table using SYSDATE as the last-modified date. Thus only the newly inserted data is indexed by SES on the next crawl.
    We use a staging table as opposed to real-time SQL due to the many custom functions we perform on the search attributes to get the data into the right format.
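    A minimal sketch of such a nightly refresh, assuming hypothetical LIVE_DATA and SES_STAGING tables (this is not the poster's actual procedure):

    CREATE OR REPLACE PROCEDURE refresh_ses_staging AS
    BEGIN
      -- Drop staged rows whose live counterpart changed or no longer exists.
      DELETE FROM ses_staging s
       WHERE NOT EXISTS (SELECT 1
                           FROM live_data l
                          WHERE l.id = s.id
                            AND l.payload = s.payload);
      -- Re-insert the fresh rows with SYSDATE as the last-modified date,
      -- so only these rows are picked up on the next crawl.
      INSERT INTO ses_staging (id, payload, last_modified)
      SELECT l.id, l.payload, SYSDATE
        FROM live_data l
       WHERE NOT EXISTS (SELECT 1 FROM ses_staging s WHERE s.id = l.id);
      COMMIT;
    END refresh_ses_staging;
    /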
    Oracle.... Any plans to have SES support crawlers calling stored procedures to return indexed data? This would be very powerful!

  • Database crawler - document won't open problem

    I am using the database record viewer and have crawled in some databases, but when I try to apply a stylesheet to them and click on the file, it does not open in the browser; it wants to save the file. If you do save it and then open it, the file displays fine. Here are the doc properties:
    Open Document URL
    http://edk/portal/server.pt/gateway/PTARGS_0_1_15555_0_0_18/D%3B\Temp\Customers.ALFKI.html
    URL
    databaserecordviewer/docfetch?path=Northwind%7Cdbo%2CCustomers%7CCompanyName%7CAddress%7Cwebapps%2FROOT%2Fcompanies.xsl%7CCustomerID%7C1%7CALFKI&locale=en&contentType=http%3A%2F%2Fwww.plumtree.com%2Fdtm%2Fmime&signature=&IEFile=D%3A%5CTemp%5CCustomers.ALFKI.html
    This is using Plumtree 5.02

    I'm experiencing a strange issue too.
    I use this sample crawler, and when I click on the link it displays this: PlumPIDxxxx, with an incremental number each time I refresh the page.
    When I use trace in the DocFetch method GetDocument, I send the right path (d:\temp\<file>.xml), and when I read the stream before sending it back, it is correct...
    Is my stream being sent as binary?
    Is it a gateway problem?
    Thanks for your help.

  • Display all database results with URL variable?

    Hi,
    I am using PHP and MySQL.
    I have a database table that is ordered alphabetically. I am trying to create a dynamic page that will show only A, B, C, and so on. I do this by creating links named after the letters of the alphabet, and I have a recordset that filters the table using a URL variable, Type. Unfortunately, when I click on a link it displays only the first record that contains an A, not all the records that contain A.
    For example:
    I have a link that is just "A"; its link is ?Type=A. In the database table there are 3 records that contain A under Type, but when I click on it, only one record containing A is displayed. I hope you are following this.
    If you want a direct reference to what I am doing go to:
    http://cwhazzoo.com/gamelinktesting.php
    This is how my database is designed:
    ID (primary key) | Name      | Letter
    0001             | Example A | A
    I want to be able to show the records for only the selected letter, using only one page. Thanks!

    >Should I use the repeat region?
    Yep. That's what it's for.
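    A sketch of the fix, assuming a hypothetical games table: the recordset's query should return every row for the chosen letter, and the repeat region then renders each returned row rather than only the first:

    -- Hypothetical table name; the Type URL parameter is bound to the placeholder.
    SELECT ID, Name, Letter
      FROM games
     WHERE Letter = ?   -- e.g. 'A' when the link is ?Type=A
     ORDER BY Name;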

  • Query not running in SQL Developer, and cannot connect to database. Following error: java.util.UnknownFormatConversionException: Conversion = '0'. Please reply soon.

    When I try to connect to Oracle 11g RDBMS, the following error occurs:
    1. SQL Developer version 4.0.3.16 (JDK 1.7.0_51, externally installed) -- newly installed; gives the following error when trying to connect to Oracle 11g RDBMS.
    2. SQL Developer version 3.1.07.42 (JRE 1.6.0 included) -- used to run earlier.
    java.util.UnknownFormatConversionException: Conversion = '0'
      at java.util.Formatter.checkText(Formatter.java:2547)
      at java.util.Formatter.parse(Formatter.java:2533)
      at java.util.Formatter.format(Formatter.java:2469)
      at java.util.Formatter.format(Formatter.java:2423)
      at java.lang.String.format(String.java:2797)
      at oracle.dbtools.raptor.backgroundTask.internal.SimpleRaptorTaskUI.getFormattedTime(SimpleRaptorTaskUI.java:288)
      at oracle.dbtools.raptor.backgroundTask.internal.RaptorTaskUI.setState(RaptorTaskUI.java:43)
      at oracle.dbtools.raptor.backgroundTask.internal.SimpleRaptorTaskUI.<init>(SimpleRaptorTaskUI.java:63)
      at oracle.dbtools.raptor.backgroundTask.internal.RaptorTaskUI.<init>(RaptorTaskUI.java:36)
      at oracle.dbtools.raptor.backgroundTask.ui.TaskProgressViewer$4.<init>(TaskProgressViewer.java:346)
      at oracle.dbtools.raptor.backgroundTask.ui.TaskProgressViewer.createTaskUI(TaskProgressViewer.java:346)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.initViewers(RaptorTaskManager.java:373)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.access$400(RaptorTaskManager.java:45)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager$4.run(RaptorTaskManager.java:299)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.invokeInDispatchThreadIfNeeded(RaptorTaskManager.java:313)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.addTask(RaptorTaskManager.java:302)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.addTask(RaptorTaskManager.java:200)
      at oracle.dbtools.raptor.backgroundTask.RaptorTaskManager.addTask(RaptorTaskManager.java:161)
      at oracle.dbtools.worksheet.editor.OpenWorksheetWizard.invoke(OpenWorksheetWizard.java:425)
      at oracle.ide.wizard.WizardManager.invokeWizard(WizardManager.java:446)
      at oracle.ide.wizard.WizardManager.invokeWizard(WizardManager.java:390)
      at oracle.dbtools.worksheet.editor.WorksheetOpenController$1.run(WorksheetOpenController.java:84)
      at oracle.dbtools.worksheet.editor.WorksheetOpenController.openWorksheetWizard(WorksheetOpenController.java:90)
      at oracle.dbtools.worksheet.editor.WorksheetOpenController.handleEvent(WorksheetOpenController.java:49)
      at oracle.ide.controller.IdeAction$ControllerDelegatingController.handleEvent(IdeAction.java:1482)
      at oracle.ide.controller.IdeAction.performAction(IdeAction.java:663)
      at oracle.ide.controller.IdeAction.actionPerformedImpl(IdeAction.java:1153)
      at oracle.ide.controller.IdeAction.actionPerformed(IdeAction.java:618)
      at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2018)
      at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2341)
      at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)
      at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)
      at javax.swing.AbstractButton.doClick(AbstractButton.java:376)
      at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:833)
      at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(BasicMenuItemUI.java:877)
      at java.awt.Component.processMouseEvent(Component.java:6505)
      at javax.swing.JComponent.processMouseEvent(JComponent.java:3320)
      at java.awt.Component.processEvent(Component.java:6270)
      at java.awt.Container.processEvent(Container.java:2229)
      at java.awt.Component.dispatchEventImpl(Component.java:4861)
      at java.awt.Container.dispatchEventImpl(Container.java:2287)
      at java.awt.Component.dispatchEvent(Component.java:4687)
      at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4832)
      at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4492)
      at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4422)
      at java.awt.Container.dispatchEventImpl(Container.java:2273)
      at java.awt.Window.dispatchEventImpl(Window.java:2719)
      at java.awt.Component.dispatchEvent(Component.java:4687)
      at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:735)
      at java.awt.EventQueue.access$200(EventQueue.java:103)
      at java.awt.EventQueue$3.run(EventQueue.java:694)
      at java.awt.EventQueue$3.run(EventQueue.java:692)
      at java.security.AccessController.doPrivileged(Native Method)
      at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
      at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:87)
      at java.awt.EventQueue$4.run(EventQueue.java:708)
      at java.awt.EventQueue$4.run(EventQueue.java:706)
      at java.security.AccessController.doPrivileged(Native Method)
      at java.security.ProtectionDomain$1.doIntersectionPrivilege(ProtectionDomain.java:76)
      at java.awt.EventQueue.dispatchEvent(EventQueue.java:705)
      at oracle.javatools.internal.ui.EventQueueWrapper._dispatchEvent(EventQueueWrapper.java:169)
      at oracle.javatools.internal.ui.EventQueueWrapper.dispatchEvent(EventQueueWrapper.java:151)
      at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
      at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
      at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
      at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
      at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
      at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)

    Around a month ago, a similar question appeared on Jeff's blog... look for checkText at http://www.thatjeffsmith.com/ask-a-question/
    There was no resolution there, so (as the SQL Developer code in that area has not changed) I would tend to think it might have something to do with:
    1. Your Locale affecting the Formatter class's parsing of that pattern; or
    2. A bug in the JDK version in use, probably also related to the Locale.
    I recommend upgrading to the latest JDK 1.7.0_xx update. If that does not solve the issue, then try changing the Locale.

  • Database growth following index key compression in Oracle 11g

    Hi,
    We have recently implemented index key compression in our SAP R3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
    What I mean by this is that while the indexes have compressed on average 3-fold (over the entire DB), we are not seeing this reflected in DB growth going forward.
    I.e., we were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Our trial with ACO compression seemed to yield a reduction of table growth rates that corresponded to the compression ratio (i.e. table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
    Does anyone know whether a rebuild with index key compression will compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what's there already?
    Cheers
    Theo

    Hello Theo,
    Does anyone know whether a rebuild with index key compression will compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what's there already?
    I wrote a blog about index key compression internals a long time ago ([Oracle] Index key compression), but now I notice that one important statement is missing. Yes, future entries are compressed too - index key compression is a "live compression" feature (see the sketch below).
    We were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Do you mean that your DB size still increases by ~15GB per month overall, or just the index segments? Depending on which segment types are growing, indexes may be only a small part of your system.
    If you have enabled compression and performed a reorg, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on the way the data is inserted/updated and which indexes are compressed.
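    For illustration: enabling key compression is a one-time rebuild, after which new entries are compressed as they are inserted (index name and prefix length here are illustrative):

    -- Rebuild once with key compression on the leading key column;
    -- entries inserted afterwards are compressed on arrival.
    ALTER INDEX my_schema.my_index REBUILD COMPRESS 1;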
    Regards
    Stefan

  • Setting up "in-document" hyperlinks based on database fields, not URLs

    Hi there,
    Does anyone know whether or not it is possible to set up a Crystal report so that certain fields are presented as hyperlinks to nominated fields?
    This is what I want to do:
    In an index-style document, I have a top level value that is accompanied by several related values. For the related values, I want hyperlinks to be displayed so that the person browsing the report can jump to the top level value entry for the related value.... like this...
    Top level value: DOGS
    Related values...
    Broader Value: Mammals (hyperlinked to jump to entry for "MAMMALS")
    Narrower value: Poodles (hyperlinked to jump to entry for "POODLES")
    So, where the above entry for "DOGS" appears, I want Crystal to put in a hyperlink on Mammals and Poodles so that the reader can go straight to the entries for these related terms, for example, if someone clicks on the hyperlink for Poodles.... they will be taken to the top level entry for Poodles... etc. etc.
    Top level value: POODLES
    Related values...
    Broader value: Dogs (hyperlinked to jump to entry for "DOGS")
    Narrower value: Miniature Poodles (hyperlinked to jump to entry for "MINIATURE POODLES")
    etc. etc. and so on...
    Basically I am hoping and imagining there is a way to do this without sitting down and manually bookmarking these entries. This is something that I could do reasonably easily in Word using Styles; however, that would require first exporting the data, and I really want to be able to do it straight out of the database, in Crystal.
    Anyone know the answer to this?
    cheers
    -karenb

    I think you can try two ways:
    1) Create a group on the field DOGS and place the related values in the details section.
    Now right-click on the details section, go to the Section Expert, and write a suppress condition like this:
    DrillDownGroupLevel = 0
    Now when you refresh the report it will show the data like this:
    DOGS
    MAMMALS
    POODLES
    When you double-click on DOGS, it shows the related details of DOGS in a separate tab.
    2) Try creating a subreport that shows the related details and insert it as an on-demand subreport in the main report. This subreport should be placed in the group header of the DOGS field. Change the caption of the subreport by right-clicking it, going to Format Subreport, and on the Subreport tab adding the field (DOGS) that corresponds to the caption by clicking X+2. Now link the subreport and the main report on the grouped field in the main report using the Change Subreport Links option.
    Hope this helps!
    Raghavendra

  • When I try to insert data into a database, the following error is displayed: String or binary data would be truncated

    Hi,
         Can anyone give me a solution for this?

    This is a pretty old and famous error:
    http://dimantdatabasesolutions.blogspot.co.il/2008/08/string-or-binary-data-would-be.html
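    In short, the error means a value is wider than the target column. A minimal reproduction and fix, with illustrative names:

    -- The column is narrower than the value being inserted.
    CREATE TABLE dbo.Demo (Code VARCHAR(5));
    INSERT INTO dbo.Demo (Code) VALUES ('ABCDEFGH');  -- fails: String or binary data would be truncated
    -- Fix: widen the column (or trim the value) so the data fits.
    ALTER TABLE dbo.Demo ALTER COLUMN Code VARCHAR(20);
    INSERT INTO dbo.Demo (Code) VALUES ('ABCDEFGH');  -- succeeds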
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • To view the file, click the link or copy and paste the following URL into your web browser.

    The file does not display. When I click on the link, I do see the file name and a comment box, but not the file, which is a one-page InDesign file.
    This is the link...
    http://adobe.ly/1ooe2uw
    I have entered a comment and it does appear.

    Where did you find this link?

  • Native Webservices - ERROR 501: Not Implemented ... why?

    I'm running Oracle XE 11g.
    Recently I tried to set up Native Web Services. I set up the database and user accordingly, defined a test function, and accessed the database with the following URL:
    http://localhost:8080/orawsv/GUT/TESTWS?wsdl
    This worked.
    I did another test on another database, where I set up a dedicated user for web services and granted this user execute privileges on the same test function. Accessing this function via SQL works OK. But when I issue a similar URL (http://localhost:8080/wwwsss/WSBRAN/TESTWS?wsdl) I get:
    "ERROR 501: Not Implemented."
    I have not found an error in my setup yet - so maybe somebody can give me a hint about the deeper meaning of this error! Where can I look to fix this?
    Thanks a lot,
    Stephan

    Oh, I should have called the servlet "orawsv", as it is described in the documentation. I corrected this and now it works!
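    For anyone hitting the same 501: the servlet must be registered and mapped under the fixed name orawsv. A sketch of the registration based on the Oracle 11g XML DB documentation (verify the exact calls against your version's docs before relying on this):

    BEGIN
      DBMS_XDB.addServlet(name     => 'orawsv',
                          language => 'C',
                          dispname => 'Oracle Query Web Service',
                          descript => 'Servlet for issuing queries as a Web Service',
                          schema   => 'XDB');
      DBMS_XDB.addServletSecRole(servname => 'orawsv',
                                 rolename => 'XDB_WEBSERVICES',
                                 rolelink => 'XDB_WEBSERVICES');
      -- The mapping pattern must be /orawsv/*; a different servlet name/path
      -- returns ERROR 501: Not Implemented.
      DBMS_XDB.addServletMapping(pattern => '/orawsv/*', name => 'orawsv');
    END;
    /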

  • Major crawler issues - removal of trailing slashes from URLs and (seemingly) poor regex implementation

    I've been suffering from two issues when setting up a crawl of an intranet website hosted on a proprietary web CMS using SharePoint 2010 Search. The issues are:
    1: The crawler appears to remove trailing slashes from the end of URLs.
    In other words, it turns http://mysite.local/path/to/page/ into http://mysite.local/path/to/page. The CMS can cope with this, as it automatically forwards requests for http://mysite.local/path/to/page to http://mysite.local/path/to/page/ - but every time it does this it generates a warning in the crawl - one of the "URL was permanently moved" warnings. The URL hasn't been moved, though; the crawler has just failed to record it properly. I've seen a few posts about this in various places, all of which seem to end without resolution, which is particularly disappointing given that URLs in this format (with a trailing slash) are pretty common. (Microsoft's own main website has a few in links on its homepage - http://www.microsoft.com/en-gb/mobile/ for instance.)
    The upshot is that a crawl of the site, which has about 50,000 pages, is generating upwards of 250,000 warnings in the index, all apparently for no reason other than that the crawler changes the address of every page it crawls. Is there a fix for this?
    2: The regex implementation for crawl rules does not allow you to escape a question mark
    ... despite what it says here. I've tried and tested escaping a question mark in a URL pattern by surrounding it in square brackets in the regex, i.e. [?], but regardless, any URL with a question mark in it just falls right through the rule. As soon as you remove the 'escaped' (i.e. seemingly not actually escaped at all) question mark from the rule, and from the test URL pattern, the rule catches, so it's definitely not escaping it properly using the square brackets. The more typical regex escape pattern (a backslash before the character in question) doesn't seem to work either. Plus neither the official documentation on regex patterns I've been able to find, nor the book I've got about SP2010 search, mentions escaping characters in SP2010 crawl rule regexes at all. Could it be that MS have released a regex implementation for matching URL patterns that doesn't account for the fact that ? is a special character in both regexes and URLs?
    Now I could just be missing something obvious, and I would be delighted to be made to look stupid by someone giving me an easy answer that I've missed, but after banging my head against this for a couple of days I really am coming to the conclusion that Microsoft have released a regex implementation for a crawler that doesn't work with URL patterns that contain a question mark. If this is indeed the case, then that's pretty bad, isn't it? And we ought to expect a patch of some sort? I believe MS are supporting SP2010 until 2020? (I'd imagine these issues are fixed in 2013 Search, but my client won't be upgrading to that for at least another year or two, if at all.)
    Both these issues mean that the crawl of my client's website is taking much longer, and generating much more data, than necessary. (I haven't actually managed to get to the end of a full crawl yet because of it... I left one running overnight and I just hope it has got to the end. Plus the size of the crawl db was 11GB and counting, most of that data being pointless warnings that the crawler appeared to be generating itself because it wasn't recording URLs properly.) This generation of pointless mess is also going to make the index MUCH harder to maintain.
    I'm more familiar with maintaining crawls in Google systems, which have much better regex implementations - indeed (again, after two days of banging my head against this) I'd almost think that the regex implementation in 2010 search crawl rules was cobbled together at the last minute just because the Google Search Appliance has one. If so (and if I genuinely haven't missed something obvious - which I really hope I have) I'd say it wasn't worth Microsoft bothering, because the one they have released appears to be entirely unfit for purpose.
    I'm sure I'm not the first person to struggle with these problems and I hope there's an answer out there somewhere (that isn't just "upgrade to 2013").
    I should also point out that my client is an organisation with over 3000 staff, so heaven knows how much they are paying MS for their Enterprise Agreement. Plus I pay MS over a grand a year in MSDN sub fees etc, so (unless I'm just being a numpty) I would expect a higher standard of product than the one I'm having to wrestle with at the moment.

    Hi David,
    As far as I know, in SharePoint 2010 crawl there is a rule to include or exclude URLs that use the '?' character. May I ask whether you have implemented the rule?
    In the Crawl Configuration section, select one of the following options:
    - Exclude all items in this path. Select this option if you want to exclude all items in the specified path from crawls. If you select this option, you can refine the exclusion by selecting:
      - Exclude complex URLs (URLs that contain question marks (?)). Select this option if you want to exclude URLs that contain parameters that use the question mark (?) notation.
    - Include all items in this path. Select this option if you want all items in the path to be crawled. If you select this option, you can further refine the inclusion by selecting any combination of the following:
      - Follow links on the URL without crawling the URL itself. Select this option if you want to crawl links contained within the URL, but not the starting URL itself.
      - Crawl complex URLs (URLs that contain a question mark (?)). Select this option if you want to crawl URLs that contain parameters that use the question mark (?) notation.
      - Crawl SharePoint content as HTTP pages. Normally, SharePoint sites are crawled by using a special protocol. Select this option if you want SharePoint sites to be crawled as HTTP pages instead. When content is crawled by using the HTTP protocol, item permissions are not stored.
    For the trailing slash issue, may I ask for your latest cumulative update level or your SharePoint build number? As I remember, there was a fix in SP1 plus the June cumulative update regarding trailing slashes, but I am not quite sure whether the symptoms are the same as in your environment.
    SharePoint may use the SharePoint connector for the regex handling, but an older regex engine may not be capable of filtering out the parameters, so a modification of the trailing slash may happen.
    Please let us know your feedback.
    Regards,
    Aries
    Microsoft Online Community Support

  • Database Control URL

    Hi,
    I have installed Oracle 10g Release 2 and I try to connect to Database Control with the URL http://tlvrg4:1158/em, but when I submit I receive this message (in all browsers):
    The page cannot be displayed
    Why? How can I solve this problem?
    When I launch tnsping from the command line the result is OK, and likewise if I connect with SQL*Plus from the command line it works; but if I try to connect with the browser I receive the same message:
    The page cannot be displayed
    Thanks

    I believe the default port number is 5500, unless you changed it:
    http://tlvrg4:5500/em
    Also verify that Enterprise Manager is up and running. From a command prompt:
    emctl status dbconsole
    This will show you the status of EM. If it is down, you can start it with:
    emctl start dbconsole
    Also, I have had issues before with a server (tlvrg4 in your case) not being in the HOSTS file of the OS. You can also substitute the IP address of the server for the name.

  • While creating a RAC database on IBM AIX 5.3, the following errors occur

    Hi,
    I am creating an Oracle database on a box with the following configuration:
    OS: IBM AIX 5.3
    DB: Freshly installed Oracle 9.2.0.1, upgraded to Oracle 9.2.0.7 (patch)
    While creating a RAC database on the above server I got the following errors:
    ORA-00603: ORACLE SERVER SESSION TERMINATED BY FATAL ERROR
    ORA-27504: IPC error 380 OSD context
    ORA-27300: OS system dependent operation: if_not_found failed with status: 0
    ORA-27301: OS failure message: error 0
    ORA-27302: failure occurred at: skgxpbaddr9
    License high water mark = 0
    Can anyone help me out? Are the above errors related to the OS, or is something wrong with the OS cluster / RAC? I tried creating a single-instance database on either of the servers in the cluster, with no success.
    One more thing I would like to add is that I am able to create a single-instance database on servers that are not in the cluster.
    Any help regarding the above errors is appreciated.

    This is a bug (in patchsets 9.2.0.6 and higher); see Metalink:
    UNABLE TO START RAC INSTANCE AFTER APPLYING PATCHSET 9.2.0.6 ON AIX
    Note:300218.1
    Werner

  • Crawling database

    Hi all,
    We have a couple of databases - MS SQL, Oracle, and Sybase - which hold customer-related information belonging to three different groups. We need to create a portlet that displays user information to the users. Accessing all three databases at run time seems to have performance bottlenecks, so we are looking for a unified approach where we bring in all the data on a nightly basis, store it in some repository, and access this repository for customer data. This customer data is regular relational data, not documents.
    Is there a way I can achieve this using crawlers?
    What I am looking for is the possibility of a crawler crawling all the different databases on a nightly basis and bringing the data into the portal database, which will then be queried by the portlets.
    Thanks!!
    Reddy

    heh - "with great power comes great responsibility"
    All of that is up to you and your code (read: all you :). The portal provides a framework for you to develop those as custom components. There's entirely too much variety beyond that, so that's why they give you the PWS and CWS approach, so you can, say, implement a Lotus Notes crawler, an NT filesystem crawler, or a database crawler. Those are all very, very different beasts, but the ALUI framework at least gives you a starting point so you don't have to recreate the underlying integration plumbing.
    These are not to be taken lightly. IMO they can be some of the most complex things to implement if only because it's all custom based on your needs. If you need help I'd suggest contacting your BEA sales rep and setting up some time with their professional services team - or, if you'd like to go the BEA partner route there are some really good (really good) groups out there like BDG (www.bdg-online.com) who know their stuff inside and out. Either one of them (for a fee) can help you build this.
    I will say this - CWS are certainly more complex than PWS (IMO). PWS is a relatively straight shot property mapping once you get your remote application to give you data. CWS get...well...powerful. Which can mean complex. They have a concept of parent child relationships, metadata exposure, ACL handling, document click-through. It can be overwhelming if you don't consider all of that. Heck - even the sample code for those makes my head spin.
