Database Crawler Setup

Does anyone have step-by-step instructions for setting up the .NET database crawler sample? I don't have admin access to our portal, so I can't import the .pte. So far I just get an error when I try to create the data source using the web service.
"There was an error communicating with the Service Configuration ....."
Thanks,

Here are the settings from the pte file:
Container URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/ContainerProviderSoapBinding.asmx
Document URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/DocumentProviderSoapBinding.asmx
Upload URL: http://@REMOTE_SERVER@/
Gateway URL Prefixes: http://@REMOTE_SERVER@/.
Service Configuration URL: http://@REMOTE_SERVER@/DatabaseRecordViewer/XUIService.asmx
Administration Configuration URL: http://@REMOTE_SERVER@/
User Configuration URL: http://@REMOTE_SERVER@/
Basic Authentication info sent to Web Service: Use Remote Server Basic Authentication Information
Settings: None
SOAP Encoding Style: Document/Literal
It looks like the important parts match up with your settings. Do you see the service endpoints when you go to http://@REMOTE_SERVER@/DatabaseRecordViewer/XUIService.asmx ?
Have you tried tracing the crawl to see what comes back from the remote server when you create the data source, e.g. with something like TcpTrace?

Similar Messages

  • Database Crawler - what constitutes a modification?

    Hi
    It appears that the database crawler determines that an object has changed when the last modification date changes. In our schema we have a parent-child relationship situation such that we need to have a child object re-indexed when a particular column of the parent object changes. So the last modified date of the parent object changes, but not that of the child object. The SQL for our crawl of the child object references the parent object column and stores it as a search attribute of the child object. So I was expecting the crawl of the child object to notice the change in the parent column and re-index the child object.
    Is there some way I can cause the child objects to be re-indexed other than forcing the last modified date to change?
    Thank you

    Not sure if this helps, but we have a nightly stored procedure that refreshes a staging table that SES indexes. The procedure compares the "current" live system data with the "indexed" SES data, deletes any SES staging rows where the data has changed, and then inserts the "fresh" rows into the SES table using SYSDATE as the last modified date. Thus only the newly inserted data is indexed by SES on the next crawl.
    We use a staging table as opposed to real-time SQL due to the many custom functions we perform on the search attributes to get the data into the right format.
    Oracle.... Any plans to have SES support crawlers calling stored procedures to return indexed data? This would be very powerful!
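    The nightly compare-and-restage logic described above can be sketched as follows. This is an illustrative in-memory sketch, not the actual stored procedure; class, method, and key names are hypothetical.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the nightly refresh described above: compare the "current" live
// rows with the rows already staged for SES, and report which keys must be
// deleted and re-inserted (with SYSDATE as the last-modified date) so that
// only changed rows are re-indexed on the next crawl.
// Names are hypothetical, not part of SES.
public class StagingRefresh {
    // returns the keys whose staged value is missing or stale
    public static Set<String> rowsToRestage(Map<String, String> live,
                                            Map<String, String> staged) {
        Set<String> changed = new HashSet<>();
        for (Map.Entry<String, String> e : live.entrySet()) {
            if (!e.getValue().equals(staged.get(e.getKey()))) {
                changed.add(e.getKey());
            }
        }
        return changed;
    }
}
```

    Each key reported here would get a delete plus a fresh insert, so its new last-modified date makes it visible to the next incremental crawl.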

  • Database crawler: document won't open problem

    I am using the database record viewer and have crawled in some databases, but when I apply a stylesheet to them and click on the file, it does not open in the browser; it wants to save the file instead. If you do save it and then open it, the file displays fine. Here are the doc properties:
    Open Document URL
    http://edk/portal/server.pt/gateway/PTARGS_0_1_15555_0_0_18/D%3B\Temp\Customers.ALFKI.html
    URL
    databaserecordviewer/docfetch?path=Northwind%7Cdbo%2CCustomers%7CCompanyName%7CAddress%7Cwebapps%2FROOT%2Fcompanies.xsl%7CCustomerID%7C1%7CALFKI&locale=en&contentType=http%3A%2F%2Fwww.plumtree.com%2Fdtm%2Fmime&signature=&IEFile=D%3A%5CTemp%5CCustomers.ALFKI.html
    This is using Plumtree 5.02
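    The docfetch URL above carries a URL-encoded, pipe-delimited "path" parameter; a small sketch shows how it decodes. The field meanings are inferred from the sample above, not taken from any documentation.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Sketch: decode the docfetch "path" parameter shown above into its
// pipe-separated fields (database, schema/table, columns, stylesheet, key...).
// The interpretation of each field is an inference from the sample URL.
public class DocFetchPath {
    public static String[] decode(String encodedPath) {
        String decoded = URLDecoder.decode(encodedPath, StandardCharsets.UTF_8);
        return decoded.split("\\|");
    }
}
```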

    I too experience a strange issue.
    I use this sample crawler, and when I click on the link it displays PlumPIDxxxx, with an incrementing number each time I refresh the page.
    When I trace the DocFetch GetDocument method, I am sending the right path (d:\temp\<file>.xml), and I tried reading the stream before sending it back; it is correct...
    Is my stream being sent as binary?
    Is it a gateway problem?
    Thanks for your help.

  • Database Crawler-Follow URL

    Hi,
    I am trying to crawl an external SQL Server database using the SharePoint 2013 search engine.
    The database has a table that holds the URLs of documents published on a SharePoint 2010 site. I would like to know if the SharePoint 2013 crawler will be able to follow the document URLs in the DB table columns to crawl the contents of documents stored in a SharePoint document library.
    Thanks, Bivsworld

    Hi Bivsworld,
    According to your description, I did a test as follows:
    Create a table in a SQL database, and create a column to store the URLs of some documents stored in SharePoint libraries.
    Create an External Content Type using the table.
    Create a content source for the external content type.
    Crawl the content source.
    I used the following article as a reference:
    http://www.sharepointinspiration.com/Lists/Posts/Post.aspx?ID=5 .
    After crawling, I checked the crawl log of the content source: only the items in the SQL database table were crawled; the content of the documents was not crawled.
    So, for your question: the SharePoint 2013 crawler does not follow the URLs of documents stored in DB table columns to crawl the content of those documents.
    Best Regards,
    Wendy Li
    TechNet Community Support

  • Database Mirroring setup

    Hi all,
    Has anyone ever experienced the error below when attempting to install SQL 2014 mirroring.
    The following error has occurred:
    Unable to open Windows Installer File 'E:\redist\VisualStudioShell\VC10SP1\x64vc_redi.msi'.
    Windows Installer error message: The system cannot open the device or file specified.
    Screenshot attached. Any feedback would be greatly appreciated. Thanks!

    I guess you mean to say you are installing SQL Server, not database mirroring. Please download the file again and extract it to a local drive using WinRAR.

  • How to set up the LISTENER when using two databases

    Product: SQL*NET
    Date written: 1997-10-10
    Hello. This example covers the case of two databases, PMS and VOY, sharing one ORACLE_HOME.
    1) Using a single LISTENER
    <$ORACLE_HOME/network/admin/listener.ora>
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.69.30.100)
          (PORT = 1521)))
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PMS)
          (ORACLE_HOME = /oracle2/ora73/app/oracle/product/7.3.2))
        (SID_DESC =
          (SID_NAME = VOY)
          (ORACLE_HOME = /oracle2/ora73/app/oracle/product/7.3.2)))
    <$ORACLE_HOME/network/admin/tnsnames.ora>
    PMS =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.68.1.29)
          (PORT = 1521))
        (CONNECT_DATA =
          (SID = PMS)))
    VOY =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.68.1.29)
          (PORT = 1521))
        (CONNECT_DATA =
          (SID = VOY)))
    <mts parameters in initPMS.ora>
    mts_dispatchers="tcp,3"
    mts_max_dispatchers=10
    mts_servers=2
    mts_max_servers=10
    mts_service=PMS
    mts_listener_address="(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=152.68.30.100))"
    <mts parameters in initVOY.ora>
    mts_dispatchers="tcp,3"
    mts_max_dispatchers=10
    mts_servers=2
    mts_max_servers=10
    mts_service=VOY
    mts_listener_address="(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=152.68.30.100))"
    After setting things up like this, run:
    $ lsnrctl start
    Clients can then connect using the tnsnames.ora created above: copy it to the client's $ORACLE_HOME/network/admin. If the client still cannot connect, delete the sqlnet.ora file in the client's $ORACLE_HOME/network/admin and test again.
    2) Using a separate LISTENER for each DB
    In this case you use two listener names, each on a different port. This example uses 1521 and 1522.
    <$ORACLE_HOME/network/admin/listener.ora>
    LISTENER_PMS =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.69.30.100)
          (PORT = 1521)))
    LISTENER_VOY =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.69.30.100)
          (PORT = 1522)))
    SID_LIST_LISTENER_PMS =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PMS)
          (ORACLE_HOME = /oracle2/ora73/app/oracle/product/7.3.2)))
    SID_LIST_LISTENER_VOY =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = VOY)
          (ORACLE_HOME = /oracle2/ora73/app/oracle/product/7.3.2)))
    <$ORACLE_HOME/network/admin/tnsnames.ora>
    PMS =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.68.1.29)
          (PORT = 1521))
        (CONNECT_DATA =
          (SID = PMS)))
    VOY =
      (DESCRIPTION =
        (ADDRESS =
          (PROTOCOL = tcp)
          (HOST = 152.68.1.29)
          (PORT = 1522))
        (CONNECT_DATA =
          (SID = VOY)))
    <mts parameters in initPMS.ora>
    mts_dispatchers="tcp,3"
    mts_max_dispatchers=10
    mts_servers=2
    mts_max_servers=10
    mts_service=PMS
    mts_listener_address="(ADDRESS=(PROTOCOL=tcp)(PORT=1521)(HOST=152.68.30.100))"
    <mts parameters in initVOY.ora>
    mts_dispatchers="tcp,3"
    mts_max_dispatchers=10
    mts_servers=2
    mts_max_servers=10
    mts_service=VOY
    mts_listener_address="(ADDRESS=(PROTOCOL=tcp)(PORT=1522)(HOST=152.68.30.100))"
    After setting things up like this, run:
    $ lsnrctl start LISTENER_PMS
    $ lsnrctl start LISTENER_VOY

  • Crawling database

    Hi all,
    We have a couple of databases (MS SQL, Oracle and Sybase) which hold customer-related information belonging to three different groups. We need to create a portlet that displays this information to users. Accessing all three databases at run time seems to have performance bottlenecks, so we are looking for a unified approach where we bring in all the data on a nightly basis, store it in some repository, and access that repository for customer data. This customer data is regular relational data, not documents.
    Is there a way I can achieve this using crawlers?
    What I am looking for is the possibility of a crawler crawling all the different databases on a nightly basis and bringing the data into the portal database, which would then be queried by the portlets.
    Thanks!!
    Reddy

    heh - "with great power comes great responsibility"
    All of that is up to you and your code (read: all you :). The portal provides a framework for you to develop those as custom components. There's entirely too much variety beyond that, which is why they give you the PWS and CWS approach, so you can, say, implement a Lotus Notes crawler, an NT filesystem crawler, or a database crawler. Those are all very, very different beasts, but the ALUI framework is at least a starting point so you don't have to recreate the underlying integration plumbing.
    These are not to be taken lightly. IMO they can be some of the most complex things to implement if only because it's all custom based on your needs. If you need help I'd suggest contacting your BEA sales rep and setting up some time with their professional services team - or, if you'd like to go the BEA partner route there are some really good (really good) groups out there like BDG (www.bdg-online.com) who know their stuff inside and out. Either one of them (for a fee) can help you build this.
    I will say this - CWS are certainly more complex than PWS (IMO). PWS is a relatively straight shot property mapping once you get your remote application to give you data. CWS get...well...powerful. Which can mean complex. They have a concept of parent child relationships, metadata exposure, ACL handling, document click-through. It can be overwhelming if you don't consider all of that. Heck - even the sample code for those makes my head spin.

  • Need Help on using CAS Incremental Crawl with JDBC data source

    Hi,
    As part of one of the e-commerce implementations, we are implementing a delta pipeline which reads the data from the database views. We plan to crawl the data with the help of CAS JDBC data source. In order to optimize the data reads, we want to use the CAS capabilities of incremental crawl. We rely on CAS to be able to identify the updates available in the subsequent incremental crawl and only read the updated data for any given crawl unless we force a full crawl.
    We have tried implementing the above setup using a JDBC data source. CAS reads from the database and stores the data in the record store. The full crawl works fine; however, we observed some unexpected behavior during the incremental crawl run. Even when there is no change in the database, the crawl metrics show that a certain number of records have been updated in the record store, and the number of updates differs in every subsequent run.
    Any pointers on what the issue could be? Does CAS have incremental crawl capability using a JDBC data source?
    Regards,
    Nitin Malhotra

    Hi Ravi,
    Generic extraction is used to extract data from COPA tables, and the delta method used to extract delta records (records created after initialization) is timestamp.
    What is this timestamp?
    Assume that you have posted 10 records one after the other to the COPA tables, and we do an initialization to move all 10 records to the BW system. Later another 5 records are added to the COPA tables. How do you think the system identifies these new 5 records (delta records)?
    It identifies them based on a timestamp field (e.g. "Document created on", a 16-digit decimal field).
    Assume that, in our previous initialization, the "document created on" field of the last (or latest) record is 14/11/2006, 18.00 hrs, and the timestamp is set to 14/11/2006, 18.00 hrs. When you then do a delta update, the system treats all records whose "document created on" field is greater than 14/11/2006, 18.00 hrs as delta records. This is how the new 5 records are extracted to the BW system, and the timestamp is again set to a new value based on that field in the records (say 14/11/2006, 20.00 hrs).
    Assume that you have two records with "document created on" fields of 14/11/2006, 17.55 hrs and 14/11/2006, 17.57 hrs, and that for some reason they were updated to the COPA table after 14/11/2006, 20.00 hrs (when the last delta load was done). How can you bring in these two records? For this purpose we can reset the timestamp at KEB5. In this example, we can reset it to 14/11/2006, 17.50 hrs and do the delta again, so the system picks up all the records posted after 14/11/2006, 17.50 hrs. But remember that by doing this you are sending some of the records again (duplicate records). So make sure you are sending them to an ODS object; otherwise you end up with inconsistent data due to duplicate records.
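    The timestamp-based delta selection described above can be sketched like this. It is an illustrative sketch (hypothetical record and field names), not SAP code; the real COPA timestamp is a 16-digit decimal field.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of timestamp-based delta extraction: every record whose "created on"
// value is greater than the stored timestamp counts as a delta record.
// Resetting the timestamp backwards (as at KEB5) re-selects older records,
// at the cost of sending duplicates.
public class DeltaExtract {
    public record Doc(String id, long createdOn) {}

    public static List<Doc> delta(List<Doc> all, long lastTimestamp) {
        return all.stream()
                  .filter(d -> d.createdOn() > lastTimestamp)
                  .collect(Collectors.toList());
    }
}
```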
    Hope this helps!!
    Vj

  • Problem rename sharepoint 2010 search service application admin database

    Hi all,
    I have a problem that hopefully someone has an answer to. I am not too familiar with SharePoint, so please excuse my ignorance.
    We have SharePoint 2010 on a Windows 2008 R2 server. Everything seems to work fine, but as you know, the default database names are horrendous. I have managed to rename all of them except for the "Search Service Application" admin database.
    The default is: Search_Service_Application_DB_<guid>
    The other 2 databases (crawl and property) were renamed without a problem.
    We are following the TechNet article on how to rename the search service admin DB (http://technet.microsoft.com/en-nz/library/ff851878%28en-us%29.aspx). It says to enter the following command:
    $searchapp | Set-SPEnterpriseSearchServiceApplication -DatabaseName "new database name" -DatabaseServer "dbserver"
    However, I get an error about the identity being null. No big deal; I add the -Identity switch and the name of my search service application. But the real problem is the error it throws:
    Set-SPEnterpriseSearchServiceApplication : The requested database move was aborted as the associated search application is not paused.
    At line:1 char:54
    + $searchapp | Set-SPEnterpriseSearchServiceApplication <<<<  -Identity "Search Service Application" -DatabaseName "SharePoint2010_Search_Service_Application_DB" -DatabaseServer "dbserver"
        + CategoryInfo          : InvalidData: (Microsoft.Offic...viceApplication:SetSearchServiceApplication) [Set-SPEnterpriseSearchServiceApplication], InvalidOperationException
        + FullyQualifiedErrorId : Microsoft.Office.Server.Search.Cmdlet.SetSearchServiceApplication
    When I look at the crawling content sources, I see "Local SharePoint Sites" and its status is Idle. I even looked at this article on how to pause the search, to no avail: http://technet.microsoft.com/en-us/library/ee808864.aspx
    Does someone know how I can rename my Search Service Application admin database properly, or at least "pause" that service so I can rename it?
    Thank you all in advance.

    If you want to have no GUIDs for your search admin DB, I recommend you check out this script :)
    Just delete your search service application (assuming you have just started).

  • Crawl ifs using ultrasearch with results via portal

    I would like to be able to crawl iFS and have results via Portal.
    The only thing I can find concerning making Ultra Search crawl iFS is this, http://otn.oracle.com/products/ultrasearch/htdocs/FAQ.html, which has the statement "you can crawl Oracle Files with Ultra Search through HTTP - it is just another web data source - or database crawling. However, only public documents can be crawled through HTTP."
    So I set up a web data source using the URL of the iFS login page.
    It does not work. Does anyone know how to do this? Is there a document I am missing?
    Thanks for any ideas.

    You are correct. This is what I found out:
    "Crawling of iFS is not supported in ultrasearch 9.0.2. It is a new feature of 9.0.4"
    HOWEVER:
    "ultrasearch 9.0.4 only runs with iAS 9.0.4 so you would have to upgrade both. According to my records these products have not been released yet and are due sometime later in the year."

  • ClassNotFoundException when accessing a database through Java using ASP

    I am trying to access the database through ASP via Java.
    The database is set up as a system database.
    The Java class works fine if I run it stand-alone, but as soon as I run it through ASP it does not get past the driver line. Is there anything in particular I need to do?
    The Java, the ASP, and the output look as follows.
    The Java code looks like this:
    import java.sql.*;
    public class testDB {
        private Connection connection;
        private Statement statement;

        public String getDriver() {
            String x = "start of program ";
            try {
                // load the JDBC-ODBC bridge driver
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                x += " Got Driver";
                connection = DriverManager.getConnection("jdbc:odbc:database", "sunny", "jassal");
                x += " After connection statement";
                statement = connection.createStatement();
                x += " After statement statement";
                statement.execute("Insert into name (name, age) values ('hello', 33)");
                x += " after insert";
                connection.close();
            } catch (Exception sqlex) {
                x += sqlex.getMessage() + " " + sqlex.toString() + " " + sqlex.getLocalizedMessage();
            }
            return x;
        }
    }
    and the asp looks like this:
    <html>
    <%
         set javaobject = GetObject("java:testDB")
         response.write "after getting object"
         response.write javaobject.getDriver()
         set javaobject = nothing
    %>
    </html>
    after getting objectstart of program sun/jdbc/odbc/JdbcOdbcDriver java.lang.ClassNotFoundException: sun/jdbc/odbc/JdbcOdbcDriver sun/jdbc/odbc/JdbcOdbcDriver

    What would I set the classpath to? I am sorry, I am new to this, but how would I be able to check the CLASSPATH, or even how do I set the CLASSPATH? To which directory would I need to set it?
    I know I can set the classpath in DOS with set CLASSPATH=
    but I don't know how to set it so it would read the drivers.
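    One hedged way to see what the embedded JVM is actually using, assuming you can run a small Java class in that environment, is to print java.class.path and try loading the driver by its dotted name; the slashes in the error output above suggest the name is arriving as a resource path rather than a class name.

```java
// Sketch: report the JVM's effective classpath and whether a given class
// can be loaded. The driver class name checked in main() is the one from
// the thread above; on modern JDKs the JDBC-ODBC bridge no longer exists.
public class ClasspathCheck {
    public static boolean canLoad(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("classpath = " + System.getProperty("java.class.path"));
        System.out.println("driver loadable: " + canLoad("sun.jdbc.odbc.JdbcOdbcDriver"));
    }
}
```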

  • PeopleTools 8.49 PeopleSoft Database Cannot Connect with MS SQL Server 2005

    Folks,
    Hello. I am working on PeopleSoft PIA. My system is Windows Server 2003, MS SQL Server Express 2005 Standard, PeopleTools 8.49. I am setting up the PeopleSoft database with \PsMpDbInstall\setup.exe and got the following errors:
    Error: Setup fails to connect to SQL Server. The default SQL Server settings do not allow remote connections.
    Then I typed "sqlcmd" in the Command Prompt and got the same error:
    Named Pipes Provider: Could not open a connection to SQL Server. MS SQL Native Client: the default SQL Server settings do not allow remote connections.
    In order to fix the SQL Server 2005 remote connection problem, I have done the following 3 steps:
    Step 1: In SQL Server Surface Area Configuration, enable local and remote connections, TCP/IP and Named Pipes, and set the SQL Server Browser startup type to "automatic" with status "running".
    Step 2: Create exceptions for SQL Server 2005 and the SQL Server Browser service in Windows Firewall.
    Step 3: Add the default SQL Server port 1433 to Windows Firewall.
    But the above 3 steps did not resolve the SQL Server Express 2005 remote connection problem. Can anyone help to resolve this?
    Thanks.

    Folks,
    Management Studio can connect to SQL Server 2005 successfully.
    I have turned off the Windows Server 2003 firewall completely, but "sqlcmd" in the Command Prompt still cannot connect to SQL Server 2005 because remote connections are not allowed.
    Is the "does not allow remote connection" problem caused by Windows Firewall or by SQL Server 2005 itself?
    Thanks.

  • Using Oracle as an IP Address Managment Database

    Can an Oracle database be set up to document and manage IP addresses? Our software team built a database for my group to document addresses, but it does not perform any math features based on the subnet mask. Any suggestions on where to look for IP address database creation with an Oracle database?
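    As an illustration of the subnet math being asked about: the network address is the bitwise AND of the address and the mask. In an Oracle database this would presumably live in a PL/SQL function; it is shown here as a plain Java sketch with hypothetical names.

```java
// Sketch of IPv4 subnet arithmetic: parse dotted-quad strings into 32-bit
// ints, AND the address with the mask to get the network address, and
// format the result back as a dotted quad.
public class SubnetMath {
    // parse a dotted-quad IPv4 address into a 32-bit int
    public static int toInt(String ip) {
        String[] p = ip.split("\\.");
        return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
             | (Integer.parseInt(p[2]) << 8)  |  Integer.parseInt(p[3]);
    }

    public static String toDotted(int v) {
        return ((v >>> 24) & 255) + "." + ((v >>> 16) & 255) + "."
             + ((v >>> 8) & 255) + "." + (v & 255);
    }

    // network address = address AND mask
    public static String network(String ip, String mask) {
        return toDotted(toInt(ip) & toInt(mask));
    }
}
```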

    Dear Stefan
    (Please send the King of Iceland my regards)
    We have just undertaken a similar project - but in reverse - i.e. synchronising an LDAP server from Oracle.
    Our challenge was a little simpler in that we could use triggers to instigate an LDAP call on update/delete/insert.
    We originally tried using the Oracle JVM in conjunction with Netscape's LDAP JSDK, but this proved too flaky, and we have reverted to calling an extproc which in turn calls some Java code to update LDAP.
    Working the other way would be more complex and I'm not sure how you could get LDAP to invoke a command to update Oracle.
    However, if you managed the process from within Oracle you could maybe use LDAP groups to maintain a list of those users that were in sync, and those that required synchronising.
    Good luck
    Andrew

  • Issue with Oracle 10g database connectivity

    Hi,
    Oracle 10g Express Edition has been installed on my machine at C:\Oracle10g.
    When I try to connect to it from Toad, it works fine.
    Visual Studio 2008 has been installed on the machine at C:\Program Files (x86)\.
    The problem I am facing is that I am unable to connect to Oracle 10g from a VB.NET application. I am connecting using Oracle Provider for OLE DB, but the program goes directly to the exception block without connecting to the database.
    My OS is Windows 7, and I am thinking Oracle 10g Express does not completely support this OS. Please suggest how to resolve this issue and a compatible Oracle DB for the same.

    Hi,
    Try uninstalling VS and reinstalling it at a path that does not include brackets. Have a look at my thread:
    Re: Database engine setup for 10.2.0.5
    Thanks,
    Jignesh

  • The SunMC 4.0 database didn't start on Solaris 10 x86

    Hello, happy new year. I am installing the three layers of SunMC 4.0 (server, agent and console) under Solaris 10 x86.
    But after setting the web server security key, the web server port selection, the SNMP port, etc., the installation fails to start the DB used by SunMC 4.0.
    This is the log:
    The system memory to be compared has mapped to : 512
    Skipping the Disable SNMP panel as a port other than 161 is specified as the agent port
    The web server organization is given as : elektra
    The web server location is given as : unefon
    The Setup script will be executed as : /var/opt/SUNWsymon/install/guibased-setup.1199486648293.sh
    SYMON_JAVAHOME is set to: /usr/java
    JDK version is: "1.5.0_12-b04"
    This script will help you to setup Sun Management Center 4.0.
    Following layer[s] are installed:SERVER,AGENT,CONSOLE
    None of the layers is setup.
    Following layer[s] will get setup: SERVER,AGENT,CONSOLE.
    Database will be setup.
    Following Addon[s] are installed:
    Advanced System Monitoring,Service Availability Manager,Performance Reporting Manager,Solaris Container Manager,System Reliability Manager,Generic X86/X64 Config Reader.
    Checking memory available...
    Configuring Sun Management Center DB...
    Initializing SunMC database.
    Successfully enabled service sunmc-database
    Successfully disabled service sunmc-database
    Initiating setup for Sun Management Center Server Component.
    This part of the setup process does the Sun Management Center Server Component setup.
    Creating the group esadm that contains Sun Management Center Admin Users.
    Creating the group esdomadm that contains Sun Management Center Domain Admin Users.
    Creating the group esops that contains Sun Management Center Operator Users.
    You need to setup a user as a Sun Management Center administrator.
    This person will be added to the esadm and esdomadm groups.
    You must enter a valid username: : root
    Generating server security keys.
    Setting up web server...
    An encrypted security key is needed for the Sun Management Center web server.
    The key is generated based on the organization and location you provide.
    Enter the name of your organization : : elektra
    Enter the geographical location of this host : : unefon
    Configure Sun Management Center web server port. Default port is 8080
    Press RETURN to force default port.
    Enter port you would like to use [ 1100 to 65535 ]: : 8080
    The sunmcweb web application has been successfully deployed.
    Using /var/opt/SUNWsymon/cfg/console-tools.cfg
    /var/opt/SUNWsymon/cfg/tools-extension-j.x updated.
    Starting Java Webstart based Sun Management Center
    Java Console setup. This will take 2 minutes. Please wait...
    Completing Sun Management Center Server Component setup.
    Initiating setup for Sun Management Center Agent Component.
    This part of the setup process does the Sun Management Center Agent Component setup.
    Copying snmpd.conf file into /var/opt/SUNWsymon/cfg
    Server component also installed locally.
    Using this machine as the Sun Management Center server.
    WARNING It appears that agent.snmpPort 161 is already in use.
    Sun Management Center 4.0 agent may not be able to run due to this conflict.
    There are two ways to correct this conflict:
    1. Reconfigure the port that Sun Management Center 4.0 uses.
    2. Stop the process that is using the port.
    Press RETURN to force default port.
    Enter port you would like to use [ 1100 to 65535 ]: : 1161
    Updating /var/opt/SUNWsymon/cfg/domain-config.x with new port number.
    Generating agent security keys.
    PKCS11 Utilities package(SUNWcsl) was found.
    Encrypted SNMP Communication is supported.
    Setup of Agent component is successful.
    **Starting Sun Management Center database setup...**
    **verifyDatabaseDown: instance is not executing**
    **Database consistency information missing.**
    **Configuring database initialization parameter file**
    **Stopping metadata component**
    **Stopping cfgserver component**
    **Stopping topology component**
    **Stopping event component**
    **Stopping grouping service**
    **Stopping trap component**
    **Stopping java server**
    **Stopping webserver**
    **Stopping agent component**
    **Stopping platform component**
    **Failed to enable service sunmc-database**
    **Database setup failed : db-start failed**
    **Updating registry...**
    **As database is not setup, Marking server setup as failed in Registry.**
    **None of the base layers are setup.**
    **No Addon is setup.**
    **Following Addons are not yet setup: Advanced System Monitoring,Service Availability Manager,Performance Reporting Manager,Solaris Container Manager,System Reliability Manager,Generic X86/X64 Config Reader**
    **Could not finish requested task.**
    **stty: : Invalid argument**
    **stty: : Invalid argument**
    **stty: : Invalid argument**
    **stty: : Invalid argument**
    **stty: : Invalid argument**
    **Stopping agent component**
    **Successfully disabled service sunmc-agent**
    **Stopping platform component**
    **Successfully disabled service sunmc-platform**
    **verifyDatabaseDown: instance is not executing**
    **Failed to enable service sunmc-database**
    **Database setup failed : db-start failed**
    **Updating registry...**
    **As database is not setup, Marking server setup as failed in Registry.**
    **None of the base layers are setup.**
    **No Addon is setup.**
    **Following Addons are not yet setup: Advanced System Monitoring,Service Availability Manager,Performance Reporting Manager,Solaris Container Manager,System Reliability Manager,Generic X86/X64 Config Reader**
    **Could not finish requested task.**
    The layout of my filesystems is:
    root@petersteiner install $ df -h
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0d0s0 4.2G 3.5G 721M 84% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 780M 856K 779M 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /usr/lib/libc/libc_hwcap1.so.1
    4.2G 3.5G 721M 84% /lib/libc.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 780M 396K 779M 1% /tmp
    swap 779M 36K 779M 1% /var/run
    /dev/dsk/c0d0s7 10G 10M 9.9G 1% /export/home
    I hope that someone can help me, please!

    ... and ... before running es-inst you have to add the PostgreSQL packages from the Solaris Media (5.10 U5 in my case)
    Step order again:
    - kill all postmaster processes (PostgreSQL Procs)
    - if applicable svcadm disable all sunmc services
    - /opt/SUNWsymon/sbin/es-uninst -X
    - remove all SUNWpostgr* and SUNWpg* packages
    - install these Packages - to be downloaded from http://www.sunfreeware.com/indexsparc10.html:
    libgcc-3.4.6-sol10-sparc-local.gz , zlib-1.2.3-sol10-sparc-local.gz , readline-5.2-sol10-sparc-local.gz
    or - the x64 respectively
    - edit /etc/project so that the "default" project has this line: default:3::::project.max-shm-memory=(priv,2684354456,deny)
    (the value has to be calculated as one third of your physical memory)
    - add the original postgreSQL packages from the Solaris install media :
    SUNWpostgr-libs
    SUNWpostgr
    SUNWpostgr-contrib
    SUNWpostgr-devel
    SUNWpostgr-server-data
    SUNWpostgr-server
    SUNWpostgr-jdbc
    SUNWpostgr-pl
    SUNWpostgr-tcl
    SUNWpostgr-docs
    - run es-inst again from your install image path
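    The one-third-of-physical-memory value in the /etc/project step above can be computed like this (an illustrative sketch; the 2684354456 in the example line is the value from the original post, not a value this sketch derives):

```java
// Sketch: compute the project.max-shm-memory value (one third of physical
// RAM, in bytes, per the note in the steps above) and format the
// /etc/project line in the shape shown there.
public class ShmCalc {
    public static long maxShm(long physicalBytes) {
        return physicalBytes / 3;
    }

    public static String projectLine(long physicalBytes) {
        return "default:3::::project.max-shm-memory=(priv," + maxShm(physicalBytes) + ",deny)";
    }
}
```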
