Endeca data source

Hello All,
We have one MDEX (INDX) built and three data sources (DS1, DS2, and DS3) configured in Studio based on source systems: DS1 corresponds to source system 1, DS2 to source system 2, and so on. Additionally, the default data source covers the entire MDEX without any filter.
In Studio, we have three pages, one for each source: page 1 uses DS1, page 2 uses DS2, and so on.
Now we want to add guided navigation for a few attributes that are common across the three data sources, so that a filter applied on one page is carried over to the other pages.
I understand we need to define parent-child relationships so that DS1, DS2, and DS3 are children of the default data source, and then add a second guided navigation component pointing to the default data source. This achieves the purpose, but with one drawback:
when on the first page, after applying a filter from the default data source's guided navigation, if we also apply a filter from the child's guided navigation (e.g., DS1), that filter gets carried over to the other pages as well. I want to avoid filters from DS1 being carried over to DS2.
Per the documentation, this is expected behavior. I want to know whether there is a way to keep filters from DS1 from being carried over to DS2 while the parent-child relationship is defined.
Thanks

Back when I was in the services group, we had a custom state manager called the "Selective Refinements State Manager" (SRSM) that would help with this kind of problem. This state manager is not distributed by Oracle, but I know there is a feature request for its functionality. You may be able to get it if you engage Oracle services or find a friend who has a copy of it. If you're interested in building your own custom state manager, I can give you some ideas for it.
1. Initialize the state manager with a list of attributes that are shared between the data sources. Depending on your needs, you can do this in a couple of ways. If the shared attributes are the same for all data sources, you just need one list. If you need a different list for each data source combination, you might configure them all in one file or integrate the lists into your data source configuration. The SRSM went a step further, querying the data sources on initialization to automatically find the attributes they shared.
2. Extend the DefaultMDEXStateManager, overriding handleStateMerge to use your configured list of attributes to determine whether each function in the mdexState should be merged into a given data source's queryState. I never got deep into the details, but I believe handleStateUpdate pushes ALL updated filters up to the parent (or ancestor), and handleStateMerge is then used by each individual data source to merge filters from the parent into its own state. That merge is where you have the opportunity to choose what gets carried over.
The initialization step would have to go in handleStateMerge as well (create the attribute list if it doesn't already exist). I think the SRSM allowed its internal attribute list to be reset when the data source was reset in the Control Panel; resetting matters if your data source introduces new shared attributes that weren't initialized.
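To make step 2 concrete, here is a rough, self-contained sketch of the merge decision. It is not the real Studio API: the QueryFunction stand-in, the method names, and the attribute names are placeholders I made up so the example compiles on its own. In a real custom state manager, this logic would sit inside your handleStateMerge override and operate on the actual MDEX state classes.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Stand-in for a Studio query function (e.g. a refinement filter). The real
// classes come from the Endeca Studio API; this one exists only so the sketch
// compiles on its own.
class QueryFunction {
    private final String attributeKey;

    QueryFunction(String attributeKey) {
        this.attributeKey = attributeKey;
    }

    String getAttributeKey() {
        return attributeKey;
    }
}

public class SelectiveMergeSketch {

    // The core decision from step 2: keep only the parent-state functions whose
    // attribute is on the shared list. A handleStateMerge override would apply
    // something like this before merging the parent (default) state into a
    // child data source's query state.
    static List<QueryFunction> selectSharedFunctions(List<QueryFunction> parentFunctions,
                                                     Set<String> sharedAttributes) {
        List<QueryFunction> toMerge = new ArrayList<QueryFunction>();
        for (QueryFunction fn : parentFunctions) {
            if (sharedAttributes.contains(fn.getAttributeKey())) {
                toMerge.add(fn);
            }
        }
        return toMerge;
    }

    public static void main(String[] args) {
        // Step 1's configured list: attributes shared by DS1/DS2/DS3.
        Set<String> shared = new HashSet<String>(Arrays.asList("Region", "Year"));

        // Filters currently sitting in the parent (default) state.
        List<QueryFunction> parentState = Arrays.asList(
                new QueryFunction("Region"),              // shared: should carry over
                new QueryFunction("DS1_LocalAttribute")); // DS1-only: should stay on page 1

        // Only the "Region" filter survives the merge.
        System.out.println(selectSharedFunctions(parentState, shared).size()); // prints 1
    }
}

The interesting part is just the loop; everything else is scaffolding so the example stands alone.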
Anyhow, that's an overview. If you did want to tackle this, you're sure to run into more subtle problems that I didn't mention here. I might be able to provide some guidance.
Dave

Similar Messages

  • How to use a web service as a data source for Forge or Endeca?

    Hi,
    Is there any way to use a web service as a data source in the pipeline?
    I have a requirement to get data from a web service and load it into Endeca. I need some ideas on how this can be achieved. Is it possible?

    Crawling is part of CAS, which, as you probably know, handles data ingest in Endeca. There are a bunch of OOTB crawlers available (file system based, XML crawlers, etc.), and there can be cases where you have to write your own; that's what I am suggesting here, because the OOTB XML crawl expects a specific format that isn't really spelled out in the documentation. Please refer to the CAS Developer's Guide for more information. CAS, Platform Services, and Tools and Frameworks should be running when you start off with Endeca; you can check that with
    ps -ef | grep java
    on your machine
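    If you just need something quick, one rough approach (nothing Endeca-specific here, and the endpoint and file name below are made up) is to call the web service yourself and dump the response into a flat XML or delimited file that a Forge record adapter or a custom CAS crawl can then pick up:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Rough sketch: pull data from a (hypothetical) web service over HTTP and
    // write it to a flat XML file that a downstream Endeca ingest can read.
    public class WebServiceDump {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/service/records"); // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // Read the whole response body.
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            in.close();

            // In practice you would parse the response and map its fields to Endeca
            // properties; here the raw payload is wrapped in one record as a placeholder.
            PrintWriter out = new PrintWriter("webservice_records.xml", "UTF-8");
            out.println("<records>");
            out.println("  <record>");
            out.println("    <payload><![CDATA[" + body + "]]></payload>");
            out.println("  </record>");
            out.println("</records>");
            out.close();
        }
    }
    From there it is a normal file-based ingest into Forge or CAS.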

  • Multiple data sources in endeca

    Hi,
    I have multiple data sources for Endeca, such as a web crawler, XML, database, txt, and PDF files.
    Can they all be configured through Developer Studio, or all through CAS, or some through CAS and some through Developer Studio?
    Please let me know. Thanks!

    You will most probably need both CAS and Developer Studio. Web crawls and document ingest have to be configured through CAS. Databases and txt files (if they are delimited) can be ingested through the pipeline.
    You can send the data from CAS (the record store) directly to Forge as input, OR you can send it through the pipeline if you need to modify it further.
    HTH,
    Pankaj.

  • Is it possible to determine which user queried data in Endeca against a particular data source?

    Hi,
    We have an application that uses multiple data sources in Endeca. Is it possible to determine:
    1. How many times a particular data source is accessed - I believe this is available in the performance metrics.
    2. Which users were responsible for the accesses counted in 1, above?
    Thanks

    Hi Matthew,
    DataFileHeaderAccess is one way to approach this; another could be to use the DataFinder API. If the file(s) of interest are in a search area of the DataFinder, you can access all descriptive information, such as the number of channels or channel names, without loading the file into the data portal.
    You can search for whole files, groups within a file, or even channels that match certain criteria (such as having the same name).
    The attached snippet searches for channels with a specific name prefix in file(s) with a given file name and loads them into the data portal.
    Greetings from sunny Aachen
    Stefan
    Option Explicit ' Forces the explicit declaration of all the variables in a script.
    ' Connect to the DataFinder
    Dim oMyDataFinder, oMyQuery, oMyResults, oMyConditions
    Set oMyDataFinder = Navigator.ConnectDataFinder("My DataFinder")
    ' Create a query to search for channels with a specific name prefix in a specific file
    Set oMyQuery = oMyDataFinder.CreateQuery(eAdvancedQuery)
    oMyQuery.ReturnType = eSearchChannel
    Set oMyConditions = oMyQuery.Conditions
    Call oMyConditions.Add(eSearchFile,"fileName","=","Example.tdm")
    Call oMyConditions.Add(eSearchChannel,"name","=","Noise*")
    Call oMyDataFinder.Search(oMyQuery)
    ' Load results
    Navigator.LoadData(oMyDataFinder.Results)

  • Adding data source-JDBC

    Hi guys,
    I have another problem adding a data source in EID. When I try to add a data source in EID as a JDBC source, I get an error that says "Could not establish a connection". Could you tell me step by step what I have to do when adding a data source in Endeca? My data is based on SQL Server 2008.

    The Studio Administration and Customization Guide includes the information on creating and managing data sources.
    Adding and editing data sources in the Data Source Library
    I don't know anything about the specific format of a connection URL for SQL 2008. That should be fairly standard, though.
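    For what it's worth, if you're using Microsoft's JDBC driver for SQL Server 2008, the connection settings generally look something like the following (the host, port, and database name are placeholders, not values from your environment):
    Driver class:   com.microsoft.sqlserver.jdbc.SQLServerDriver
    Connection URL: jdbc:sqlserver://myhost:1433;databaseName=MyDatabase
    The driver jar (e.g., sqljdbc4.jar) also needs to be available to the application, or the connection will fail regardless of what the URL looks like.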

  • Data Source error in Information Discovery

    I created a new data source, and when I try to look at a Data Explorer I see the following error:
    com.endeca.portal.data.DataSourceException: An error occurred while executing query: Could not send Message.
    I am able to view the data through a Results Table, but other components seem to return an error. The data source is defined and pointing at the correct port.

    Not answered, but closing the thread.

  • Data source filter in studio

    Is the data source filter still supported in Studio version 3.1? I don't see it working in my case when I put the filter in the Endeca Server connection. Please advise.

    See the answer at data domain filter in endeca studio.

  • Trouble with negative refinement filters and data source on OID 2.3

    Hi
    I'm having trouble using the negative refinement filters on a data source with Information Discovery 2.3
    Here's the data source configuration:
    "baseFunctions": [
        {
            "class": "com.endeca.portal.data.functions.NegativeRefinementFilter",
            "attributeKey": "COMPANY",
            "attributeValue": "PROMO"
        },
        {
            "class": "com.endeca.portal.data.functions.RecordFilter",
            "recordFilter": "OR(OID_RECORD_TYPE:RES,OID_RECORD_TYPE:DEF)"
        }
    ],
    "datastoreName": "datastorename",
    "name": "ABC",
    "port": "7770",
    "server": "XYZ"
    When I reach a Dashboard page where all the portlets are using this datasource, the record filter is applied as expected. However, for the negative refinement filter, while it does appear in the breadcrumb portlet, it has no effect on the data set. The guided navigation still shows the value as a possible dimension and I can see in my results table that the data was not removed by the negative refinement filter. I tried to restart Tomcat, with no effect.
    If I go into the guided navigation and negatively refine on this value on the COMPANY dimension, the data gets filtered out, and I get two negative refinements showing up in the breadcrumb for the same value (!)
    Is there something I'm missing to configure the negative refinement filter?
    Thank you

    Your configuration looks correct.
    I can think of two possible issues but we're really just shooting in the dark:
    1) "Rogue" whitespace.
    - If your value is not actually "PROMO" but is "PROMO ", then your filter would not exclude anything. It would also explain why you could then add a negative refinement for the "true" value. To validate this, use a browser with some debugging capabilities (e.g., Chrome), have your negative refinement applied by default, and then "manually apply" it in Studio.
    Then, open up your browser's debug view (in Chrome, right-click on the value you just added and click "Inspect Element".). Hopefully, the HTML will show slightly differently and you can see the whitespace in the manually added refinement.
    2) Would this happen to be a managed attribute? I believe managed attribute values can have display names and real names and the real name should be used for filtering purposes. I doubt this is the problem but it's the only other thing I could think of...
    Regards,
    Patrick Rafferty
    Branchbird

  • Error in Creation of BW data source

    Hi All,
    While creating a data source, I select the origin type Business Warehouse Cube, and in the RFC destination field I am able to select the BW system. The problem is that when I go to the next field, BW report, and try to pick the relevant BW report via F4, the system shows the error "System error in System", message no. CRM_MKTTG_DS_MISC033.
    In the second case:
    I took an existing BW data source and tried to recreate it manually, filling in all the fields (data source, RFC destination, BW report, business partner, and function module) with the values from the existing data source.
    In this case the system doesn't throw any error; it works fine when I create it manually.
    I am puzzled as to why the system behaves like this. Could you please throw some light on this issue?
    Thanks in advance,
    Vijay.

    Hi,
      See the link.
    http://help.sap.com/saphelp_crm50/helpdata/en/00/dc54384ac9a81be10000009b38f8cf/frameset.htm
    regards
    Srinu

  • Unable to capture the Data Source into a Transport Request

    Hi All,
    We have a product hierarchy and we are using the data source 4R_PRODH_D_LGEN_HIER for the hierarchy.
    Now we need to transport this structure to the quality environment, but we are not able to capture the data source 4R_PRODH_D_LGEN_HIER in a transport request.
    Whenever we activate the data source 4R_PRODH_D_LGEN_HIER, it asks for the package and the transport request number. If we give these details and save, the data source is not captured in the request; only the "Object Directory Entry" is captured.
    Can someone please guide me on how to capture the datasource under "Data Sources in BW" in a transport request?
    Regards,
    Sachin Dehey.

    Hi Sachin,
    A hierarchy datasource is not captured the same way attribute and text datasources are, so what you have done is correct.
    What is captured in the "Object Directory Entry" is correct, so go ahead with your transports; once the transport is done, check the hierarchy InfoPackage against the available OLTP hierarchies and load the data.
    Most importantly, first make sure that all master and transactional datasources are transported in R/3 from Dev to QA to PRD.
    In BW, datasources themselves are not transported; the transport of a datasource is done in R/3, and only its replica is transported in BW.
    So what you have done so far is correct; go ahead.
    When you attach a hierarchy datasource, it is captured only in the "Object Directory Entry".
    Regards,
    Vishnu.

  • Using a SQL data source and XML data source in the same template

    I am trying to develop a template for the Request for Quote report generated in Apps 11.5.10. I have loaded the data from the XML output into the template, but I am missing one field - I need the org_id from the po_headers table. Is it possible to use a SQL data source (i.e., "select org_id from po_headers_all where po_header_id = [insert header_id from xml data]...") in addition to the XML data source to populate the template at runtime? When you use the Insert > SQL functionality, is it static at the time the template is created, or does it call the database at runtime? I've looked through all the docs I could find, but this isn't clear.
    Thanks for any help or suggestions you may have.
    Rhonda

    Hi Pablo,
    That's a tough one... If you go custom with a data template, you will at least get support on the data template functionality, i.e., if you have a problem when you try to build one. You will not get support on the query inside the data template as you might have with the Oracle Report, though you could at least log a bug against development for a bad query.
    Eventually that Oracle Report will be converted by development anyway; there's an R12 project going on right now to switch the shipped Oracle Reports to data templates. At that point you'll be fully supported again, but:
    1. You have to have R12, and
    2. You'll need to wait for the patch.
    On reflection, if you are confident enough in the query, then Oracle will support you on its implementation within a data template. Going forward you may be able to swap out your data template and drop in the Oracle one without too much effort.
    Regards, Tim
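    For reference, a custom data template that runs the org_id query at runtime would look roughly like the sketch below. The parameter, query, and group names are made up for illustration; only the table and column come from the question above.
    <dataTemplate name="RFQ_ORG" version="1.0">
    <parameters>
    <parameter name="P_PO_HEADER_ID" dataType="number"/>
    </parameters>
    <dataQuery>
    <sqlStatement name="Q_ORG">
    <![CDATA[ select org_id from po_headers_all where po_header_id = :P_PO_HEADER_ID ]]>
    </sqlStatement>
    </dataQuery>
    <dataStructure>
    <group name="G_ORG" source="Q_ORG">
    <element name="ORG_ID" value="ORG_ID"/>
    </group>
    </dataStructure>
    </dataTemplate>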

  • "Open sql" report no data sourced defined in windows

    When I want to import data from a SQL database, the Open SQL window reports the error
    "There are no data sourced defined. Please create one to continue",
    but I have defined the ODBC data source using Windows's ODBC Administrator. Why the error?
    chuliang

    A couple of possibilities.
    First, are you licensed for the SQL interface? If not, you might get that message.
    Second, where did you create the ODBC connection: on your computer or on the server? It has to be on the server.
    Third, did you create the ODBC connection as a system DSN? It needs to be one.
    Fourth, what operating system? If it's Unix or AIX, there is a whole bunch more you need to do to get ODBC connections working; if it's Windows, it's pretty easy.
    Fifth, did you test the connection before trying it in Essbase?

  • Excel-2007 cannot connect to Oracle ODBC data source, Control Panel can.
    I cannot make an ODBC connection from Excel-2007 to Oracle work. I am an expert Excel and VBA user (since 1994) and I have frequently used Excel to access ODBC databases, including Oracle (I have done this with Excel-2003 both with worksheet queries and with VBA ADO-connection routines I have written). And even though in Excel-2007 a worksheet ODBC query is supposed to be easier to create than in previous versions of Excel, my connection fails. Any suggestions and all help are welcome and much appreciated.
    DETAILS
    1) What is my system? I am using Excel-2007 on Windows Vista x64 and Oracle server v.11g on my computer (all of this is on my computer; there are no network issues).
    2) Why use Excel with Oracle at all? I use Excel-2007 to access Oracle rather than Access-2007 (or any other application like TOAD, etc.) because I do engineering calculations with the data stored in Oracle. These calculations are easier done in Excel. (I suppose one alternative would be to use some SQL or Access to get the data from the database, store it as a plain-vanilla CSV file, open this file in Excel, do the math (the math involves complex optimisation algorithms), save the results as CSV, and then use some SQL or Access to put the data back into the database. However, this does not strike me as a quick or neat solution, and after all, Excel has been designed to access ODBC databases, so why not use it?)
    3) What do I do in Excel-2007 that won't work? I create an ODBC link to Oracle that does not work. In Excel-2007 this is straightforward:
    - define an ODBC connection (Data tab --> From Other Sources --> From Data Connection Wizard);
    - define a query on the worksheet -- that's it, this is all!
    I start with creating an ODBC connection:
    a) I choose an ODBC data source type: ODBC DSN
    b) Excel-2007 displays the list of the available ODBC data sources. I see my Oracle database name in it and I select it.
    c) Excel-2007 displays the Data Link Properties:
    - the "Provider" tab has a list of the OLE DB drivers with "Microsoft OLE DB Provider for ODBC Drivers" preselected. I keep this default selection.
    - the "Connection" tab has a connection string "DSN=<my database name>", which I keep; it also has fields for the user name and the password, which I fill with the correct credentials.
    - finally there is a "Test Connection" button, which when clicked produces the following error message:
    "Test connection failed because of an error in initializing provider. Unspecified error"
    4) Additional food for thought:
    a) In the above walk-through the only data I type are the user name and password; everything else is selected from lists offered by Excel-2007, hence any possibility of typos being the cause of the problem can safely be discarded.
    b) I can test the ODBC driver in the Control Panel and it shows that it can connect to the Oracle database:
    - in Control Panel --> Admin Tools --> Data Sources (ODBC), on the "User DSN" tab, I can see the list of the available ODBC data sources (the same list as in Excel-2007, point 3b above) with the name of my database in it;
    - selecting the name of my database from the list of sources and clicking the "Configure" button opens a tab with Data Source Name (same as in Excel-2007), TNS Service Name, and User ID. I enter <user name>/<password> and click the "Test Connection" button. A message "Connection successful" appears (just as a test, I enter incorrect user credentials and an "Unable to connect" message appears).
    BOTTOM LINE
    The procedure for using an ODBC connection from Excel is very simple; in the past I have created and used such connections numerous times with Excel-2003 and earlier on Win-XP and earlier. But now with Excel-2007 and Vista-x64 I cannot make it work.
    Also, testing an ODBC connection driver is really easy and simple to do in the Control Panel. There, testing the same ODBC connection, which fails in Excel-2007, succeeds.
    I am frustrated by the simplicity of the problem and yet the persistent error. I have now lost two full days in failed attempts to make this simple procedure work and in searching the internet for answers.
    All help is highly appreciated,
    Plamen

    Did you find the solution to your problem?
    If not, I think I may know.
    Excel 2007 is a 32-bit application.
    When installing 32-bit applications in a 64-bit environment, the "default" location is:
    C:\Program files (x86)\...
    Excel then launches from this location.
    However, when it connects to Oracle and passes the name of the calling program, Oracle attempts to "interpret" the value of (x86) as if the value within the parentheses were being passed as a reference. Of course, Oracle doesn't find anything, so the result is (),
    and then it cannot return the connection info back to the calling application.
    I corrected it by installing Excel in C:\Program Files\, and once launched from that location, it works the same as on the 32-bit machines.
    However, at my location, they are FORCING Excel to be installed in the (x86) location.
    What I'm trying to discover now is:
    Is it possible to flag Oracle to NOT process the embedded variables?
    Or, is it possible to assign a variable in Oracle such that x86 = "(x86)", so that the end result is viable?
    Have you had any luck with your installation?
    thanks,
    Paul

  • Refreshing the Data Source View in Analysis Services

    I have added columns to the SQL database table that is used as a dimension in an Analysis Services cube. The new columns will be used as additional property fields for the dimension. When I attempt to refresh the Data Source View so that the additional columns are present, I get the following error:
    System.Data
    Property not accessible because 'Parent Columns and Child Columns don't have type-matching columns'
    I have done nothing to the columns used for the parent or child, and the error message provides nothing to go on. Does anyone have any ideas on this?
    Gary

    Olga,
    Thanks for your response.  I will try and answer your questions
    1) I have not tried removing the columns yet.  I will try that this afternoon but have limited hope.  The two columns I added are simple text columns that will be used as attributes in the dimension.  I have made no change to the parent or child columns.
    2) The table I modified is the source table for a parent-child dimension.
    3) The reference to the "check list" does not take me to any kind of check list.
    4) The parent-child dimensions I am trying to modify have been in use for months, and the parent and child columns do have the same data types.
    5) I have also checked the data types between the dimension table and the fact table; they use the same data types (smallint).
    6) I have not made a collection for the parent key; it is a single column. The remainder of your last paragraph is not clear to me. Can you give me an example?
    I am fairly inexperienced with Analysis Services, so please talk slowly and use small words :-)
    Thanks again for your help!
    Gary

  • OSB: Cannot acquire data source error while using JCA DBAdapter in OSB

    Hi All,
    I've run into a 'Cannot acquire data source' error while using the JCA DBAdapter in OSB.
    The error info is as follows:
    The invocation resulted in an error: Invoke JCA outbound service failed with application error, exception: com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/DBAdapter1/RetrievePersonService [ RetrievePersonService_ptt::RetrievePersonServiceSelect(RetrievePersonServiceSelect_inputParameters,PersonTCollection) ] - WSIF JCA Execute of operation 'RetrievePersonServiceSelect' failed due to: Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/soademoDatabase].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.soademoDatabase'. Resolved 'jdbc'; remaining name 'soademoDatabase'.
    ; nested exception is:
    BINDING.JCA-11622
    Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    JNDI Name for the Database pool: eis/DB/soademoDatabase
    JNDI Name for the Data source: jdbc/soademoDatabase
    I created a basic DBAdapter in JDeveloper, got the xsd file, wsdl file, .jca file, and the TopLink mapping file, and imported them into the OSB project.
    Then I used the .jca file to generate a business service and tested it; the error described above occurred.
    The login info in RetrievePersonService-or-mappings.xml is:
    <login xsi:type="database-login">
    <platform-class>org.eclipse.persistence.platform.database.oracle.Oracle9Platform</platform-class>
    <user-name></user-name>
    <connection-url></connection-url>
    </login>
    The .jca file contents are as follows:
    <adapter-config name="RetrievePersonService" adapter="Database Adapter" wsdlLocation="RetrievePersonService.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
    <connection-factory location="eis/DB/soademoDatabase" UIConnectionName="Connection1" adapterRef=""/>
    <endpoint-interaction portType="RetrievePersonService_ptt" operation="RetrievePersonServiceSelect">
    <interaction-spec className="oracle.tip.adapter.db.DBReadInteractionSpec">
    <property name="DescriptorName" value="RetrievePersonService.PersonT"/>
    <property name="QueryName" value="RetrievePersonServiceSelect"/>
    <property name="MappingsMetaDataURL" value="RetrievePersonService-or-mappings.xml"/>
    <property name="ReturnSingleResultSet" value="false"/>
    <property name="GetActiveUnitOfWork" value="false"/>
    </interaction-spec>
    </endpoint-interaction>
    </adapter-config>
    RetrievePersonService_db.wsdl is as follows:
    <?xml version="1.0" encoding="UTF-8"?>
    <WL5G3N0:definitions name="RetrievePersonService-concrete" targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService" xmlns:WL5G3N0="http://schemas.xmlsoap.org/wsdl/" xmlns:WL5G3N1="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService" xmlns:WL5G3N2="http://schemas.xmlsoap.org/wsdl/soap/">
    <WL5G3N0:import location="RetrievePersonService.wsdl" namespace="http://xmlns.oracle.com/pcbpel/adapter/db/KnowledeMgmtSOAApplication/AdapterJDevProject/RetrievePersonService"/>
    <WL5G3N0:binding name="RetrievePersonService_ptt-binding" type="WL5G3N1:RetrievePersonService_ptt">
    <WL5G3N2:binding style="document" transport="http://www.bea.com/transport/2007/05/jca"/>
    <WL5G3N0:operation name="RetrievePersonServiceSelect">
    <WL5G3N2:operation soapAction="RetrievePersonServiceSelect"/>
    <WL5G3N0:input>
    <WL5G3N2:body use="literal"/>
    </WL5G3N0:input>
    <WL5G3N0:output>
    <WL5G3N2:body use="literal"/>
    </WL5G3N0:output>
    </WL5G3N0:operation>
    </WL5G3N0:binding>
    <WL5G3N0:service name="RetrievePersonService_ptt-bindingQSService">
    <WL5G3N0:port binding="WL5G3N1:RetrievePersonService_ptt-binding" name="RetrievePersonService_ptt-bindingQSPort">
    <WL5G3N2:address location="jca://eis/DB/soademoDatabase"/>
    </WL5G3N0:port>
    </WL5G3N0:service>
    </WL5G3N0:definitions>
    Any suggestion is appreciated.
    Thanks in advance!
    Edited by: user11262117 on Jan 26, 2011 5:28 PM

    Hi Anuj,
    Thanks for your reply!
    I found that the data source is registered on server soa_server1 as follows:
    Binding Name: jdbc.soademoDatabase
    Class: weblogic.jdbc.common.internal.RmiDataSource_1033_WLStub
    Hash Code: 80328036
    toString Results: ClusterableRemoteRef(8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1 [8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1/291])/291
    Binding Name: jdbc.SOADataSource
    Class: weblogic.jdbc.common.internal.RmiDataSource_1033_WLStub
    Hash Code: 92966755
    toString Results: ClusterableRemoteRef(8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1 [8348400613458600489S:10.2.1.143:[8001,8001,-1,-1,-1,-1,-1]:base_domain:soa_server1/285])/285
    I don't know how to determine which server the DBAdapter is targeted to.
    But I found the following information:
    Under Deployment -> DBAdapter -> Monitoring -> Outbound Connection Pools:
    Outbound Connection Pool    Server        State     Current Connections    Created Connections
    eis/DB/SOADemo              AdminServer   Running   1                      1
    eis/DB/SOADemo              soa_server1   Running   1                      1
    eis/DB/soademoDatabase      AdminServer   Running   1                      1
    eis/DB/soademoDatabase      soa_server1   Running   1                      1
    The DbAdapter is related to the following files:
    C:\Oracle\Middleware\home_11gR1\Oracle_SOA1\soa\connectors\DbAdapter.rar
    C:\Oracle\Middleware\home_11gR1\Oracle_SOA1\soa\DBPlan\Plan.xml
    I unzipped DbAdapter.rar, opened weblogic-ra.xml, and found that only one data source is registered:
    <?xml version="1.0"?>
    <weblogic-connector xmlns="http://www.bea.com/ns/weblogic/90">
    <enable-global-access-to-classes>true</enable-global-access-to-classes>
    <outbound-resource-adapter>
    <default-connection-properties>
    <pool-params>
    <initial-capacity>1</initial-capacity>
    <max-capacity>1000</max-capacity>
    </pool-params>
    <properties>
    <property>
    <name>usesNativeSequencing</name>
    <value>true</value>
    </property>
    <property>
    <name>sequencePreallocationSize</name>
    <value>50</value>
    </property>
    <property>
    <name>defaultNChar</name>
    <value>false</value>
    </property>
    <property>
    <name>usesBatchWriting</name>
    <value>true</value>
    </property>
    <property>
    <name>usesSkipLocking</name>
    <value>true</value>
    </property>
    </properties>
              </default-connection-properties>
    <connection-definition-group>
    <connection-factory-interface>javax.resource.cci.ConnectionFactory</connection-factory-interface>
    <connection-instance>
    <jndi-name>eis/DB/SOADemo</jndi-name>
              <connection-properties>
                   <properties>
                   <property>
                   <name>xADataSourceName</name>
                   <value>jdbc/SOADataSource</value>
                   </property>
                   <property>
                   <name>dataSourceName</name>
                   <value></value>
                   </property>
                   <property>
                   <name>platformClassName</name>
                   <value>org.eclipse.persistence.platform.database.Oracle10Platform</value>
                   </property>
                   </properties>
              </connection-properties>
    </connection-instance>
    </connection-definition-group>
    </outbound-resource-adapter>
    </weblogic-connector>
    Then I decided to use eis/DB/SOADemo for testing.
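    (I assume the alternative would be to add a second connection-instance for the jdbc/soademoDatabase data source, modelled on the existing eis/DB/SOADemo entry -- something like the following sketch, which I haven't verified:)
    <connection-instance>
    <jndi-name>eis/DB/soademoDatabase</jndi-name>
    <connection-properties>
    <properties>
    <property>
    <name>xADataSourceName</name>
    <value>jdbc/soademoDatabase</value>
    </property>
    </properties>
    </connection-properties>
    </connection-instance>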
    For the JDeveloper project, after I deployed it to the WebLogic server, it works fine.
    But the OSB project referencing the wsdl, jca, and mapping file from the JDeveloper project still gets the same error:
    BEA-380001: Invoke JCA outbound service failed with application error, exception:
    com.bea.wli.sb.transports.jca.JCATransportException: oracle.tip.adapter.sa.api.JCABindingException: oracle.tip.adapter.sa.impl.fw.ext.org.collaxa.thirdparty.apache.wsif.WSIFException: servicebus:/WSDL/DBAdapterTest/DBReader [ DBReader_ptt::DBReaderSelect(DBReaderSelect_inputParameters,PersonTCollection) ] - WSIF JCA Execute of operation 'DBReaderSelect' failed due to: Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/SOADataSource].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc'; remaining name 'SOADataSource'.
    ; nested exception is:
    BINDING.JCA-11622
    Could not create/access the TopLink Session.
    This session is used to connect to the datastore.
    Caused by Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.2.v20100323-r6872): org.eclipse.persistence.exceptions.ValidationException
    Exception Description: Cannot acquire data source [jdbc/SOADataSource].
    Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'jdbc.SOADataSource'. Resolved 'jdbc'; remaining name 'SOADataSource'.
    You may need to configure the connection settings in the deployment descriptor (i.e. DbAdapter.rar#META-INF/weblogic-ra.xml) and restart the server. This exception is considered not retriable, likely due to a modelling mistake.
    It almost drives me crazy!! :-(
    What's the purpose of 'weblogic-ra.xml' under the folder of 'C:\Oracle\Middleware\home_11gR1\Oracle_OSB1\lib\external\adapters\META-INF'?
