Table Source Index

I realize this may be a fundamentals question, but I'm new to the SES world...
I am attempting to index a table stored in an Oracle Database. I can get SES to do the initial analysis...but no records get indexed. I am sure there is something simple I have missed...any input would be greatly appreciated.

Not really enough to go on there -
Are you using a table crawl or a database crawl?
Have you made sure that your Recrawl Policy (Schedule -> Edit) is set to "Process All Documents"?
Are there any errors in the crawler log file?

Similar Messages

  • Need help with date range searches for Table Sources

    Hi all,
    I need help, please. I am trying to satisfy a Level 1 client requirement for the ability to search for records in crawled table sources by a date and/or date range. I have performed the following steps, and did not get accurate results from an Advanced search by date. Please help me understand what I am doing wrong, and/or if there is a way to define a date search attribute without creating a LOV for a date column. (My tables have 500,000 rows.)
    I am using SES 10.1.8.3 on Windows 32.
    My Oracle 10g Spatial Table is called REPORTS and this table has the following columns:
    TRACKNUM Varchar2
    TITLE Varchar2
    SUMMARY CLOB
    SYMBOLCODE Varchar2
    Timestamp Date
    OBSDATE Date
    GEOM SDO_GEOMETRY
    I set up the REPORTS table source in SES, using TRACKNUM as the Primary Key (unique and not null) and SUMMARY as the CONTENT column. In the Table Column Mappings I mapped the TITLE column (type String) to the TITLE attribute.
    Under Global Settings > Search Attributes I defined a new Search Attribute (type Date) called DATE OCCURRED (DD-MON-YY).
    I then went back to the previously defined REPORTS source and added a new Table Column Mapping, mapping OBSDATE to the newly defined DATE OCCURRED (DD-MON-YY) search attribute.
    I then modified the Schedule for the REPORTS source Crawler Policy to “Process All Documents”.
    The schedule crawls and indexes the entire REPORTS table.
    On the SES Advanced Search page, I entered my search keyword, selected REPORTS as the Specific Source Group, selected All Match, and used the pick list to select the DATE OCCURRED (DD-MON-YY) attribute name with an operator of Greater than equal and a value of 01-JAN-07, then a second attribute of DATE OCCURRED (DD-MON-YY), Less than equal, 10-JAN-07.
    Search results gave me 38,000 documents, and the first 25 I looked at had dates NOT within the 01-JAN-07 / 10-JAN-07 range (e.g. OBSDATE = 24-MAR-07, 22-SEP-07, 02-FEB-08, etc.).
    And, none of the results I opened had ANY dates within the SUMMARY CLOB…in case that’s what was being found in the search.
    Can someone help me figure out how to allow my client to search for specific dated records in a db table using a single column for the date? This is a major requirement and they are anxiously awaiting my solution.
    Thanks, in advance….

    raford,
    Thanks very much for your reply. However, from what I've read in the SES Admin Document, (I think) the date format DD/MM/YYYY pertains only to searches on "file system" sources (e.g. Word, Excel, PowerPoint, PDF, etc.). We have 3 file system sources among our 25 total sources. The remaining 22 sources are all TABLE or DATABASE sources. The DBA here has done a great job getting the data standardized using the typical/default Oracle DATE type format in our TABLE sources (DD-MON-YY). Our tables have anywhere from 1500 rows to 2 million rows.
    I tested your theory that the dates we are entering are being changed to strings behind the scenes: on the Advanced Search page I searched using OBSDATE equals 01/02/2007, in an attempt to find data that I know for certain exists in the mapped OBSDATE table column as 01-FEB-07. My result set contained rows with an OBSDATE of 03-MAR-07 and none containing 01-FEB-07.
    Here is the big issue...in order for my client to fulfill his primary mission, one of the top 5 requirements is that he/she be able to find specific table rows that contain a specific date or range of dates.
    thanks very much!
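    As a quick sanity check (a hedged sketch, assuming plain SQL access to the REPORTS table; this is not an SES feature), the expected hit count for the range can be compared against the source data itself:
    -- Hypothetical sanity query: how many rows should 01-JAN-07 .. 10-JAN-07 actually return?
    SELECT COUNT(*)
      FROM reports
     WHERE obsdate >= DATE '2007-01-01'
       AND obsdate <  DATE '2007-01-11';  -- half-open range also covers any time-of-day component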

  • MS SQL Server 2014: Error inserting into Temp table with index and identity field

    In this thread, I mentioned a problem with SQL Server 2014:
    SQL Server 2014: Bug with IDENTITY INSERT ON
    The question was answered, it is a bug. To keep you informed on this issue, I open this discussion.
    Problem:
    The code below works perfectly fine on MS SQL Server 2008 R2 and MS SQL Server 2012, but gives an error every second time the proc is executed on MS SQL Server 2014. If I do not define any index on the temp table, the problem disappears. Defining the index after the insert does not help.
    SET NOCOUNT ON
    GO
    IF EXISTS (SELECT 1 FROM sys.procedures WHERE name = 'usp_Test') DROP PROC dbo.usp_Test;
    GO
    CREATE PROC dbo.usp_Test AS
    BEGIN
    SET NOCOUNT ON
    CREATE TABLE #Source(ID integer NOT NULL);
    INSERT INTO #Source VALUES (1), (2), (3);
    CREATE TABLE #Dest (ID integer IDENTITY(1,1) NOT NULL);
    CREATE INDEX #IDX_Dest ON #Dest (ID);
    PRINT 'Check if the insert might cause an identity crisis';
    SELECT 'Source' AS SourceTable, * FROM #Source;
    SELECT 'Destination' AS DestTable, * FROM #Dest;
    SET IDENTITY_INSERT #Dest ON;
    PRINT 'Do the insert';
    INSERT INTO #Dest (ID) SELECT ID FROM #Source;
    PRINT 'Insert ready';
    SET IDENTITY_INSERT #Dest OFF;
    SELECT * FROM #Dest;
    DROP TABLE #Source;
    DROP TABLE #Dest;
    END;
    GO
    PRINT 'First execution of the proc, everything OK';
    EXEC dbo.usp_Test;
    PRINT '';
    PRINT 'Second execution of the proc, the insert fails.';
    PRINT 'Removing the index #IDX_Dest causes the error to disappear.';
    EXEC dbo.usp_Test;
    GO
    DROP PROC dbo.usp_Test;
    GO
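    Since the access violation only shows up on the second execution, one hedged diagnostic idea (an assumption, not a workaround confirmed in this thread) is to re-run the repro while forcing a fresh compilation each time and see whether the behaviour changes:
    -- Hypothetical diagnostic only: force recompilation on each call.
    -- If the AV no longer reproduces, cached plans/temp objects are likely involved.
    EXEC dbo.usp_Test WITH RECOMPILE;
    GO
    EXEC dbo.usp_Test WITH RECOMPILE;
    GO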

    There is some progress. Communication from a former Microsoft employee tells us this:
    Shivendra Vishal
    Engineer at Microsoft
    I am no longer with MS, and I do not have code access; however, from the public symbols I could make out the following:
    sqlmin!SetidentI2I4+0x1f3:
    000007fe`f4d865d3 488b10 mov rdx,qword ptr [rax] ds:00000000`00000000=????????????????
    ExceptionAddress: 000007fef4d865d3 (sqlmin!SetidentI2I4+0x00000000000001f3)
    ExceptionCode: c0000005 (Access violation)
    ExceptionFlags: 00000000
    NumberParameters: 2
    Parameter[0]: 0000000000000000
    Parameter[1]: 0000000000000000
    Attempt to read from address 0000000000000000
    This is a read AV, and from the registers it is clear that we were trying to move the value at the location pointed to by the qword in register rax, which is not valid:
    rax=0000000000000000 rbx=0000000000000038 rcx=0000000000001030
    rdx=0000000000000006 rsi=00000001f55def98 rdi=00000000106fd070
    rip=000007fef4d865d3 rsp=00000000106fcf40 rbp=00000000106fcfe9
    r8=0000000000000000 r9=00000001f55def60 r10=00000001f55defa0
    r11=00000000106fcd20 r12=0000000000000000 r13=0000000000000002
    r14=00000001f49c3860 r15=00000001f58c0040
    iopl=0 nv up ei pl nz na po nc
    cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
    The stack is:
    # Child-SP RetAddr Call Site
    00 00000000`106fcf40 000007fe`f30c1437 sqlmin!SetidentI2I4+0x1f3
    01 00000000`106fd050 000007fe`f474e7ce sqlTsEs!CEsExec::GeneralEval4+0xe7
    02 00000000`106fd120 000007fe`f470e6ef sqlmin!CQScanUpdateNew::GetRow+0x43d
    03 00000000`106fd1d0 000007fe`f08ff517 sqlmin!CQueryScan::GetRow+0x81
    04 00000000`106fd200 000007fe`f091cebe sqllang!CXStmtQuery::ErsqExecuteQuery+0x36d
    05 00000000`106fd390 000007fe`f091ccb9 sqllang!CXStmtDML::XretDMLExecute+0x2ee
    06 00000000`106fd480 000007fe`f08fa058 sqllang!CXStmtDML::XretExecute+0xad
    07 00000000`106fd4b0 000007fe`f08fb66b sqllang!CMsqlExecContext::ExecuteStmts<1,1>+0x427
    08 00000000`106fd5f0 000007fe`f08fac2e sqllang!CMsqlExecContext::FExecute+0xa33
    09 00000000`106fd7e0 000007fe`f152cfaa sqllang!CSQLSource::Execute+0x86c
    0a 00000000`106fd9b0 000007fe`f152c9e8 sqllang!CStmtExecProc::XretLocalExec+0x25a
    0b 00000000`106fda30 000007fe`f152a1d8 sqllang!CStmtExecProc::XretExecExecute+0x4e8
    0c 00000000`106fe1e0 000007fe`f08fa058 sqllang!CXStmtExecProc::XretExecute+0x38
    0d 00000000`106fe220 000007fe`f08fb66b sqllang!CMsqlExecContext::ExecuteStmts<1,1>+0x427
    0e 00000000`106fe360 000007fe`f08fac2e sqllang!CMsqlExecContext::FExecute+0xa33
    0f 00000000`106fe550 000007fe`f0902267 sqllang!CSQLSource::Execute+0x86c
    10 00000000`106fe720 000007fe`f0909087 sqllang!process_request+0xa57
    11 00000000`106feee0 000007fe`f2bf49d0 sqllang!process_commands+0x4a3
    12 00000000`106ff200 000007fe`f2bf47b4 sqldk!SOS_Task::Param::Execute+0x21e
    13 00000000`106ff800 000007fe`f2bf45b6 sqldk!SOS_Scheduler::RunTask+0xa8
    14 00000000`106ff870 000007fe`f2c136ff sqldk!SOS_Scheduler::ProcessTasks+0x279
    15 00000000`106ff8f0 000007fe`f2c138f0 sqldk!SchedulerManager::WorkerEntryPoint+0x24c
    16 00000000`106ff990 000007fe`f2c13246 sqldk!SystemThread::RunWorker+0x8f
    17 00000000`106ff9c0 000007fe`f2c13558 sqldk!SystemThreadDispatcher::ProcessWorker+0x3ab
    18 00000000`106ffa70 00000000`775d59ed sqldk!SchedulerManager::ThreadEntryPoint+0x226
    19 00000000`106ffb10 00000000`7780c541 kernel32!BaseThreadInitThunk+0xd
    1a 00000000`106ffb40 00000000`00000000 ntdll!RtlUserThreadStart+0x21
    Unassembling the function:
    000007fe`f4d8658e 4c8b10 mov r10,qword ptr [rax]
    000007fe`f4d86591 4533e4 xor r12d,r12d
    000007fe`f4d86594 410fb7d5 movzx edx,r13w
    000007fe`f4d86598 4533c9 xor r9d,r9d
    000007fe`f4d8659b 4533c0 xor r8d,r8d
    000007fe`f4d8659e 488bc8 mov rcx,rax
    000007fe`f4d865a1 4489642420 mov dword ptr [rsp+20h],r12d
    000007fe`f4d865a6 41ff5230 call qword ptr [r10+30h]
    000007fe`f4d865aa 8b5597 mov edx,dword ptr [rbp-69h]
    000007fe`f4d865ad 4c8b10 mov r10,qword ptr [rax]
    000007fe`f4d865b0 4489642438 mov dword ptr [rsp+38h],r12d
    000007fe`f4d865b5 4489642430 mov dword ptr [rsp+30h],r12d
    000007fe`f4d865ba 458d442401 lea r8d,[r12+1]
    000007fe`f4d865bf 4533c9 xor r9d,r9d
    000007fe`f4d865c2 488bc8 mov rcx,rax
    000007fe`f4d865c5 c644242801 mov byte ptr [rsp+28h],1
    000007fe`f4d865ca 4488642420 mov byte ptr [rsp+20h],r12b
    000007fe`f4d865cf 41ff5250 call qword ptr [r10+50h]
    000007fe`f4d865d3 488b10 mov rdx,qword ptr [rax] <=================== AV happened over here
    000007fe`f4d865d6 488bc8 mov rcx,rax
    000007fe`f4d865d9 4c8bf0 mov r14,rax
    000007fe`f4d865dc ff5268 call qword ptr [rdx+68h]
    000007fe`f4d865df 488d55e7 lea rdx,[rbp-19h]
    000007fe`f4d865e3 4c8b00 mov r8,qword ptr [rax]
    000007fe`f4d865e6 488bc8 mov rcx,rax
    000007fe`f4d865e9 41ff5010 call qword ptr [r8+10h]
    000007fe`f4d865ed f6450a04 test byte ptr [rbp+0Ah],4
    I remember a few issues with the scan2ident function; I am not sure if they have fixed it. However, it appears this was introduced in SQL 2014, and we need help from MS to get it resolved as it needs code analysis.
    It cannot be reproduced on versions of SQL other than SQL 2014.
    Also, interestingly, the value of rax is not visibly changed and it was successfully passed on to rcx, which has a valid value, so something must have changed the value of rax inside the function invoked via call qword ptr [r10+50h]. Looking at this, it appears to be a table of functions and we are calling the one at offset [50h]. So, bottom line: the call through qword ptr [r10+50h] must be changing something in rax, and debugging/analyzing this code might give us some more insight.

  • Import dump file with separate tablespaces for tables and indexes

    Hi,
    We have a schema whose tables are stored in one tablespace and whose indexes are stored in a different tablespace. We have taken a full schema export and now want to import it into another schema. I know that if the tablespace names differ we use the REMAP_TABLESPACE clause of the impdp command, but what about the separate tablespaces for tables and indexes? How would Oracle handle this?
    Regards,
    Abbasi

    Hi,
    I hope you created the same tablespace structure on the target side; if not, you have to use the remap_tablespace option to specify the different tablespaces. Oracle will take care of placing data and indexes. In any case, if an index is moved from one tablespace to another you have to rebuild it; statistics are gathered only once you rebuild it, otherwise you might face some performance issues.
    The better option is to keep the same tablespace structures in the source and target environments.
    Best regards,
    Rafi.
    http://rafioracledba.blogspot.com
    Edited by: Rafi (Oracle DBA) on May 9, 2011 7:07 AM
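    For illustration only, a hedged command sketch (the directory, dump file, schema and tablespace names are all made up): REMAP_TABLESPACE can simply be specified once per tablespace, so the data and index tablespaces are remapped independently in one impdp call:
    impdp system DIRECTORY=dp_dir DUMPFILE=schema_full.dmp \
          REMAP_SCHEMA=src_user:tgt_user \
          REMAP_TABLESPACE=src_data_ts:tgt_data_ts \
          REMAP_TABLESPACE=src_idx_ts:tgt_idx_ts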

  • Logical level for logical fact table sources

    It is clear that for fact aggregates we should use the Content tab of the Logical Table Source dialog to assign the correct logical level to each dimension.
    The question is: is it mandatory to assign the logical level for each dimension even for non-aggregate fact tables (where it would normally be set to the most detailed level of each dimension)? Is there any known issue if the logical levels in the Content tab are not set?
    The reason I'm asking is a strange bug I have (I'm not going to discuss it here), for which the only workaround seems to be NOT setting the logical levels (on the Content tab) for logical fact table sources.
    thank you !

    If levels are not set, they are by default treated as the lowest level, so it should not matter whether you set them or not.
    Generally we set them explicitly for facts when we are using aggregate tables.
    Your current issue might be a case-by-case thing; I would suggest checking for an implicit fact column, any table mapped to the source to force a join, etc.
    Mark if this helps, and let me know how it goes.
    Edited by: Srini VEERAVALLI on Feb 5, 2013 8:33 AM
    Any updates on this?
    Edited by: Srini VEERAVALLI on Feb 14, 2013 9:09 AM

  • How to get all the column values from a table source

    Hi,
    I have created a table source on an employee table and some customized attributes using Global Settings > Search Attributes.
    These customized attributes are mapped to the table columns through attribute mapping on the Source tab.
    In the doOracleSearch method I passed, as the last parameter, an Integer array of the customized attribute IDs.
    After executing doOracleSearch I get results, but the customized attributes are not returned.
    The getCustomAttributes method returns null instead of the values.
    Please refer to the following code for more info:
    Integer[] fetchAttr = new Integer[2];
    fetchAttr[0] = new Integer(137);
    fetchAttr[1] = new Integer(138);
    result = stub.doOracleSearch("BTM",
                                 new Integer(1),
                                 new Integer(10),
                                 Boolean.TRUE,
                                 Boolean.TRUE,
                                 group,
                                 "en",
                                 "en",
                                 Boolean.TRUE,
                                 null,
                                 null,
                                 fetchAttr);
    ResultElement[] resElements = result.getResultElements();
    for (int i = 0; i < resElements.length; i++) {
        System.out.println("Title : " + resElements[i].getTitle());
        System.out.println("Snippet : " + resElements[i].getSnippet());
        System.out.println("URL : " + resElements[i].getUrl());
        // getCustomAttributes() is the call that returns null here
        System.out.println("non default : " + resElements[i].getCustomAttributes());
    }

    Confirm the attributes you are asking SES for match those on the data source being searched. One thing to try is to simply tell SES to return all custom attributes for your search. Here is a snippet to build a list of all attribute IDs and pass them to your search...
    // Create the search service stub and set the SOAP URL
    OracleSearchService searchService = new OracleSearchService();
    searchService.setSoapURL("http://myserver:7777/search/query/OracleSearch");
    // Set attributes to fetch (all of them)
    Attribute[] attributesAll = searchService.getAllAttributes("en");
    ArrayList<Integer> attributeIds = new ArrayList<Integer>();
    for (Attribute a : attributesAll) {
        attributeIds.add(a.getId());
    }
    Integer attributeIdArrayAll[] = new Integer[attributeIds.size()];
    attributeIdArrayAll = attributeIds.toArray(attributeIdArrayAll);
    // Query
    OracleSearchResult result = searchService.doOracleSearch("some text", ..........[other params], attributeIdArrayAll);
    // Print out results
    ResultElement[] resElements = result.getResultElements();
    for (int i = 0; i < resElements.length; i++) {
        // Get document
        ResultElement doc = resElements[i];
        // Print title
        System.out.println("Title: " + doc.getTitle());
        // Print custom attributes
        CustomAttribute[] attributes = doc.getCustomAttributes();
        for (int j = 0; j < attributes.length; j++) {
            CustomAttribute attr = attributes[j];
            System.out.println("[Custom Attribute] " + attr.getName() + ": " + attr.getValue());
        }
    }
    Hope this helps.

  • DB02 view is empty for Table and Index analysis (DB2 9.7) after system copy

    Dear All,
    I did the quality refresh by the system copy export/import method. ECC6 on HP-UX, DB2 9.7.
    After the import, the RUNSTATS status in DB02 for Table and Index analysis was empty and all values showed '-1', even though:
    a) all standard background jobs are scheduled in SM36
    b) automatic runstats is enabled in the DB2 parameters
    c) REORGCHK is scheduled periodically from DB13 and has already run twice
    d) 'reorgchk update statistics on table all' was also run at the DB2 level.
    But the RUNSTATS status in DB02 is not getting updated. It's empty.
    Please suggest.
    Regards
    Vinay

    Hi Deepak,
    Yes, that is possible (but only with an offline backup). But for the new features like reclaimable tablespaces (to lower the high water mark) it's better to export/import with a system copy.
    Also, with a system copy you can use index compression.
    After backup and restore you can also have reclaimable tablespaces, but you have to create new tablespaces and then work with db6conv and online table move to move each tablespace online to the new one.
    Best regards,
    Joachim
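    Coming back to the original question, a hedged check (the schema and table name below are placeholders): run RUNSTATS manually on one table and then refresh DB02, which shows whether the statistics are really missing or only their display in DB02 is stale:
    db2 "RUNSTATS ON TABLE sapsr3.mytable WITH DISTRIBUTION AND DETAILED INDEXES ALL"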

  • How to get return type as Table of Index by BINARY from Procedure using JDBC

    Hi,
    We have a stored procedure which takes a VARCHAR as input and returns multiple records of a type declared as a table of records indexed by BINARY_INTEGER.
    We created the procedure within a package; its specification looks like this:
    CREATE OR REPLACE PACKAGE emp_pkid_pkg
    AS
      TYPE r_emp IS RECORD ( employe_profile_id NUMBER
                           , client_profile_id  VARCHAR2(240)
                           , email              VARCHAR2(240)
                           , terms_acp          VARCHAR2(1)
                           );
      TYPE tp_emp_profile IS TABLE OF r_emp INDEX BY BINARY_INTEGER;
      PROCEDURE er_employe_prov_profile ( e_inxid    employe_provision_instance.inxid%TYPE
                                        , e_emp_recs OUT tp_emp_profile
                                        );
    END emp_pkid_pkg;
    The package body contains the original business logic, like below.
    CREATE OR REPLACE PACKAGE BODY emp_pkid_pkg
    AS
      PROCEDURE pr_customer_prov_profile ( e_inxid    employe_provision_instance.inxid%TYPE
                                         , e_emp_recs OUT tp_customer_provision_profile
                                         )
      IS
        CURSOR c_emp_prov_instance ( c_guid employe_provision_instance.guid%TYPE )
        etc ...
    END emp_pkid_pkg;
    We can execute the script below from an Oracle client tool and get the response.
    DECLARE
      e_cust emp_pkid_pkg.tp_emp_profile;
    BEGIN
      emp_pkid_pkg.er_employe_prov_profile ( 'ef45t6543y98'
                                           , e_cust
                                           );
      FOR i IN e_cust.FIRST .. e_cust.LAST LOOP
        DBMS_OUTPUT.PUT_LINE ( e_cust(i).employe_profile_id
                            || '#' || e_cust(i).client_profile_id
                            || '#' || e_cust(i).email
                            || '#' || e_cust(i).terms_acp );
      END LOOP;
    END;
    We have a requirement to get the results from the procedure using a JDBC callable statement.
    We have tried to call the procedure via a JDBC CallableStatement, but it didn't work.
    We constructed it like the following; it throws the error "java.sql.SQLException: invalid column type: emp_pkid_pkg.tp_emp_profile".
    CallableStatement cs2 = con.prepareCall("{call emp_pkid_pkg.er_employe_prov_profile(?,?)}");
    cs2.registerOutParameter(2, OracleTypes.CURSOR, emp_pkid_pkg.tp_emp_profile);
    cs2.setString(1,empId);
    I am not sure whether I am doing this correctly; I tried with different types but still get the same error as above.
    Please point me to the correct approach.
    Thanks
    Edited by: 921689 on 18-Mar-2012 17:20

    >
    We have requirement to get the results from procedure usind JDBC callable statement call.
    >
    Can't be done - the reason has nothing to do with JDBC so you are in the wrong forum.
    Repost in the PL/SQL forum and I can give you an example of what you have to do.
    First the TYPEs you defined are PL/SQL types so can't be referenced outside PL/SQL; you need to define SQL types.
    Second you will need to use a procedure that returns a REF CURSOR or is a PIPELINED procedure. Since your procedure doesn't fall into either category you can't use it with JDBC to do what you want.
    If your query were a PIPELINED function then you could simply query it as if it were a table. I have a PIPELINED function named 'get_emp', so this works:
    select * from table(get_emp(30));
    Post in the PL/SQL forum and I can give you the code for the procedure. I'm not going to clutter up this forum with inappropriate material.
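    For reference, a minimal hedged sketch of the SQL-type / pipelined approach described above; the type names, the function name and the source query are made up for illustration and are not the poster's actual schema:
    -- SQL types (visible outside PL/SQL, unlike the package RECORD/TABLE types)
    CREATE TYPE emp_row AS OBJECT (
      employe_profile_id NUMBER,
      client_profile_id  VARCHAR2(240),
      email              VARCHAR2(240),
      terms_acp          VARCHAR2(1)
    );
    /
    CREATE TYPE emp_tab AS TABLE OF emp_row;
    /
    -- Pipelined function; the SELECT is a placeholder for the real business logic
    CREATE OR REPLACE FUNCTION get_emp_profiles (p_inxid IN VARCHAR2)
      RETURN emp_tab PIPELINED
    IS
    BEGIN
      FOR r IN (SELECT employe_profile_id, client_profile_id, email, terms_acp
                  FROM employe_provision_instance   -- hypothetical source query
                 WHERE inxid = p_inxid)
      LOOP
        PIPE ROW (emp_row(r.employe_profile_id, r.client_profile_id, r.email, r.terms_acp));
      END LOOP;
      RETURN;
    END;
    /
    -- From JDBC the function can then be queried like a table:
    -- SELECT * FROM TABLE(get_emp_profiles(?))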

  • Only a few rows in the table are indexed, without an error message

    When I executed the SQL command "create index ... indextype is ctxsys.context ...", only a few rows were indexed.
    The number of rows returned by "select * from table where contains(field,'keyword')" is much smaller than that returned by "select * from table where field like '%keyword%'", though it is very fast.
    I checked the $I token table and there were only a few indexed words. The create index command finishes in 3 minutes on a table with more than 2 million rows.
    The language setting is Chinese.
    Documentation about interMedia Text for the Chinese language is difficult to find. Can someone give me some pointers or help me solve this problem?
    Thanks in advance!
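    One hedged thing to check (an assumption, since the post does not say which lexer was used): with Chinese text, Oracle Text usually needs a Chinese lexer defined through CTX_DDL, otherwise the default lexer extracts very few tokens. The preference, index, table and column names below are placeholders:
    BEGIN
      ctx_ddl.create_preference('my_chinese_lexer', 'CHINESE_VGRAM_LEXER');
    END;
    /
    CREATE INDEX my_ctx_idx ON my_table (my_field)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('LEXER my_chinese_lexer');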


  • Logical Table source source query

    In OBIEE 10g we can have multiple logical table sources, and we can also add multiple tables into a single logical table source (logical table source source). I wanted to know the difference between doing so and having a separate logical table source for each source.
    Hope I made myself clear.
    Cheers
    Rem

    Hi Rem,
    When data is duplicated across different physical tables, add them as separate LTSs with column mappings pointing to the most economical sources. Specifying the most economical source covers the case where a single column exists in more than one table; based on the column mappings, the BI Server picks the LTSs that can satisfy the request with minimal joins.
    When the data is not duplicated, add the tables to a single LTS. When the physical sources are added to a single LTS, you have the flexibility of using outer joins. But specifying a join as an outer join makes the BI Server include that source even when it is not required, whereas with an inner join the sources are not included if they are not needed to satisfy the query.
    Hope this helps.
    Thanks!

  • Filter on the same logical table source on two logical tables is not working properly

    Hi,
    In the RPD, assume that I have a physical table named Employee_fact and have created an alias called D0 Employee Fact.
    In the BMM I have two logical tables called Employee_fact_Type1 and Employee_fact_Type2.
    Each of these logical tables has the same logical table source - D0 Employee.
    But in each logical table source I have a filter in the WHERE clause, such as:
    for Employee_fact_Type1: employee_type = 'XXX'
    for Employee_fact_Type2: employee_type = 'YYY'
    These two facts are present in the same BMM and each of them are connected to two dimensions
    such as
    Dept_type1 <- Employee_fact_type_1 -> Tenure_type1
    Dept_type2 <- Employee_fact_type_2 -> Tenure_type2
    In an analysis, when I query the columns from Dept_type1 and Tenure_type1, the generated database query shows the condition employee_type='YYY' instead of employee_type='XXX'.
    But when I add columns from Dept_type1, Employee_fact_type_1 and Tenure_type1, it shows the proper filter.
    I checked the relationships, presentation layer for the sources of presentation tables and everything looks good.
    Can anybody tell me what goes wrong here?

    Did you set the content level as well?

  • Dynamically change the Priority Group of Logical Table Sources in OBIEE 11g

    Hi All,
    I have 2 Logical Table Sources (LTS 1 and LTS 2) for a Logical Table in the BMM Layer.
    Example: Logical Table : Sample
    LTS Source 1 : Sample 1(Priority Group Set to 1)
    LTS Source 2 : Sample 2(Priority Group Set to 0)
    I have set the Priority Group of Sample 1 LTS Source to 1 and Priority Group of Sample 2 LTS Source to 0.
    I need to dynamically change the Priority Group of the Sample 1 LTS to 0 if my role is DEVELOPER, where role is a column in the database.
    If my role is not equal to DEVELOPER then the Priority Group of the Sample 1 LTS will remain the same (1).
    Please suggest how I can achieve this.
    Thanks,
    Soukath Ali

    Hello Soukath Ali,
    did you find a way to dynamically change the Priority Group?
    thanks,
    Maria Teresa Marchetti

  • Problem: 1 physical table -- multiple logical table sources

    Hi,
    I'm quite new to BIEE and am setting up my repository.
    So I have a question about whether the following scenario is possible:
    Physical Layer: TABLE_A: COL_A, COL_B, COL_C
    TABLE_B: COL_D, COL_E, COL_F
    Join TABLE_A.COL_A = TABLE_B.COL_D
    In the Business Model I have a dimension table with TABLE_A as its data source, with field DIM1 (COL_B).
    The fact table (MEASURE) would have TABLE_B twice as a data source, with different WHERE clauses on COL_F and logical table columns (ATT1 and ATT2) both taking their value from COL_E.
    So far I have created everything, and the consistency check shows no errors or warnings, but when I create a report showing DIM1, ATT1, ATT2 I get an error in Answers: Incorrectly defined logical table source (for fact table MEASURE) does not contain mapping for [MEASURE.ATT1, MEASURE.ATT2].
    Isn't it possible to have one physical column used as multiple data sources?
    I know it works when I import the physical table twice ... but maybe there's a solution in the business model.
    Thanks
    chrissy

    Hi mengesh,
    that's what I also tried, but it always returns the same error.
    I know it would work if I imported the physical table twice or more, but that's not what I want to do, because in the end I would have 10 or more fields based on this one physical table. There's one field indicating what value is contained in the record, meaning:
    COL_F | COL_E
    1 | customer name
    2 | customer number
    3 | customer branche
    4 | salesman
    5 | date
    6 | report number
    etc.
    I don't think it's useful to import the physical table as many times as I need this field, so I want to split it in the business model.
    thanks
    chrissy

  • Max number of logical table sources

    Hi,
    I have one logical table (a fact table) based on 8 logical table sources. This is done to simulate partitioning, as my customer does not have partitioning set up at the database level.
    Anyway, my challenge is that a request fired in Answers does not seem to take all logical table sources into account when building the physical SQL.
    The missing logical table source (the one not included in the physical SQL) does not differ from the other logical table sources.
    Is there a limit on how many logical table sources BIEE can handle under one logical table?
    BIEE version: 10.1.3.4

    Alastair,
    thanks for replying.
    The logical table sources partition the data by year. The aggregation level is the same for all LTSs, the check for using them with other sources at this level is set, and I've tried to force a result from every LTS (not combined, but each LTS on its own). Forcing the results is done by using a dashboard prompt in Answers filtering on year.
    The whole LTS setup works in combination with Oracle Warehouse Builder. What I mean by this is the following: in OWB I split the data from one table into 8 different tables, each containing 3 years of data (except for the oldest data, where the number of years and the amount of data will grow over time).
    This is done by comparing the values of a year column with the current year. E.g. the most current table would contain data for the years 2010 - 2008, the next would contain data for 2007 - 2005, and so on.
    Now, as the years in the different tables will change, I decided to establish an equally changing counterpart in BIEE. Basically I defined some repository variables: actual_year, three_years_ago, six_years_ago, 9_years_ago, etc.
    The fragmentation content of the different LTS is defined like this:
    most current data: MyModel.Year.Year >= VALUEOF("3_years_ago")
    next table: VALUEOF("3_years_ago") > MyModel.Year.Year AND MyModel.Year.Year >= VALUEOF("6_years_ago")
    next table: VALUEOF("6_years_ago") > MyModel.Year.Year AND MyModel.Year.Year >= VALUEOF("9_years_ago")
    etc.
    The funny thing is that the missing table is the one containing years 13 - 15. All the other tables (e.g. 16 - 18 years and 19 - 21 years) are represented in the physical SQL. This particular table is not even represented in the physical SQL.
    I haven't yet tried to take one of the 'working' tables out to see if this helps (getting an idea if there is a limit on number of LTS per logical table).
    Any idea?
    Thanks
    regards
    Andy
