Lookup using indexed fields

I am doing a NamedCache.entrySet(Filter) call where I am trying to look up entries based on an indexed property. Is there a way to delegate this call to the backing CacheStore if the cache is configured to have one? Right now, if the data is not found in the cache, the call returns an empty set.
Is there an alternate way, or are there extra steps involved, to have the backing store searched automatically when nothing is found in the cache
(for an index-based search)?
Thank You
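
For reference, a minimal sketch of the kind of indexed filter query described above, assuming a hypothetical "orders" cache holding Order objects with a getStatus accessor (the cache name, class and accessor are illustrative only, not taken from the post):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;
import java.util.Set;

public class IndexedFilterQuery {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("orders");

        // Create an index on the getStatus property to speed up filter evaluation.
        cache.addIndex(new ReflectionExtractor("getStatus"), /*fOrdered*/ false, /*comparator*/ null);

        // entrySet(Filter) only evaluates entries already in the cache;
        // it does not call back into the configured CacheStore.
        Set entries = cache.entrySet(new EqualsFilter("getStatus", "OPEN"));
        System.out.println("Matching cached entries: " + entries.size());
    }
}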

Hi CoherenceUser
Presumably you are asking whether you can make Coherence search the underlying data source (e.g. the database) that your cache sits on top of. The short answer is no, you cannot.
You could do something like a DB (SQL) query to give you back the keys and then do a getAll() on the cache with this set of keys. Any entries not already in the cache will then be loaded by the cache store. Obviously this makes your queries slower, though.
If you are going to query the cache then you need to make sure it is primed with all the data first.
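
A minimal sketch of that workaround, assuming a hypothetical ORDERS table in the backing database and a cache keyed by order id; the JDBC URL, table and column names are placeholders:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class KeyLookupThroughCacheStore {
    public static void main(String[] args) throws Exception {
        // 1. Query the underlying database for the keys that match the filter criteria.
        //    Connection details, table and column names are hypothetical.
        Set<Long> keys = new HashSet<>();
        try (Connection con = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pwd");
             PreparedStatement ps = con.prepareStatement("SELECT order_id FROM orders WHERE status = ?")) {
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    keys.add(rs.getLong(1));
                }
            }
        }

        // 2. getAll() loads any entries missing from the cache through the configured
        //    CacheStore, so the result covers both cached and not-yet-cached data.
        NamedCache cache = CacheFactory.getCache("orders");
        Map results = cache.getAll(keys);
        System.out.println("Loaded " + results.size() + " entries");
    }
}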
JK

Similar Messages

  • Select using Indexed field, still using sequential read

    Hi Experts,
    I am selecting from CATSDB table
    SELECT SINGLE *  FROM catsdb
              WHERE belnr = var.
    BELNR is indexed so I am expecting that this statement will do a direct read.
    But when this statement is run, the SAP message indicates that
    it is performing a sequential read. Does anybody know why this is, and how I can make a direct read happen?
    Thank you very much for your help.
    Best Regards,
    Rose

    Hi,
    Maybe you need to update the database statistics for that table, or you need to recreate the index. Check the table in DB02 or run a check with SE14.
    regards
    Siggi
    PS: The function module RSDU_ANALYZE_TABLES will also be of some help. Do a test run in SE37 and enter the name of the table.
    Message was edited by:
            Siegfried Szameitat

  • How to map lookup main table field in another main table using MDM 7.1?

    We created a new SAP MDM 7.1 repository with multiple main tables.  The first main table is called ProductMaster table which contains Products information.  The ProductCode is the primary key and the only display field for the table during data loading process. The second main table is ProductByRegion table which has a main table lookup field ProductCode and a RegionId field.  These two fields (ProductCode and RegionId) combine as the PK for this main table.  Both main tables have key mapping enabled. 
    I was able to load the ProductMaster table using Import Manager. But I'm having trouble loading data into the ProductByRegion table using MDM Import Manager. Although I have met all 5 requirements below (excerpted from the MDM Import Manager Reference Guide, p. 222), the ProductCode won't show up in the destination value pane. If I map all ProductCode values to the NULL field, ProductCode won't load. If I 'Add' all ProductCode values to the Destination Value pane, the Import Manager adds duplicated rows to the ProductMaster table while only loading 1 record into the ProductByRegion table. I can't get ProductCode to show up in the Matching Destination Field list. When I checked ProductMaster records in MDM Data Manager, I right-clicked on one of the records and chose Edit Key Mappings, and it didn't show anything. However, if I right-clicked on one of those duplicated rows, Edit Key Mapping shows the remote system and key correctly.
    Where did I do wrong?  How can I fix the problem?
    Thank you for help in advance.
    From: SAP MDM Import Manager Reference Guide:
    Mapping to Main Table Lookup Destination Fields
    Import Manager handles main table lookup fields (Lookup [Main])
    differently than other MDM lookup fields. Specifically, Import Manager
    does not display the complete set of display field values of the records
    of the underlying lookup table. Instead, the values it displays for a main
    table lookup field are limited by both the key mappings for the lookup
    table and the values in the source file.
    Also, Import Manager does not automatically display the values of a
    Lookup [Main] destination field in the Destination Values grid when you
    select the field in the Destination Fields grid. Instead, for a main table
    lookup field value to appear in the Destination Values grid, all of the
    following conditions must be met:
    • The lookup table must have key mapping enabled
    • The lookup field must be mapped to a source field
    • The source field must contain key values for the lookup table
    • The destination value must have a key on the current remote system
    • The destination value's key must match a source field value
    NOTE ►► The current remote system is the remote system that was
    selected in Import Manager's Connect to Source dialog (see
    "Connecting to a Remote System" on page 416 for more information).
    Vicky

    Hi Michael,
    Thank you very much for your response. I'm new to SAP MDM, so I need some clarification and help regarding your solution.
    I did use two maps to load ProductMaster and ProductByRegion separately.  Here were my steps:
    1. Create main table ProductMaster with key mapping enabled at the table level, and set ProductCode as unique and writable once (primary key).
    2. Create a map to load ProductMaster records from a staging table located in an Oracle database. But key mapping didn't show anything when I looked at the records using Data Manager.
    3. Create main table ProductByRegion with a lookup field pointing at the ProductMaster table. This field and RegionId combine as a unique key for the ProductByRegion table.
    4. Create a map to load the ProductByRegion table. But ProductCode records only show in the source pane, not the destination pane, and can't be mapped properly.
    My questions:
    1. How can I "Ensure that you add key mapping info for all ProductMaster records" besides enabling Key Mapping on the table level?
    2. How can I define a concatenation of ProductCode and RegionId as a REMOTE KEY?
    Thanks a lot for your help!
    Vicky

  • How to create an indexed field (Duplicates OK) using VBA

    I used the following code to check whether a field exists and, if it does not, create it. How can I expand this so that it not only creates the field but also creates an index (Duplicates OK)?
    Regards, Jim
    Sub AddFldSect4_4()
        Call FieldExists("DisplayOrder", "Section4_4")
    End Sub

    Function FieldExists(ByVal fieldName As String, ByVal tableName As String) As Boolean
        Const gcfHandleErrors As Boolean = True
        If gcfHandleErrors Then On Error GoTo Error_Handler

        Dim db As Database
        Dim tbl As TableDef
        Dim fld As Field
        Dim strName As String

        Set db = CurrentDb
        Set tbl = db.TableDefs(tableName)

        ' Scan the table's fields for a matching name.
        For Each fld In tbl.Fields
            If fld.Name = fieldName Then
                FieldExists = True
                Exit For
            End If
        Next

        ' Create the field if it was not found.
        If FieldExists = False Then
            Call addColumn(tableName, fieldName)
        End If

    Error_Handler_Exit:
        On Error Resume Next
        Exit Function

    Error_Handler:
        MsgBox "The following error has occurred." & vbCrLf & vbCrLf & _
               "Error Number: " & Err.Number & vbCrLf & _
               "Error Source: Field Exist" & vbCrLf & _
               "Error Description: " & Err.Description, _
               vbCritical, "An Error has Occurred!"
        Resume Error_Handler_Exit
    End Function

    Public Sub addColumn(tableName As String, fieldName As String)
        Const gcfHandleErrors As Boolean = True
        If gcfHandleErrors Then On Error GoTo Error_Handler

        Dim strField As String
        Dim curDatabase As Object
        Dim tblTest As Object
        Dim fldNew As Object

        Set curDatabase = CurrentDb
        Set tblTest = curDatabase.TableDefs(tableName)
        strField = fieldName

        ' Create the new Integer field and append it to the table definition.
        Set fldNew = tblTest.CreateField(strField, dbInteger)
        tblTest.Fields.Append fldNew

    Error_Handler_Exit:
        On Error Resume Next
        Exit Sub

    Error_Handler:
        MsgBox "The following error has occurred." & vbCrLf & vbCrLf & _
               "Error Number: " & Err.Number & vbCrLf & _
               "Error Source: Add Col" & vbCrLf & _
               "Error Description: " & Err.Description, _
               vbCritical, "An Error has Occurred!"
        Resume Error_Handler_Exit
    End Sub
    jim neal

    Hi Jim,
    I'll give you a couple of examples that I use to create or delete indexes. They are heavily parameterized, so they can be used for any table with any fields. See the VBA Help for further information on CREATE INDEX and the other statements.
    tmp_db.Execute "CREATE INDEX " & cur_veldnaam & " ON " & cur_item & "_tbl (" & cur_type & ")" & " WITH PRIMARY" tmp_db.Execute "CREATE UNIQUE INDEX " & cur_veldnaam & " ON " & cur_item & "_tbl (" & cur_velden & ")" '& " WITH PRIMARY" tmp_db.Execute "DROP INDEX " & cur_keynaam & " ON " & cur_item & "_tbl"
    tmp_db stands for the back-end database; cur_item & "_tbl" represents the table name.
    Imb.

  • I get this error message when I try to use or set up publish services... An internal error has occurred: ?:0: attempt to index field 'exportSettings' (a nil value)

    Can anyone help with this error message when trying to use publish services?  An internal error has occurred: ?:0: attempt to index field 'exportSettings' (a nil value)


  • How to use Index, Match, or lookup for multi cell data..

    Hey all,  I'm new to using numbers (any spreadsheet really).  I can't figure out how to make this work.
    What I want to do, for example, is look up a weight value in (F) by matching all the criteria in (A), (B), (C), (D) & (E). If possible I would like my search criteria to match everything exactly except the Temp in (E). The search temperature may be warmer (greater) than the temperature listed.
    I've tried using INDEX and MATCH together, but I can't figure out how to match more than one value. If all the other criteria are matched, there should only be one weight to return. I can match one criteria value, but it only returns the first or last match. I need it to match each item and return one weight. What am I doing wrong?
    Please help, and keep it simple for stupid me.
    Thanks

    The difficulty here is that MATCH and VLOOKUP return the values from the single row with the first match best fitting the type of match specified. VLOOKUP searches from the bottom of the column, MATCH from the top.
    I'd suggest a rearrangement of your data to allow a search with fewer steps.
    Since you want an exact match for the first four columns and a 'smallest value equal to or greater than' match for the fifth column I would suggest concatenating the values in columns A through E into a single value in a new column F inserted after column E.
    For the part of the data table shown, this concatenated value would remain the same for 13 rows, while the temperature ranges in five degree steps from -20 to +45, then a single three-degree step to 48.
    One of the concatenated values then changes, and the new concatenated value repeats for the next 13 rows, with the temperature passing through the same range with the same steps.
    If this pattern is consistent (or even reasonably consistent) through the whole table, then a rearrangement of the data as shown below will greatly reduce table size and search time.
    Each distinct set of values A through D is presented on only a single row; the four values are concatenated in column E and are followed by the temperature values, one per column, in the header row, with the associated weights for those conditions and that temperature listed across that row.
    Note: weights in row 2 (runway 13) are taken from your table. Boredom with retyping then set in, so those in row 2 (runway 31) are formula generated to present a series of decreasing weights, but bear no other relation to those in your table.
    With the table presented in this form, the search is reduced to a single OFFSET expression, with the ROW offset determined by matching the concatenated value of columns A-D of the Query table with their exact counterpart in the Data table, and the column offset determined by matching the temperature in the Query with its closest less than or equal to match in row 1 of Data.
    Formula:
    Query::F2: =OFFSET(Data::$A$1,MATCH(A&B&C&D,Data::$E,0)-1,MATCH(E,Data::$1:$1,1)-1)
    Not handled: What do you want returned for temperatures less than -20 or greater than +48 degrees? As written, the formula will return a not found error for -21 and below, and the value associated with 48° for 49 and above.
    Regards,
    Barry

  • Indexed field of Reporting: Opportunity Product Revenues

    Hi all.
    I am using the Indexed field in Revenue.
    The Indexed field of the revenue can be found in "Analytics: Opportunity Product Revenue History".
    However, I cannot find it in "Reporting: Opportunity Product Revenues".
    How can I find the Indexed field of the revenue in the Opportunity Product Revenues report?
    thanks,
    takashi
    Edited by: user10934060 on 2011/11/15 1:56

    Derrick,
    I need a bit more information about what you are trying to achieve. If you are trying to reflect the product revenue on an opportunity as a report, then filter on that opportunity ID in your report URL in the web applet and it will work. I've done it.
    cheers
    Alex

  • How can I use an index in a select query? Facing a problem with the select query

    Hi Friends,
    I am facing a serious problem with one of the select queries. It takes a lot of time to fetch data in the production scenario.
    Here is the query:
      SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelat LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelat~rprctr
        WHERE rldnr  = c_telstra_accounting
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          And rzzlstar in r_lstar                            
          AND rpmax  = c_max_period.
    There are 5 indices present for Table ZTFTELAT.
    Indices of ZTFTELAT:
      Name   Description                                               
      0        Primary key( RCLNT,RLDNR,RRCTY,RVERS,RYEAR,ROBJNR,SOBJNR,RTCUR,RUNIT,DRCRK,RPMAX)                                          
      005    Profit (RCLNT,RPRCTR)
      1        Ledger, company code, account (RLDNR,RBUKRS, RACCT)                                
      2        Ledger, company code, cost center (RLDNR, RBUKRS,RCNTR)                           
      3        Account, cost center (RACCT,RCNTR)                                        
      4        RCLNT/RLDNR/RRCTY/RVERS/RYEAR/RZZAUFNR                        
      Z01    Activity Type, Account (RZZLSTAR,RACCT)                                        
      Z02    RYEAR-RBUKRS- RZZZBER-RLDNR       
    Can anyone help me understand why it is taking so much time and how we can reduce it? Also, if I want to use index number 1, how can I use it?
    Thanks in advance.

    Hi Shiva,
    I am using two more select queries in the same manner.
    Here are the other two select queries:
    ***************1************************
    SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelpt LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelpt~rprctr
        WHERE rldnr  = c_telstra_projects
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          and rzzlstar in r_lstar             
          AND rpmax  = c_max_period.
    and the second one is
    *************************2************************
      SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelnt LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelnt~rprctr
        WHERE rldnr  = c_telstra_networks
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          and rzzlstar in r_lstar                              
          AND rpmax  = c_max_period.
    For both of the above tables the program takes very little time, although the tables used in these queries have a similar amount of data. And I cannot remove the APPENDING CORRESPONDING, because I have to append the data after fetching it from the tables; if I do not use it, all the data fetched earlier will be deleted.
    Thanks in advance.
    Sourabh

  • RFC Lookup using RFC_READ_TABLE

    Hi,
    I'm doing an RFC lookup using RFC_READ_TABLE to read data from table "LIPS". I am getting the following error with the XML signature:
    DVEBMGS01/j2ee/cluster/server0/./temp/classpath_resolver/Mapb16b89b090ec11debb52001e0b450820/source/com/sap/xi/tf/_MM_GetCompanyDetails_.java:72: ';' expected String rfcXML = "<?xml version="1.0" encoding="UTF-8"?><ns0:RFC_READ_TABLE xmlns:ns0="urn:sap-com:document:sap:rfc:functions"><QUERY_TABLE>"+ DBTABLE + "</QUERY_TABLE><DATA></DATA><FIELDS><item><FIELDNAME>"+ FIELDS "</FIELDNAME></item></FIELDS><OPTIONS><item><TEXT>" WHERE_CLAUSE +"</TEXT></item></OPTIONS></ns0:RFC_READ_TABLE>" ; ^ 1 error.
    Please let me know how this works
    Regards,
    Ravi

    Hi Ravi,
    This seems to me like a Java syntax error in the generated mapping code: the '+' operators around FIELDS and WHERE_CLAUSE are missing, and the inner double quotes need to be escaped. Fix that and it should work fine.
    Regards,
    Neetesh
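
    For illustration, a minimal sketch of the corrected concatenation, assuming DBTABLE, FIELDS and WHERE_CLAUSE are String values available to the mapping code; the sample values in main() are purely hypothetical:

    public class RfcReadTableXml {
        // Builds the RFC_READ_TABLE request payload; every variable is concatenated
        // with '+' and the inner double quotes are escaped with backslashes.
        static String buildRequest(String dbTable, String fields, String whereClause) {
            return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                 + "<ns0:RFC_READ_TABLE xmlns:ns0=\"urn:sap-com:document:sap:rfc:functions\">"
                 + "<QUERY_TABLE>" + dbTable + "</QUERY_TABLE>"
                 + "<DATA></DATA>"
                 + "<FIELDS><item><FIELDNAME>" + fields + "</FIELDNAME></item></FIELDS>"
                 + "<OPTIONS><item><TEXT>" + whereClause + "</TEXT></item></OPTIONS>"
                 + "</ns0:RFC_READ_TABLE>";
        }

        public static void main(String[] args) {
            // Hypothetical values for illustration only.
            System.out.println(buildRequest("LIPS", "VBELN", "VGBEL = '0080000001'"));
        }
    }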

  • How to take a row from a database using an index in Crystal Reports?

    I use a database field in one place, but in another place I want to use just one row of that database field. Can I reference a database row by its index number, something like [3]?

    Hi Guys,
    This is a first.... Can I ask the original poster to please sign up with a new account and repost? At this point we don't know who you are.
    Our developers have been made aware of this problem and should be looking into it shortly.
    I am also going to lock this post so the real Gary doesn't get a bunch of e-mails.
    Thank you
    Don

  • Update database from internal table statement not using index

    Hi Guys,
    We are updating a database table from a file. The file has a couple of fields whose data differs from what the database has (non-primary fields :). We upload the file data into an internal table and then update the database table from the internal table. At a time, the internal table is supposed to have 10,000 records. I did an SQL trace and found that the update statement is not making use of the database index.
    Should not the update statement here be using the table index (for primary key)?
    Regards,
    Munish

    ... as so often, there are recommendations in this forum which make me wonder how people overestimate their knowledge!!!
    Updates and deletes do of course use indexes, as can be seen in the SQL trace (use explain).
    Inserts don't use indexes to locate a row, because in many databases inserts are simply done wherever there is space. But even with INSERT, the primary key is the constraint for the uniqueness condition; duplicate keys are not allowed.
    Coming to the original question, what are you actually coding for the update?
    What is the table, which fields are in the internal table and what are the indexes?
    Siegfried

  • CBO calculates unacceptable cost while using index

    Hi,
    I wanted to compare the execution plan for both cases, with and without the index.
    We are using Oracle 10g (10.2.0.3.0) with RAC in production, on O/S Sun Solaris 10.
    A Java-based application runs against this database. One of the SQL queries is taking a long time to fetch records. I analyzed the SQL plan and noticed a full table scan (FTS). I created indexes on the column(s) referenced in the WHERE clauses and noticed a strange behavior. The execution plan shows that the CBO is choosing the right path, but the result is not acceptable, as the application times out while returning the rows.
    First execution plan, with/without the index (the index is not used):
    PLAN_TABLE_OUTPUT
    Plan hash value: 419342726
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 196 | 3332 | 67845 (3)| 00:13:35 |
    |* 1 | TABLE ACCESS FULL | CWDBENUMDESCRIPTIONS | 1 | 35 | 3 (0)| 00:00:01 |
    | 2 | SORT GROUP BY | | 196 | 3332 | 67845 (3)| 00:13:35 |
    |* 3 | TABLE ACCESS FULL| CWORDERINSTANCE | 51466 | 854K| 67837 (3)| 00:13:35 |
    Predicate Information (identified by operation id):
    1 - filter("ERR"."CODE"=:B1)
    3 - filter("OI"."ERROR_CODE" IS NOT NULL AND "OI"."DIVISION"='OR9' AND
    "OI"."ACTIVE"=TO_NUMBER(:1) AND "OI"."ORDER_STATE"<>'O_NR_NS' AND
    "OI"."ORDER_STATE"<>'C_C_QR' AND "OI"."ORDER_STATE"<>'O_NR_NS_D')
    The SQL query was modified to force the use of the index with the hint /*+ index(oi oi_div) */; the execution plan is as below:
    PLAN_TABLE_OUTPUT
    Plan hash value: 1157277132
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 196 | 3332 | 394K (1)| 01:18:52 |
    |* 1 | TABLE ACCESS FULL | CWDBENUMDESCRIPTIONS | 1 | 35 | 3 (0)| 00:00:01 |
    | 2 | SORT GROUP BY | | 196 | 3332 | 394K (1)| 01:18:52 |
    |* 3 | TABLE ACCESS BY INDEX ROWID| CWORDERINSTANCE | 51466 | 854K| 394K (1)| 01:18:52 |
    |* 4 | INDEX RANGE SCAN | OI_DIV | 3025K| | 14226 (1)| 00:02:51 |
    Predicate Information (identified by operation id):
    1 - filter("ERR"."CODE"=:B1)
    3 - filter("OI"."ERROR_CODE" IS NOT NULL AND "OI"."ACTIVE"=TO_NUMBER(:1) AND
    "OI"."ORDER_STATE"<>'O_NR_NS' AND "OI"."ORDER_STATE"<>'C_C_QR' AND
    "OI"."ORDER_STATE"<>'O_NR_NS_D')
    My questions are:
    1) Why is the FTS less costly compared to the index scan when there are 15,000,000 rows in the table?
    2) Why does the cost increase drastically when the index is forced (the statistics are current, analyzed on 6 Feb 2009)?
    3) What should I change to get the performance benefit?
    Thanks,
    Pradeep

    user587112 wrote:
    select null, oi.division, oi.METADATATYPE, oi.ERROR_CODE,
           ( select err.DESCRIPTION
             FROM   CWDBENUMDESCRIPTIONS err
             WHERE  oi.ERROR_CODE = err.CODE ),
           count(*)
    from   CWORDERINSTANCE oi
    where  oi.ERROR_CODE is not null
    and    oi.division in ('BK9')
    and    oi.order_state not in ('O_NR_NS_D', 'C_C_QR', 'O_NR_NS')
    and    oi.metadatatype = :1
    and    oi.duedate >= :2
    and    oi.active = :3
    group by oi.division, oi.metadatatype, oi.error_code
    order by oi.division, oi.metadatatype, oi.error_code
    In this query, if we use it exactly as displayed, it runs like a rocket; but if we change the division to 'OR9' instead of 'BK9', it does not use the index and the application times out.
    Number of records   division
    1964690                             ---------------- why this field is null ?
    3090666              OR9
    3468                 BA9
    1242                 EL9
    2702                 IN9
    258                  EU9
    196198               DT9
    1268                 PA9
    8                    BK9
    2332                 BH9
    1405009              TP9
    According to the stats in your original execution plan, it looks like you have a histogram on this column, and the index you want to use is on just the division column.
    Oracle's estimate for 'OR9' is 3M rowids from the index, resulting in 50,000+ rows from the table - that's a lot of work - so it's not surprising that the optimizer chose a tablescan for 'OR9' but chose to use the index for 'BK9', which has only 8 rows.
    How often do you want to run this query ? And how accurate does the answer have to be ?
    If you want this query to run faster even when it's processing a huge number of rows, one option would be to create a materialized view over the data that could supply the result set much more efficiently (possibly getting your front-end code to call a materialized view refresh before running the query).
    The only other alternative is probably to create an index that covers all the columns in the query so that you don't have to visit the table - and if you order them correctly you won't have to do a sort group by. I think this index should do the trick: (division, metadatatype, error_code, active, duedate, order_state). You could also compress this index on at least the first column, but possibly on far more columns, to minimise its size.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge."
    Stephen Hawking
    To post code, statspack/AWR reports, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

  • ORA-03113: end-of-file on communication channel ; On All Indexed fields

    Hi Friends,
    Please help
    I am getting the following error when querying a table in 10.2.0.4.0.
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    When I exclude one of the indexed fields in the WHERE condition it works fine. I rebuilt/dropped the index, but Oracle still throws me out.
    Please see some information from the alert log and trace files.
    Alert log
    Errors in file d:\oracle\product\10.2.0\admin\cvs2\udump\cvs2_ora_3064.trc:
    ORA-07445: exception encountered: core dump ACCESS_VIOLATION __VInfreq__qksqbFind1RowTabs+72 PC:0x313865C ADDR:0x131 UNABLE_TO_READ] [
    Trace file
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows XP Version V5.1 Service Pack 3
    CPU : 2 - type 586, 2 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:829M/2038M, Ph+PgF:2573M/3933M, VA:964M/2047M
    Instance name: cvs2
    Redo thread mounted by this instance: 1
    Oracle process number: 28
    Windows thread id: 3064, image: ORACLE.EXE (SHAD)
    *** ACTION NAME:() 2009-06-09 16:50:51.357
    *** MODULE NAME:(SQL*Plus) 2009-06-09 16:50:51.357
    *** SERVICE NAME:(CVS2) 2009-06-09 16:50:51.357
    *** SESSION ID:(148.7775) 2009-06-09 16:50:51.357
    *** 2009-06-09 16:50:51.357
    ksedmp: internal or fatal error
    ORA-07445: exception encountered: core dump ACCESS_VIOLATION __VInfreq__qksqbFind1RowTabs+72 PC:0x313865C ADDR:0x131 UNABLE_TO_READ] [
    Current SQL statement for this session:
    Call Stack Trace calling call entry argument values in hex
    location type point (? means dubious value) -------- ----------------------------
    __VInfreq__qksqbFin 00000000
    d1RowTabs+72
    kkogtp+366 CALLrel qksqbFind1RowTabs+
    0
    vopastp+276 CALLrel kkogtp+0 9C17008 9C17008 9C16808 0
    voppfdDescendents+ CALLrel vopastp+0 9C16808 1 0
    374
    voppfd+88 CALLrel voppfdDescendents+ 9C16808 1
    0
    opitca+854 CALLrel voppfd+0 9C16808 1
    __PGOSF346__kksFull CALLrel _opitca+0 9ECB128 3D2396F0
    TypeCheck+15
    _rpiswu2+426 CALLreg 00000000 BF9C004
    kksLoadChild+8074 CALLrel rpiswu2+0 457453CC 9D 3DE12CF8 5
    3DE12A00 9D 3DE12D24 0 5CDB50
    0 BF9C004 0
    kxsGetRuntimeLock+ CALLrel kksLoadChild+0 C669558 3D738E24 BF9CA5C
    and it goes and goes.....
    Thanks In Advance
    SSN

    I'm not 100% sure, but I'd suggest you open an SR in Metalink and send them these trace files for analysis.
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

  • Purpose of using string field

    How do I use a string field, and what is its purpose?

    STRING: Character string with variable length. This data type can be used only in types (data elements, structures, table types) and domains. In the Dictionary a length can be specified for this type (at least 256 characters). It can be used in database tables only with restrictions; for a description of them, refer to the documentation of the ABAP statement 'STRING'. In ABAP, this type is implemented as a reference to a storage area of variable size. The system proposes 132 characters as the default for the output length. You cannot attach search helps to components of this type.
    SSTRING: Short character string with variable length. In the Dictionary the number of characters can be specified for this type (from 1 to 1333). This data type can be used only in types (data elements, structures, table types) and domains. It can be used in database tables; to do so, refer to the documentation of the ABAP statement 'STRING'. In ABAP, this type is implemented as a reference to a storage area of variable size. String fields of this type can be used in indexes and in the WHERE condition of a SELECT statement. You cannot use them in table keys.

  • Create a Lookup for a Project Server field in a SharePoint List item

    Hi,
    I would like to display values from custom fields of MS Project Server 2010 in an MS SharePoint 2010 list. The values ought to be displayed as a lookup in a field of the list item form. I would further like to query and populate certain fields in the New Item form from Project Server based on the value in the lookup.
    I am looking for guidance to implement this. Any help in this regard will be appreciated.

    Abhijit,
    You could do this by creating an External Column, connected to an External Content Type from Project Server.
    Andrew Lavinsky has several good blog posts on using ECTs in Project Server. Here is an example: http://azlav.wordpress.com/2011/04/05/creating-a-centralized-document-repository-with-external-content-types/
    Cheers,
    Prasanna Adavi, Project MVP

Maybe you are looking for

  • Program for Creating Purchase Order with reference to purchase requisition

    Hi, I need to create a purchase order with reference to a purchase requisition. In my case I need to automate the process which happens in MD04. Can you please suggest how? Thanks, Murali

  • Combined Files in PI using BPM without correlation

    Hi All, I have a requirement: I have two files coming from the source system, one file with header details and the other file with item details, and I have to combine these two files into a single file. I have used BPM in PI to combine the two files using

  • Error while posting Travel advance

    Hi Experts, while posting a travel expense to FI in PRRW I am getting the following error: Post from posting run 0000000049 E RW609 Error in document: TRAVL 0000000052 AEDCLNT220 E F5354 Account 11301400 in company code 1000 cannot be directly posted to Po

  • Photoshop CS6 crashing

    Can someone give me some advice about my Photoshop CS6, which keeps crashing? As soon as I try to import a photo it just closes. I have recently installed the purchased programme onto my laptop and have used it a little up to now. It doesn't even app

  • Select Current user

    Good evening all. The following query returns results when executed in B1: SELECT U_NAME FROM OUSR WHERE INTERNAL_K = $[USER]. Basically it allows me to select based on the CURRENT user. Is there an equivalent command/feature in Net Point? Thanks, John