Non-indexed queries

Hello,
Is there a way to retrieve all the non-indexed queries used in the database?
I use TOAD to monitor the database, and it reports a certain number of non-indexed queries that I want to track down.
TIA.
Alder

In TOAD the 'Database -> Spool SQL -> Spool SQL to Screen' option is often your friend.
From the spool output produced while refreshing the Database Monitor in TOAD, you can see that TOAD uses the following query to derive the ratio of non-indexed to indexed SQL:
SELECT SUM (DECODE (name, 'table scans (long tables)', VALUE, 0)) /
       (SUM (DECODE (name, 'table scans (long tables)', VALUE, 0)) +
        SUM (DECODE (name, 'table scans (short tables)', VALUE, 0))) *
       100
          non_indexed_sql,
       100 -
       SUM (DECODE (name, 'table scans (long tables)', VALUE, 0)) /
       (SUM (DECODE (name, 'table scans (long tables)', VALUE, 0)) +
        SUM (DECODE (name, 'table scans (short tables)', VALUE, 0))) *
       100
          indexed_sql
FROM   v$sysstat
WHERE  1 = 1
AND    (name IN ('table scans (long tables)', 'table scans (short tables)'));

As you can see, this is simply 'table scans (long tables)' expressed as a percentage of all table scans ('table scans (long tables)' plus 'table scans (short tables)'), which are instance-wide counters held in v$sysstat. These counters cannot be traced back to the original statements themselves.
You could try looking at the operation/options columns in the v$sql_plan or dba_hist_sql_plan views - BUT - as has already been pointed out, an index is not always the preferable access path for answering a query; it depends heavily on the data distribution and the question being asked. You would be better off looking at which queries are resource intensive.
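For example, a hedged sketch along those lines (the filter and ordering are illustrative assumptions, not TOAD's query) that lists cached statements whose plans contain a full table scan:
SELECT p.sql_id, p.object_owner, p.object_name, s.executions, s.buffer_gets
FROM   v$sql_plan p
       JOIN v$sql s ON s.sql_id = p.sql_id
                   AND s.child_number = p.child_number
WHERE  p.operation = 'TABLE ACCESS'
AND    p.options = 'FULL'
AND    p.object_owner NOT IN ('SYS', 'SYSTEM')   -- skip dictionary objects
ORDER  BY s.buffer_gets DESC;                    -- most expensive first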

Similar Messages

  • Non-indexed queries (9i)

    Hello everybody,
    How do I retrieve all the non-indexed queries used in the database? I want to find out which ones I need to optimize.
    thanks John

    Hello
    That's not as easy as it sounds. Creating indexes for every query that does not use one could very easily cause more problems than it fixes. Full table scans are not evil; they are often the fastest way to access a relatively large portion of the data in a table.
    If you have issues with performance in your database you need to know exactly what they are, and you can get a very good idea by using statspack.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/statspac.htm#21793
    Once you have the right metrics, you can make an informed decision as to what the correct course of action is to solve any performance problems you are having. Without these metrics, you are groping in the dark.
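    A minimal sketch of getting started with statspack (a hedged example: it assumes the PERFSTAT schema has already been installed with spcreate.sql):
    EXEC statspack.snap;            -- take a snapshot before the workload
    -- ... run the workload you want to measure ...
    EXEC statspack.snap;            -- take a second snapshot
    @?/rdbms/admin/spreport.sql     -- report between the two snapshot IDs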
    HTH
    David

  • Histogram on non-indexed columns

    Hi All,
    Is there any point in having a histogram on a non-indexed column? If we use a non-indexed column for filtering data, there will be a full table scan anyway.
    Does having a histogram on such a column make any difference?
    I am on v11.2. Are there any differences/advances in this respect in 11g?
    Thanks in advance.

    Histograms are used by the CBO to correctly estimate selectivity (and hence cardinality) whether or not there are indexes. So even when indexes exist, after estimating selectivity from the column values the CBO may choose a FULL TABLE SCAN rather than an INDEX SCAN. On a non-indexed column an accurate estimate still matters, because the resulting cardinality drives decisions such as join order and join method further up the plan. In short, use histograms when a column's data is not uniformly distributed (i.e. skewed).
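    A minimal sketch of requesting one (hedged: the table and column names are illustrative, not from this thread):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'ORDERS',
        method_opt => 'FOR COLUMNS STATUS SIZE 254');  -- histogram on the skewed column
    END;
    /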

  • How long should a search that includes Non-Indexed Files take?

    Hello,
    I left a search running in Bridge last night and told it to include the non-indexed files ... it has been running for over 12 hours???
    Is this normal on a large server?
    If I let it keep going, will it make all the future searches faster?
    thanks!
    babs

    I just did a search of at least 14,000 files and it took less than 1 minute.
    Hence my statement that Bridge (despite being a fabulous app for ingestion and sorting etc, all the work needed before the files are ready for distribution or publishing) is lousy for use as a DAM (Digital Asset Management) tool.
    It is also not designed for use over a network, as you already clearly stated.
    To spoil your party a bit more, I just timed a search in my DAM application, Canto Cumulus Single User Edition. With almost 60,000 keepers from the last 17 years in my digital archive, a search took almost 2 seconds...
    Yes, it is not a fair comparison because Bridge comes (somewhat) for free with PS or other CS applications and Cumulus Single User is around $300, I believe.
    But there are some cheaper DAM applications. All those DAM apps are pretty lousy at the work Bridge is very good at. It is just a matter of getting the right tools for the job.

  • Indexed attributes but non-indexed search

    Hi !
    There's something that I don't understand in my directory.
    I have a request which is qualified as "non indexed", but when I look at the configuration of the indexes, it seems to be OK. Where am I wrong?
    Search filter (using logconv.pl):
    Unindexed Search #1
    - Date/Time: 05/Jul/2007:17:26:06
    - Connection Number: 619481
    - Operation Number: 2
    - Etime: 21
    - Nentries: 20017
    - IP Address: 127.0.0.1
    - Bind DN: uid=xxxxxxxxx
    - Search Filter: (&(sn=*)(givenname=*))
    indexed attributes :
    dn: cn=sn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    objectClass: top
    objectClass: nsIndex
    cn: sn
    nsSystemIndex: false
    nsIndexType: pres
    nsIndexType: eq
    nsIndexType: sub
    dn: cn=givenName,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
    objectClass: top
    objectClass: nsIndex
    cn: givenName
    nsSystemIndex: false
    nsIndexType: pres
    nsIndexType: eq
    nsIndexType: sub
    Does anyone have an idea, please?
    Thanks for your help.

    Although you have defined a presence index for both attributes, the actual size of the ID lists for those indexes may have exceeded the maximum value for index lists as governed by the system-wide configuration attribute "nsslapd-allidsthreshold", which is set by default to 4000. When the list of IDs for an index grows past the nsslapd-allidsthreshold value, the allids token is set on that index, which causes the index to be disregarded, and those searches become unindexed.
    You may use the dbscan utility ("./dbscan -n -i -f /<path-to>/userRoot_sn.db3") that comes with the Directory Server Resource Kit in order to confirm that those indexes are marked allids.
    Once confirmed, you may consider increasing your nsslapd-allidsthreshold value - however, changing the nsslapd-allidsthreshold attribute is costly, in that it requires you to rebuild (reindex) your database, and it will require thorough testing and examination of your data set, as this is a system-wide attribute (in 5.x). It is also important not to set the nsslapd-allidsthreshold attribute too high, as this puts the server in the position of constantly managing very large index ID lists.
    There is a very interesting article called "Understanding Indexing and the ALLIDs Threshold" @ http://blogs.sun.com/DirectoryManager/?page=3 which discusses that topic in depth.

  • Browsers/Users to go to http:/none/index.cfm

    I've been getting a weird error on my CFMX 7.02 server.
    Sporadically, when clicking around on websites, I'll get a "page cannot be displayed" error. When I look at the application logs on the server, I can see that somehow the user is trying to access \webroot\none\index.cfm instead of just \webroot\index.cfm.
    I can't reproduce the error consistently, only if I click around repeatedly. I was wondering if anyone else has seen this happen before and could possibly share some insight. Thanks, Jake

    I have never run into or heard of a problem like this. My first thought when you described the behavior was a variable used to build a URL link, or something like that, occasionally returning an unexpected and incorrect value. Does that ring any bells?
    E.g.
    <cfoutput...>
    <a href="#aVar#/index.cfm">
    </cfoutput>
    Where aVar does not equal the expected string under some condition.

  • Using GUIDs : Performance of Indexes/Queries?

    Hi,
    We are building a new database. The database tables are coupled together with key values which are GUIDs. The database may grow to in excess of 2 billion records in a few months.
    I am not sure how Oracle indexes would perform given the very random nature of GUID values. If I create an index on a GUID column, would Oracle really use the index when I have queries with GUIDs in the WHERE clause? Or would it do a full table scan over those millions of rows? Please let me know your views.
    If anyone knows of best practices surrounding GUIDs, or a good article on using GUIDs, the possible issues associated with them, and how to plan such a huge database whose keys are GUIDs, that would also help me out.
    Thanks a lot!
    Biren.

    Hi Biren,
    By GUID, are you referring to the Oracle Portal product?

  • INDEX QUERIES PROBLEM ::: PLZ HELP ME

    Hi all,
    I'm using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 on a Linux server.
    I have a table named customers with 16 million records; this table has 5 columns
    customers :
    - id number primary key
    - Giid varchar2 (indexed)
    - Name varchar2
    - Adress Varchar2
    I did two tests :
    - select id from customers where id = 77788 takes 27 seconds
    - select * from customers where id = 77788 takes 125 seconds
    The problem is that I have to use the second query everywhere at my application level, and it takes a long time. It's strange, because the id is indexed as the primary key.
    The second strange thing is that when I do select * from customers where Giid = 'THISTEST' it's very, very fast, much faster than select * from customers where id = 77788, even though both columns are indexed similarly!
    Any idea or explanation about what is happening? And how can I optimize my query: select * from customers where id = 77788?
    thanks a lot,
    regards
    Message was edited by:
    HAGGAR

    To execute a query such as yours Oracle will basically have to do 2 things:
    1. Read Index Structure to find requested value
    2. Extract needed data (select ...)
    Logically both of these steps require reading the database. But Oracle has a cache of recently read data blocks in memory (the buffer cache in the SGA). When a needed block is still in the buffer cache Oracle can avoid a physical disk read (10 milliseconds order of magnitude) and use the copy still in memory (10 microseconds order of magnitude), which is a lot faster.
    When the needed data (step 2) is all in the index being used by step 1, then Oracle can also avoid reading the table data at all, and just use the data it already has in the index block. So the whole query then just becomes a read on a root index block, a couple of branches and then the leaf index block.
    This is why 'select id ... where id = ...' is faster than 'select *'. The first query only reads the index and never the table, and the index block is probably in memory. An index on one column is able to pack many more rows into one block than the corresponding table of many columns, increasing any hit ratio on memory buffers.
    The second query wants all the columns from the table, so Oracle has to read the table data too, which is the cause of the extra elapsed time for this query.
    What will determine the performance of the second query is how big the data row is for that table, and whether that data block is already cached in Oracle's memory buffers. Bigger data rows take up more space, and so may require more memory space and so reduce buffer hit ratio.
    You do not mention how big your Oracle buffer cache is, nor your total SGA.
    Also, if you are querying using literal values and not bind variables, then Oracle is parsing these queries each time, which also increases the execution time.
    Also, you say 'table' but is this really a view? In which case, there are table joins going on in the second query, which would also explain the increased elapsed time.
    27 seconds to retrieve a single row does seem a long time, but you give us very little extra information. How big is each data row? What is your database block size (this affects row packing density)? Have you updated the statistics recently? How busy is the system before, during and after this query? Are all the CPUs busy? Is the disk I/O system busy? How many disk I/Os are actually being done? The extra elapsed time between the 2 queries implies that a lot of disk I/O is taking place, when really there should only be one extra disk I/O to the table data, for an extra 10 milliseconds.
    Do you do lots of updates on the data in these tables over time that would cause row chaining? This would cause extra disk I/Os to retrieve the current copy of a data row.
    Without any other information, I would guess you may have problems with the size of your SGA and the buffer cache, your database block size may be too small (not enough rows per page), the other workloads on this system using up too much CPU and I/O, a poor performing (cheap) disk I/O sub-system. But the cause could equally be elsewhere.
    As stated by others, rebuilding indexes will not help, and is irrelevant. If your data is partitionable you could consider partitioning it, so that similar data is grouped together on disk, and hope that it would increase your buffer cache hit ratio. However, to do this, you need to know the pattern of queries that will be executed over time, and the pattern of data being accessed. If you don't know that, or it is totally random, then you won't benefit from partitioning.
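    A hedged way to confirm the index-only access for yourself (object names from the thread; the expected plan lines are assumptions, your output may differ):
    EXPLAIN PLAN FOR SELECT id FROM customers WHERE id = 77788;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- expect: INDEX UNIQUE SCAN only (no table visit)
    EXPLAIN PLAN FOR SELECT * FROM customers WHERE id = 77788;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- expect: TABLE ACCESS BY INDEX ROWID on top of the INDEX UNIQUE SCAN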
    John

  • Best way to insert data into a non indexed partitioned table?

    Hi,
    I've a rather basic question that I haven't been able to find an answer to so hope that someone may be able to help.
    We need to insert sales data into a partitioned table that is range partitioned on sales date. The incoming sales data can span a number of days, e.g. 31-JUL-2007 through to 04-AUG-2007 inclusive. I've come up with two approaches and would like to know which you would recommend and why?
    Approach 1: Load data into each individual partition using a statement like:
    INSERT /*+ APPEND */ INTO sales PARTITION (sales_20070731) SELECT * FROM sales_new WHERE sales_date = TO_DATE('31-JUL-2007','DD-MON-YYYY')
    Approach 2: Load data into the entire table using a statement like:
    INSERT /*+ APPEND */ INTO sales SELECT * FROM sales_new
    Thanks

    You need to compare both approaches by creating a simple test case.
    But there is no advantage to approach 1 unless you have an index on the sales_date column.
    With an index, the approaches are comparable and you can choose whichever best fits your requirements.
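    A minimal test-case sketch (hedged: statements adapted from the thread; you would compare the timings yourself):
    SET TIMING ON
    -- Approach 1: one direct-path insert per partition
    INSERT /*+ APPEND */ INTO sales PARTITION (sales_20070731)
    SELECT * FROM sales_new
    WHERE  sales_date = TO_DATE('31-JUL-2007','DD-MON-YYYY');
    COMMIT;   -- an APPEND insert must be committed before the table is touched again
    -- Approach 2: one direct-path insert for the whole table
    INSERT /*+ APPEND */ INTO sales SELECT * FROM sales_new;
    COMMIT;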
    Best Regards,
    Alex

  • Non-indexing of word files in spotlight and error message

    Spotlight on my MacBook Pro seems not to index any of my MS Word files (including those with .doc in the name), so I tried going into the Terminal and typing "mdimport -f directoryname". The result was an endless flood of error messages like those below. I'm clueless, but it seems like it's saying the appropriate plugin is not loading. Anybody understand what's really going on?
    Thanks,
    Tom
    2006-04-01 20:32:26.541 mdimport[1488] CFLog (21): dyld returns 2 when trying to load /Library/Spotlight/Microsoft Office.mdimporter/Contents/MacOS/Microsoft Office
    2006-04-01 20:32:26.543 mdimport[1488] CFLog (22): Cannot find function pointer OfficeImporterPluginFactory for factory BFA4E323-1889-11D9-82C8-000A959816BE in CFBundle/CFPlugIn 0x328170 </Library/Spotlight/Microsoft Office.mdimporter> (bundle, not loaded)
    2006-04-01 20:32:26.575 mdimport[1488] CFLog (21): dyld returns 2 when trying to load /Library/Spotlight/Microsoft Office.mdimporter/Contents/MacOS/Microsoft Office
    2006-04-01 20:32:26.577 mdimport[1488] CFLog (22): Cannot find function pointer OfficeImporterPluginFactory for factory BFA4E323-1889-11D9-82C8-000A959816BE in CFBundle/CFPlugIn 0x328170 </Library/Spotlight/Microsoft Office.mdimporter> (bundle, not loaded)
    macbookpro 1.83   Mac OS X (10.4.5)  
    powerbook g4 667   Mac OS X (10.3.9)  

    I noticed that the update info showed that it was supposed to improve things. My experience was that when it came to content indexing, Spotlight seemed to take its own sweet time about getting it done: it seemed like it continued to add content indexes long after it finished its regular index run. So that might be a factor. Also, it is my understanding that it does not index everything in very long files, only the first so many pages (I don't remember just how many, and am not sure that is correct info; Doc Smoke would know), and if that is true, length could be a factor.
    I just re-indexed a folder full of MS Word docs, and Spotlight does seem to be finding the content now; however, the files are all short, one to two pages. I did a search in another folder, which contains a few MS Word docs and which I had not re-indexed, and Spotlight found one file that had a unique word but not another (in a longer file). So it looks like the situation may be improved, but you will have to re-index to see any of the improvement.
    Francine Schwieder

  • Spatial index queries

    Hi
    I was wondering
    Does anyone know of the correct code to use to do the following:
    1) Drop spatial indexes at the same time as dropping a table
    2) Renaming Spatial indexes
    Any help would be greatly appreciated
    Thanks in advance

    Steve,
    1) Drop spatial indexes at the same time as dropping a table
    Just drop the table and the index will go automatically.
    select asii.sdo_index_owner, index_name, table_name, column_name, asii.sdo_index_table
      from all_sdo_index_info asii
           inner join
           all_sdo_index_metadata asim
           on ( asim.sdo_index_owner = asii.sdo_index_owner
                and
                asim.sdo_index_name = asii.index_name )
     where asii.table_owner = 'CODESYS'
       and asii.table_name  = 'GEOGRAPHIC_UNIT_POLYGON_SDO'
       and asii.column_name = 'POLYGON';
    -- Results
    SDO_INDEX_OWNER INDEX_NAME                   TABLE_NAME                  COLUMN_NAME
    CODESYS         GGRPHC_NT_PLYGN_SD_PLYGN_SPX GEOGRAPHIC_UNIT_POLYGON_SDO POLYGON
    drop table GEOGRAPHIC_UNIT_POLYGON_SDO;
    -- Results
    table GEOGRAPHIC_UNIT_POLYGON_SDO dropped.
    purge recyclebin;
    -- Results
    purge recyclebin
    select asii.sdo_index_owner, index_name, table_name, column_name, asii.sdo_index_table
      from all_sdo_index_info asii
           inner join
           all_sdo_index_metadata asim
           on ( asim.sdo_index_owner = asii.sdo_index_owner
                and
                asim.sdo_index_name = asii.index_name )
     where asii.table_owner = 'CODESYS'
       and asii.table_name  = 'GEOGRAPHIC_UNIT_POLYGON_SDO'
       and asii.column_name = 'POLYGON';
    -- Results
    no rows selected
    2) Renaming Spatial indexes
    -- ALTER INDEX [schema.]index RENAME TO <new_index_name>;
    select asii.sdo_index_owner, index_name, table_name, column_name, asii.sdo_index_table
      from all_sdo_index_info asii
           inner join
           all_sdo_index_metadata asim
           on ( asim.sdo_index_owner = asii.sdo_index_owner
                and
                asim.sdo_index_name = asii.index_name )
     where asii.table_owner = 'CODESYS'
       and asii.table_name  = 'PROJPOINT2D'
       and asii.column_name = 'GEOM';
    -- Results
    SDO_INDEX_OWNER INDEX_NAME            TABLE_NAME  COLUMN_NAME     SDO_INDEX_TABLE
    CODESYS         PROJPOINT2D_GEOM_SPIX PROJPOINT2D GEOM            MDRT_20C2D$
    ALTER INDEX codesys.PROJPOINT2D_GEOM_SPIX RENAME TO PROJPOINT2D_GEOM_SPX;
    -- Results
    index CODESYS.PROJPOINT2D_GEOM_SPIX altered.
    select asii.sdo_index_owner, index_name, table_name, column_name, asii.sdo_index_table
      from all_sdo_index_info asii
           inner join
           all_sdo_index_metadata asim
           on ( asim.sdo_index_owner = asii.sdo_index_owner
                and
                asim.sdo_index_name = asii.index_name )
     where asii.table_owner = 'CODESYS'
       and asii.table_name  = 'PROJPOINT2D'
       and asii.column_name = 'GEOM';
    -- Results
    SDO_INDEX_OWNER INDEX_NAME            TABLE_NAME  COLUMN_NAME     SDO_INDEX_TABLE
    CODESYS         PROJPOINT2D_GEOM_SPX  PROJPOINT2D GEOM            MDRT_20C2D$
    Notice how the rename is just a rename: the SDO_INDEX_TABLE name does not change!
    If this is correct or helpful, please award me with points.
    regards
    Simon

  • How is a Context text search index synchronized when updating a non-indexed column?

    Hi,
    Please can anyone guide me.
    I have updated a column in a table which is not part of the context index.
    After I synchronize the index, I am still not able to see the updated column.
    If I update the indexed column in the table, it does get synchronized and I can see the updated values in a CONTAINS
    select statement. Please give me a solution for this.

    Hi, I have a table called Customer and a table called Phone; the relationship is customeriid in both tables.
    In the Customer table I indexed companyname.
    CREATE OR REPLACE PROCEDURE companycontactColumns_Search
      (p_rowid IN ROWID,
       p_clob  IN OUT CLOB)
    AS
      M_clob CLOB;
    BEGIN
      FOR c1 IN (SELECT Customeriid, Companyname || ' ' AS data
                   FROM Customer
                  WHERE ROWID = p_rowid) LOOP
        M_clob := M_clob || c1.data;
        FOR c2 IN (SELECT ' ' || PHONENUMBER || ' ' || Phonenumber AS data
                     FROM Phone
                    WHERE parentiid = c1.Customeriid) LOOP
          M_clob := M_clob || c2.data;
        END LOOP;
      END LOOP;
      p_clob := M_clob;
    END companycontactColumns_Search;
    Begin
    Ctx_DDL.create_preference('Companycontactsearch2','USER_DATASTORE');
    ctx_DDL.Set_Attribute('companycontactsearch2','Procedure','companycontactColumns_Search');
    end;
    create index customer_text_idx on Customer(companyname)
    indextype is ctxsys.context Parameters('datastore companycontactsearch2 Sync (on commit)');
    Hi, if I update phonenumber in the Phone table, the index is not getting refreshed. Please guide me.
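    A commonly used pattern for this (a hedged sketch, not from this thread; the trigger name is illustrative): a USER_DATASTORE document is only marked for sync when the row holding the indexed column changes, so a trigger on the dependent table can perform a dummy update on the parent's indexed column:
    CREATE OR REPLACE TRIGGER phone_touch_customer
    AFTER INSERT OR UPDATE OR DELETE ON Phone
    FOR EACH ROW
    BEGIN
      UPDATE Customer
         SET companyname = companyname   -- dummy update marks the row for sync
       WHERE customeriid = NVL(:NEW.parentiid, :OLD.parentiid);
    END;
    /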

  • How do you differentiate between indexed and non-indexed prompts?

    Hi,
    I'm trying to fill Webi parameters dynamically, using the Webi REST SDK (4.1 SP04). This environment runs on top of SAP BW and uses BICS to connect to the BEx query.
    When the prompt is implemented as a BEx variable, the call to GET /biprws/raylight/v1/documents/{docid}/parameters will show @type: sapVariable, which tells me it requires the values to be passed as an ID and (optionally) their description.
    Prompts that are defined in the Webi query panel return @type: prompt. This, however, doesn't indicate whether they need the prompt value IDs or just their plain values.
    Example (JSON):
      "parameters": {
        "parameter": [
            "@dpId": "DP0",
            "@type": "prompt",
            "@optional": "false",
            "id": 2,
            "technicalName": "psTestPrompt",
            "name": "TestPrompt",
            "answer": {
              "@type": "Text",
              "@constrained": "false",
              "info": {
                "@cardinality": "Single",
                "lov": {
                  "@refreshable": "false",
                  "@hierarchical": "false",
                  "id": "UNIVERSELOV_DS0.DO1a6"
    In this case, I'll look at the value of @refreshable; if it's true, the prompt requires IDs, if false, just the plain values.
    It's only when a prompt does not have an LOV associated with it (disabled in the prompt properties) that I won't have the lov information when retrieving the parameter information, and thus won't be able to determine whether to pass value IDs or just plain values.
    FYI, replying with prompt value IDs looks something like this:
      "parameters": {
        "parameter": [
            "id": 2,
            "answer": {
              "values": {
                "value": {
                  "@id": "000000123456",
                  "$": "123456"
    Plain values looks like this:
      "parameters": {
        "parameter": [
            "id": 2,
            "answer": {
              "values": {
                "value": 123456
    In short: how do I determine whether to pass values by ID or just their plain value given the prompt template retrieved through a GET /biprws/raylight/v1/documents/{docid}/parameters call?

    Hi Anthony,
    Just FYI, I've also noticed that when you have 2 BEx variables and you only define one of them as a prompt (using the BEx variable window in the Webi Query Panel), the REST SDK will still report all of the BEx variables as prompts, instead of just the one variable that was checked as a prompt.
    (This happens in BI 4.1 SP4 Patch 4.)
    In the query panel:
    Uncheck Use BEx query defined default values at runtime
    Uncheck the Set as prompt checkbox for the second variable.
    REST SDK output for GET /biprws/raylight/v1/documents/{docid}/parameters?lovInfo=false :
      "parameters": {
        "parameter": [
            "@dpId": "DP0",
            "@type": "sapVariable",
            "@optional": "false",
            "id": 0,
            "technicalName": "ZMXXXX",
            "name": "XXXXX",
            "answer": {
              "@type": "Text",
              "@constrained": "true",
              "info": {
                "@cardinality": "Multiple",
                "lov": {
                  "@refreshable": "true",
                  "@hierarchical": "true",
                  "id": "UNIVERSELOV_DS0.:M:STR:ZMXXXX"
            "@dpId": "DP0",
            "@type": "sapVariable",
            "@optional": "false",
            "id": 1,
            "technicalName": "ZYYYYYY",
            "name": "YYYYYYYY",
            "answer": {
              "@type": "Text",
              "@constrained": "true",
              "info": {
                "@cardinality": "Multiple",
                "lov": {
                  "@refreshable": "true",
                  "@hierarchical": "true",
                  "id": "UNIVERSELOV_DS0.:M:STR:ZYYYYYY"

  • Standard and Non-standard exports and queries issue

    Hi,
    We have created a few STANDARD books with standard exports in them. The standard exports use standard queries.
    The ISSUE is that in the Verifications/Filters tab, where we have put in a lot of queries, ONLY the STANDARD queries show up. We would like to have all the queries show up. I am the administrator and still only the standard ones show up.
    I am worried that if I "Save As" the export as a non-standard export, it does not allow me to uncheck the standard tick column and save it; I have to manually change the name of the export. Can we change this in any way?
    Is there a system preference where I can allow all the queries to show up in the Verifications/Filters tab, at least for administrators if not for all? Thanks
    - Ad

    Thanks for your input, but DRM does not seem to be behaving the way you said, or maybe I am still missing something here.
    I created a standard query.
    I have a standard export.
    Now, in the verification/filter tab of my Standard export, I am able to see my standard queries.
    But,
    I am unable to see the Non-Standard Queries.
    HOWEVER,
    If I have a Non-Standard export --> I am able to see ALL the queries. This makes it a lot more flexible in doing things.
    My question is: WHY are my standard exports not able to reach the non-standard queries? What I wanted was to add a twist to my export for analysis purposes for a particular use. If non-standard exports have access to both standard and non-standard queries, it just makes more sense to always use a non-standard export and create non-standard queries.

  • Why is it only possible to run queries on a Distributed cache?

    I found by experimentation that if you put a NearCache (used only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it will throw a runtime exception saying that query operations are not supported on the ReplicatedCache.
    I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations where it is useful (such as in my application) to be able to run a local query on the cache to take advantage of the index APIs, etc, for your searches.

    Kris,
    I believe the only APIs that are currently not supported for ReplicatedCache(s) are "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
    The reason the index functionality was "pushed" out of the 2.x timeframe was an assumption that a ReplicatedCache would hold a not-too-big number of entries, and since all the data is "local" to the querying JVM, the performance of a non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in future releases.
    Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap, there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by the CacheFactory.getReplicatedCache method) using the NearCache construct.
    Gene

Maybe you are looking for

  • Submit a form through email with multiple signatures

    I have a form, where a number of people can digitally sign the form and email it to the next person. Designed in LiveCycle, with javascript and reader extended. On the first sign and email, everything works great. However when the next person gets th

  • Address Book Images Gone Funky

    Adding images to Address Book contacts just results in a random part of the image showing. In iOS I can add images just fine, and they show up in Address Book ok. It's never done this before. Any ideas??

  • Re-sizing grouped shapes

    Recently I have been unable to re-size grouped objects in a page layout. I have about 4 floating shapes forming the image of a framed painting on a wall. When grouped, I can adjust the height of the group, but not the width. I have been doing this sa

  • Kernel panic for unknown reason on internet

    My computer has been freezing and flashing a kernel panic every few days. I can't seem to find a common factor. Sometimes its when I'm checking my email. Othertimes, I'm surfing the internet on Safari. Once it did it when I was trying to send a repor

  • Ipad1 iOS 5.1 : itunes crashes all the time

    Hi all I have got the old IPAD1, installed latest ios 5.1.1. Since then the itunes shop application on ipad crashes all the time. Other applications look ok. I restarted (home,power) and I switched off and on. Did not resolve the issue. Does anybody