DRG-51030: wildcard query expansion resulted in too many terms

Hello,
I have a SQL report in my APEX application that is based on a SQL query using Oracle Text. I am trying to execute the query with a % parameter: select * from mytable where contains(term, '%') > 0
I know that the wildcard_maxterms default maximum is 5000, and in SQL Developer the query runs without any errors, but in APEX I receive DRG-51030: wildcard query expansion resulted in too many terms.
Please help!!!

Hello,
There's a good discussion of that exception here -
How to limit the number of search results returned by oracle text
John
http://jes.blogs.shellprompt.net
http://apex-evangelists.com
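For reference, the usual way to raise that limit is a BASIC_WORDLIST preference attached to the index. A minimal sketch, assuming a CONTEXT index on mytable(term); the preference and index names here are hypothetical and the limit is only an example:
exec ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
exec ctx_ddl.set_attribute('my_wordlist', 'WILDCARD_MAXTERMS', '20000');
create index my_term_idx on mytable(term)
  indextype is ctxsys.context
  parameters ('wordlist my_wordlist');
Note that a search string of just '%' expands to every indexed term, so even a raised limit may not be enough; it is often better to skip the CONTAINS predicate (or require a minimum number of characters) when the user supplies only a wildcard.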

Similar Messages

  • Error - DRG-51030: wildcard query expansion resulted in too many terms

    Hi All,
    My searches on org names against a table of 100 million company names often result in the following error:
    DRG-51030: wildcard query expansion resulted in too many terms
    A sample query would be:
    select v.* --xref.external_ref_party_id, v.*
    from xxx_org_search_x_v v
    where 1 = 1
    and state_province = 'PA'
    and country = 'US'
    and city = 'BRYN MAWR'
    and catsearch(org_name, 'BRYN MAWR AUTO*', 'CITY=''BRYN MAWR''') > 0
    I understand this is caused by the presence of the word AUTO, to which we append a *. (If I remove the AUTO the search works fine.)
    My question is: is there a way to limit the query expansion to only, say, the first 100 results returned from the index?

    Thanks for the reply. This is how the preferences are set:
    exec ctx_ddl.create_preference('STEM_FUZZY_PREF', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_MATCH','AUTO');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_SCORE','60');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','FUZZY_NUMRESULTS','100');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF','STEMMER','AUTO');
    exec ctx_ddl.set_attribute('STEM_FUZZY_PREF', 'wildcard_maxterms',15000) ;
    exec ctx_ddl.create_preference('LEXTER_PREF', 'BASIC_LEXER');
    exec ctx_ddl.set_attribute('LEXTER_PREF','index_stems', 'ENGLISH');
    exec ctx_ddl.set_attribute('LEXTER_PREF','skipjoins',',''."+-/&');
    exec ctx_ddl.create_preference('xxx_EXT_REF_SEARCH_CTX_PREF', 'BASIC_STORAGE');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'K_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'N_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_INDEX_CLAUSE','tablespace ICV_TS_CTX_IDX Compress 2');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'P_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX ');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'I_ROWID_INDEX_CLAUSE','tablespace ICV_TS_CTX_IDX ');
    exec ctx_ddl.set_attribute('xxx_EXT_REF_SEARCH_CTX_PREF', 'R_TABLE_CLAUSE','tablespace ICV_TS_CTX_IDX LOB(DATA) STORE AS (CACHE) ');
    exec ctx_ddl.create_index_set('xxx_m_iset');
    exec ctx_ddl.add_index('xxx_m_iset','city, country');
    exec ctx_ddl.add_index('xxx_m_iset','postal_code, country');
    Users will always use city or postal code when searching for a name. When I run this query -
    SELECT dr$token
    FROM DR$XXX_EXT_REF_SEARCH_CTX_I1$I
    where dr$token like 'AUTO%'
    ORDER BY dr$token desc
    I get more than 1M rows.
    Is there a way to include and search for the city name along with the org name?
    Thanks again..
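    A quick way to gauge how far a given wildcard would expand is to count the matching tokens in the $I table already queried above (same index as in this thread; accessing $I is not officially supported, so treat this as a diagnostic sketch only):
    SELECT count(*)
    FROM   dr$xxx_ext_ref_search_ctx_i1$i
    WHERE  dr$token LIKE 'AUTO%';
    If that count approaches or exceeds the wildcard_maxterms setting, DRG-51030 is expected; the usual options are raising the limit further or requiring more leading characters before allowing the trailing *.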

  • DRG-51030: wildcard query expansion

    Hi,
    I encountered an error when I ran this:
    SELECT /*+use_hash(t$oracle_text)*/
    count(documentid)
    FROM t$oracle_text
    WHERE CONTAINS (
    dummy,
    'near(((approv%=underwrit%=report%),(waive%=exception%=override%)),10)',
    1) > 0
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    I have a few questions:
    1) What is this = sign in the WHERE clause?
    2) How do I get rid of this error?
    3) Is there any other alternative that runs this query without hitting the wildcard query expansion limit?
    Thanks,

    Hello,
    There's a good discussion of that exception here -
    How to limit the number of search results returned by oracle text
    John
    http://jes.blogs.shellprompt.net
    http://apex-evangelists.com
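    (On question 1: the = sign is Oracle Text's EQUIV operator, which treats the terms on either side as equivalent.) For questions 2 and 3, one common remedy is to raise WILDCARD_MAXTERMS on the index's wordlist. A minimal sketch with hypothetical preference and index names; verify the exact REBUILD syntax against your Oracle Text version:
    exec ctx_ddl.create_preference('my_wordlist', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('my_wordlist', 'WILDCARD_MAXTERMS', '20000');
    alter index my_text_idx rebuild parameters ('replace metadata wordlist my_wordlist');
    Alternatively, reduce the expansion itself, for example by requiring more characters before each trailing % so that fewer terms match.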

  • DRG-51030: wildcard query expansion after using @

    The following query returns one row:
    select * from userinterface where contains (searchtx, 'Mustermann within name and %gmx.de within email')>0;
    If I add @ (=> %@gmx.de within email) the following error occurs:
    select * from userinterface where contains (searchtx, 'Mustermann within name and %@gmx.de within email')>0;
    returns
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-20000: Oracle Text error:
    DRG-51030: wildcard query expansion resulted in too many terms
    Is the character @ a special character causing the wildcard query expansion? "@gmx.de" is more restrictive compared to "gmx.de" (Oracle 10.2.0.4).
    select * from userinterface where contains (searchtx, 'Mustermann within name and %de within email')>0;
    is also working and returns three rows.

    Like Roger said, you could include the @ in the printjoins attribute of your basic_lexer and use a substring_index; that might allow you to search for '%@gmx.de' and find '[email protected]' without encountering the wildcard expansion error. Increasing wildcard_maxterms would also make the error less likely. However, I believe that is using Oracle Text in a manner other than intended and will cause more problems than it solves, such as increasing the size of your domain index tables by storing substrings and making your searches slower.
    If you just leave things the way they are, then @ and . are treated as spaces and '[email protected]' is seen as 'somebody gmx de' and indexed as three separate words, so searching for '@gmx.de' will search for 'gmx de' and will find it, so there is no need for a leading wildcard. If you want to avoid errors caused by users entering unnecessary leading wildcards with such special characters, you can replace them in the string before searching. I usually find it convenient to create a cleanup function that handles any such problem characters, then use that function in code that takes a bind variable for the search string. Please see the example below.
    SCOTT@orcl_11gR2> -- table:
    SCOTT@orcl_11gR2> create table userinterface
      2    (id       number,
      3       searchtx  xmltype)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- lexer:
    SCOTT@orcl_11gR2> begin
      2    ctx_ddl.create_preference('german_lexer','basic_lexer');
      3    ctx_ddl.set_attribute('german_lexer','composite','german');
      4    ctx_ddl.set_attribute('german_lexer','mixed_case','yes');
      5    ctx_ddl.set_attribute('german_lexer','alternate_spelling','german');
      6  end;
      7  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- index:
    SCOTT@orcl_11gR2> CREATE INDEX ui_t_ind on userinterface (searchtx)
      2       indextype is ctxsys.context
      3            PARAMETERS ('
      4                 SECTION GROUP ctxsys.auto_section_group
      5                 LEXER        german_lexer
      6                 MEMORY        100000000
      7                 SYNC        (MANUAL)'
      8            )
      9  /
    Index created.
    SCOTT@orcl_11gR2> -- insert data:
    SCOTT@orcl_11gR2> insert /*+ APPEND */ into userinterface (id, searchtx)
      2  values (1, xmltype (
      3  '<?xml version="1.0"?>
      4  <data>
      5    <name>Mustermann</name>
      6    <email>[email protected]</email>
      7  </data>'))
      8  /
    1 row created.
    SCOTT@orcl_11gR2> insert /*+ APPEND */ into userinterface (id, searchtx)
      2  values (2, xmltype (
      3  '<?xml version="1.0"?>
      4  <data>
      5    <name>Hans Haeberle</name>
      6    <email>[email protected]</email>
      7  </data>'))
      8  /
    1 row created.
    SCOTT@orcl_11gR2> -- synchronize, then optimize with rebuild:
    SCOTT@orcl_11gR2> begin
      2    ctx_ddl.sync_index ('ui_t_ind');
      3    ctx_ddl.optimize_index ('ui_t_ind', 'rebuild');
      4  end;
      5  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- what is tokenized, indexed, and searchable:
    SCOTT@orcl_11gR2> select token_text from dr$ui_t_ind$i
      2  /
    TOKEN_TEXT
    DATA
    EMAIL
    Haeberle
    Hans
    Häberle
    Mann
    Muster
    Mustermann
    NAME
    com
    de
    gmx
    somebody
    unknown
    whatever
    15 rows selected.
    SCOTT@orcl_11gR2> -- function to clean up search strings:
    SCOTT@orcl_11gR2> create or replace function cleanup
      2    (p_string in varchar2)
      3    return         varchar2
      4  as
      5    v_string     varchar2 (100);
      6  begin
      7    v_string := p_string;
      8    v_string := replace (p_string, '%@', ' ');
      9    return v_string;
    10  end cleanup;
    11  /
    Function created.
    SCOTT@orcl_11gR2> show errors
    No errors.
    SCOTT@orcl_11gR2> -- example search strings, queries, and results:
    SCOTT@orcl_11gR2> column name  format a15
    SCOTT@orcl_11gR2> column email format a20
    SCOTT@orcl_11gR2> variable search_string varchar2 (100)
    SCOTT@orcl_11gR2> begin
      2    :search_string :=
      3        'Mustermann within name and %@gmx.de within email';
      4  end;
      5  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> select id,
      2           extractvalue (u.searchtx, '//name') name,
      3           extractvalue (u.searchtx, '//email') email
      4  from   userinterface u
      5  where  contains (searchtx, cleanup (:search_string)) > 0
      6  /
            ID NAME            EMAIL
             1 Mustermann      [email protected]
    1 row selected.
    SCOTT@orcl_11gR2> begin
      2    :search_string :=
      3        'Häberle within name and unknown@whatever within email';
      4  end;
      5  /
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> select id,
      2           extractvalue (u.searchtx, '//name') name,
      3           extractvalue (u.searchtx, '//email') email
      4  from   userinterface u
      5  where  contains (searchtx, cleanup (:search_string)) > 0
      6  /
            ID NAME            EMAIL
             2 Hans Haeberle   [email protected]
    1 row selected.
    SCOTT@orcl_11gR2>

  • Error MDX result contains too many cells (more than 1 million). (WIS 10901)

    Hi,
    We have developed a universe on a BI query and built a report on it. But while running this BO query in Web Intelligence we get the following error:
    A database error occurred. The database error text is: Error in MDDataSetBW.GetCellData.  MDX result contains too many cells (more than 1 million). (WIS 10901)
    This BO query is restricted to one document number.
    Now when I check in the BI cube there are not more than 300-400 records for that document number.
    If I restrict the BO query by document number, delivery number, material and acknowledged date, then the query runs successfully.
    Can anyone please help with this issue?

    Follow this article to get the MDX generated by the WebI report:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90b02218-d909-2e10-1988-a2ca74547900
    Then try to execute the same MDX in the MDXTEST transaction in BW.

  • MDX result contains too many cells - But release note applied!!!!!!

    Hi experts!
    When I create a query with my InfoView, it returns the following error:
    <ERROR COMPONENT="WIS" ERRORCODE="10901" ERRORTYPE="USER" MESSAGE="A database error occurred. The database error text is: Error in MDDataSetBW.GetCellData.  MDX result contains too many cells (more than 1 million). (WIS 10901)" PREFIX="ERR">
    I am trying to understand whether there is a problem with the SAP Notes. In particular, I asked the technical people whether the following note had been installed on my system:
    Note 931479 - MDX: More than 1,000,000 instances per axis
    The technical people told me that this note had been installed because the system is at level 23.
    So what could be causing my MDX to be unable to return more than 1 million cells?
    Thank you in advance!

    Hi,
    What is the release of the SAP BW system, including patch level?
    What is the release of the SAP BusinessObjects system, including patch level?
    Ingo

  • MDX result contains too many cells - NUMC 6

    Hi to all.
    I'm using a BEx query to provide the fields for my universe.
    I know that when there is an element defined as NUMC (6), some errors may occur in the MDX extraction (e.g. MDX result contains too many cells (more than 1 million)).
    Can I set something in my BEx query or in my universe definition so that I do not have to change the characteristic in my SAP BW system?
    Indeed, changing the SAP BW type definition of the characteristic would cause me some bureaucratic and formal problems.
    Thank you in advance.

    Hi Ingo,
    Are you sure? When I try to run a query that includes a NUMC(6) field and another field, I get the error attached above.
    When I run the same query without the NUMC(6) field, the query completes correctly.
    Moreover, SAP Note 1378064 states under "Reason and Prerequisites": The internal data types are incorrect due to NUMC(6). The conversion to integer is required.
    But I don't know whether this conversion is allowed in the BEx query step or in the universe field definition.
    Any advice will be accepted!

  • "This webpage has a redirect loop"     "....resulted in too many redirects"

    Hi, I am trying to connect to a cloud account for the first time.
    BUT I get this error here...
    This webpage has a redirect loop
    The webpage at https://database-xxxxxx.db.us1.oraclecloudapps.com/apex/f?p=4500:1000:118209995883759 has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer.
    Here are some suggestions:
    Reload this webpage later.
    Learn more about this problem.
    Any help appreciated,
    Bill

    Found the problem!
    Oracle needs to capture that error and make it more descriptive, maybe "User does not have the proper role to view this page".
    I went into the Identity Management Console (https://idmconsole.us1.cloud.oracle.com/identity/faces/pages/Identity.jspx) for my domain.
    Under "Manage Roles" I hit Search (because the roles don't show if you don't hit the Search button)
    and then assigned the "Database Developer" and "Database Administrator" roles to the users that were getting the error.

  • Tag Query History mode returning too many rows of data

    I am running a Tag Query from HQ to a plant site and want to limit the amount of data that returns to the minimum required to display trends on a chart.  The minimum required is subjective, but will be somewhere between 22 and 169 data points for a week's data.  Testing and viewing the result is needed to determine what is an acceptable minimum.
    I build a Tag Query with a single tag and set it to History Mode.  I set a seven day period going midnight to midnight.  And I set the row count to 22.  When I execute the query it returns 22 data points.  But when I go to visualization, I get 565 datapoints.  So obviously that is not what I want as I want a very slim dataset coming back from the IP21 server (to minimize the load on the pipe). 
    Any suggestions?

    Hi Michael,
    it looks to me like you have enabled the "Use Screen Resolution" option in your display template or in the applet HTML. Setting this option makes the display template fetch as many rows as there are pixels in the chart area. Like setting a rowcount in the applet HTML as a param, this will override any rowcount limitations you have set at the Query Template level...
    Hope this helps,
    Sascha

  • Chart duplicating Series results in too many legend items

    I've created an item for this on bugs.adobe.com - https://bugs.adobe.com/jira/browse/FLEXDMV-2258
    My client requires this in order to finish the application I am working on so I'm hoping someone has a workaround or suggestions for me.
    The forum would not allow me to attach my jpg and mxml file but they are available on the bug url above
    Copied from the bug:
    My application has a collection of machines, each of which has the same properties. I want to graph those properties, but organize them by dates (the values of the properties are different for different days). So the y axis items are dates, and each date has a cluster of machines. Each date has the same number of machines and each machine has the same properties. The issue I'm having is that I want each property (in the example there are 3 properties, represented by BarSeries) to have the same color for each machine in each cluster. So overall, there should only be 3 colors and the legend should have 3 items. Instead, the chart creates different colors for each machine (represented by a BarSet), so there are 3 (machines/BarSets) times 3 (properties/BarSeries) which gives 9 colors and 9 items in the legend. I'm looking to have 3 colors and 3 items in the legend. I was hoping there would be a property on BarSet or BarChart to specify to share BarSeries among the BarSets rather than it forcing unique instances of BarSeries for the BarSets. I tried storing just 3 instances of BarSeries (one for each property) and assigning each BarSet.series property to the stored array of BarSeries, but the outcome was that it would only display one item for each cluster rather than 3.
    Thank you for any suggestions!

    The legend isn't the only problem.  The BarSets are being rendered with different colors for each BarSeries when I want each BarSet to have the same set of colors.

  • Limit number of rows from wildcard expansion- DRG-51030

    We use a CONTEXT index in 11g to search on a text DB column, "Name".
    This is used in a UI to show an autosuggest list of 25 matching names.
    When the end user types an 'a' we want to show a list of the first 25 names that contain an 'a'.
    We hit the issue of too many matches in the wildcard expansion query:
    DRG-51030: wildcard query expansion resulted in too many terms
    This is a frequent use case when the user types just 1 character ('a' will easily match over 50K names in our case).
    Is there a way to make the wildcard expansion query only return the first 25 rows?
    We never show more than 25 names in our UI - so we would like the expansion query to also return max of 25 rows.
    Our query is:
    SELECT ResEO.DISPLAY_NAME,
    ResEO.RESOURCE_ID,
    ResEO.EMAIL
    FROM RESOURCE_VL ResEO
    WHERE CONTAINS (ResEO.DISPLAY_NAME , '%' || :BindName || '%' )>0
    Also,
    Is there a way to use CTXCAT type of index and achieve this (expansion query limit of 25)?
    We are considering switching to CTXCAT index based on documentation that recommends this type of an index for better performance.

    Your best bet may be to look up the words directly in the $I token table.
    If your index is called NAME_INDEX you could do:
    select /*+ FIRST_ROWS(25) */ token_text
    from  ( select token_text
            from   dr$name_index$i
            where  token_text like 'A%' )
    where rownum < 26;
    That should be pretty quick.
    However, if you really want to do %A% - any word which has an A in it - it's not going to be so good, because this will prevent the index being used on the $I table - so it's going to do a full table scan. In this case you really need to think a bit harder about what you're trying to achieve and why. Does it really make any sense to return 25 names which happen to have an A in them? Why not wait until the user has typed a few more characters - 3 perhaps? Or use my technique for one or two letters, then switch over to yours at three characters (or more).
    A couple of notes:
    - Officially, accessing the $I table is not supported, in that it could change in some future version, though it's pretty unlikely.
    - I trust you're using the SUBSTRING_INDEX option if you're doing double truncated searches - a wild card at the beginning and end. If not, your performance is going to be pretty poor.
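    For the double-truncated '%...%' case, SUBSTRING_INDEX is a BASIC_WORDLIST attribute that must be in place when the index is created (or the index rebuilt afterwards), and it enlarges the index. A minimal sketch with hypothetical preference, table, and index names:
    exec ctx_ddl.create_preference('name_wordlist', 'BASIC_WORDLIST');
    exec ctx_ddl.set_attribute('name_wordlist', 'SUBSTRING_INDEX', 'TRUE');
    exec ctx_ddl.set_attribute('name_wordlist', 'WILDCARD_MAXTERMS', '15000');
    create index name_index on names_table(display_name)
      indextype is ctxsys.context
      parameters ('wordlist name_wordlist');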

  • Too many simultaneous persistent searches

    The Access Manager (2005Q1) in our deployment talks to load-balanced Directory Server instances and as recommended by Sun we have set the value of the property com.sun.am.event.connection.idle.timeout to a value lower than load-balancer timeout.
    However on enabling this property, we see the following error messages in the debug log file amEventService:
    WARNING: EventService.processResponse() - Received a NULL Response. Attempting to re-start persistent searches
    EventService.processResponse() - received DS message => [LDAPMessage] 85687 SearchResult {resultCode=51, errorMessage=too many simultaneous persistent searches}
    netscape.ldap.LDAPException: Error result (51); too many simultaneous persistent searches; LDAP server is busy
    Any idea - why this occurs?
    Do we need to modify the value associated with the attribute nsslapd-maxpsearch ?
    How many Persistent searches does Access Manager fire on the Directory Server ? Can this be controlled ?
    TIA,
    Chetan

    I am having an issue where Access Manager does not seem to fire any persistent searches at all to the DS.
    We have disabled the properties that turn off certain types of persistent searches, so in reality there should be lots of persistent searches being fired at the DS.
    Also, there does seem to be some communication between the DS and the Access Manager instance, as the AM instance we work on talks only to a particular DS instance. But they do not see any persistent searches being fired from our side at all; the only time they did see some persistent searches was when I ran a persistent search from the command line.
    What could be the issue?
    thanks
    anand

  • ORA-02042: too many distributed transactions

    1. I have been working on a portal application for several weeks.
    2. Portal is installed in 1 instance and my data is in another instance.
    3. I've had no ora-02042 problems in the devt environment set up that way.
    4. I've recently migrated the application/pages to a test environment set up that way.
    5. I've been working in the test environment for several days with no problems.
    6. For some portlets on some pages I'm now getting:
    Failed to parse query
    Error:ORA-02042: too many distributed transactions
    Error:ORA-02042: too many distributed transactions
    ORA-02063: preceding line from
    LINK_TO_TEST (WWV-11230) Failed to parse as PACID_SCHEMA -
    select user_action.userid, action.name,
    user_action.created_date,
    user_action.created_by, action.action_id,
    'del' del_link from user_action , action
    where user_action.action_id =
    action.action_id and user_action.userid
    LIKE UPPER(:userid) order by USERID
    ASC, NAME ASC, CREATED_DATE
    ASC (WWV-08300)
    7. I cannot find anything about this error in the db log files for either instance.
    8. I've increased distributed transactions to 200 in the portal db and bounced
    it. Still get the error.
    9. No records in dba_2pc_pending or dba_2pc_neighbors in the portal instance.
    10. I get the error in various reports and form LOVs at different times. Pages with a lot of portlets seem to be more prone to the error.
    Here is a typical LOV error:
    COMBOBOX LOV ERROR:
    LOV: "ASIMM_1022.TITLE_LOV"
    Parse Message: Parse as pre-set global: "PACID_SCHEMA".
    Find Message: LOV is of type DYNAMIC (LOV based on SQL query).
    Query: "select code display_column, code return_column from codes where table_id = 'OFFICER_TITLE' order by code"
    wwv_parse.parse_as_user: ORA-02042: too many distributed transactions ORA-02063: preceding line from LINK_TO_TEST wwv_parse.parse_as_user: Failed to parse as PACID_SCHEMA - select code display_column, code return_column from codes where table_id = 'OFFICER_TITLE' order by code wwv_security.check_comp_privilege: Insufficient privileges. wwpre_utl.get_path_id: The preference path does not exist: ORACLE.WEBVIEW.PARAMETERS.217_USER_INPUT_ASIMM_5423428
    Why are these select statements being interpreted as distributed transactions? Note:1032658.6 suggests I "USE SET TRANSACTION READ ONLY". Is this necessary? If so how?
    What puzzles me is that this set up has been working fine for several days. I don't know of any changes to my environment apart from me increasing distributed transactions.

    Hi,
    this is information from Metalink:
    The ORA-2042 indicates that you should increase the parameter distributed_transactions. The ORA-2063 indicates that this must be done at the remote database.
    Explanation:
    If the distributed transaction table is full on either side of the database link you get the error ORA-2042:
    ORA-02042: "too many distributed transactions"
    Cause: The distributed transaction table is full because too many distributed transactions are active.
    Action: Increase the INIT.ORA parameter "distributed_transactions" or run fewer transactions. If you are sure you don't have too many concurrent distributed transactions, this indicates an internal error and support should be notified. An instance shutdown/restart would be a workaround.
    When the error is generated at the remote database it is accompanied by an ORA-2063. In this case the parameter distributed_transactions must be increased at the remote database. If there is no ORA-2063, the parameter distributed_transactions must be increased at the local database.
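    In practice the check and change look roughly like this (a sketch only; distributed_transactions is a static parameter that exists in the older releases this thread refers to and was removed in later Oracle versions, and the change belongs on whichever database the ORA-02063 points at):
    -- check the current value
    select name, value from v$parameter where name = 'distributed_transactions';
    -- raise it: either edit DISTRIBUTED_TRANSACTIONS in the init.ora, or with an spfile:
    alter system set distributed_transactions = 200 scope = spfile;
    -- then restart the instance (it is a static parameter)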

  • CF6.1: Administrator Login Failure - too many redirects

    We are running CF 6.1 with a Commonspot content management system and IIS 6.0 as the web server.  The CF Administrator login screen comes up and I am able to enter the password.  Depending upon the browser, various error messages (see below) appear that seem to indicate redirect issues between index.cfm and enter.cfm.
    There are two IIS web sites (commonspot and Administration) which both have CFIDE in them.  Renaming the "administrator" folder under either web site causes the CF admin page to be not found. I'm looking for any troubleshooting help.  I think there is an issue between index.cfm and enter.cfm in the login process.  The server is virtual, so we can snap back to the original state if the new config fails.
    Thank you,
    Ron Deluce
    Error messages:
    1) Firefox 3.5.2
         The page isn't redirecting properly
         Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
        *   This problem can sometimes be caused by disabling or refusing to accept
              cookies.
    2) IE 8.0.3 (sits there with an apparent URL loop in the gutter window).
    3) Google Chrome 2.0.172.43
    This webpage has a redirect loop.
    The webpage at http://dtsc-cm.dtsc.ca.gov/cfide/administrator/index.cfm has resulted in too many redirects. Clearing your cookies for this site may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer.
    Here are some suggestions:
    Reload this web page later.
    Learn more about this problem.
    More information on this error
    Below is the original error message
    Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.
    4) Safari (Windows) 4.0.3
    Safari can’t open the page.
    Too many redirects occurred trying to open “http://dtsc-cm.dtsc.ca.gov/cfide/administrator/index.cfm”. This might occur if you open a page that is redirected to open another page which then is redirected to open the original page.

    This problem happens only on a Mac machine. So it is Safari 3.2.x and upwards (incl. Safari 4.x) on Mac (Tiger or Leopard) where this problem exists. Older versions of Safari work fine. One interesting data point: this problem occurs only when we access a partner application (which uses the PL/SQL SSO SDK). This problem does not happen for OIDDAS or Portals!!!
    Any experiences you can share will be a great help...
    Thanks..

  • Unable to create report. Query produced too many results

    Hi All,
    Does anyone know how to avoid the message "Unable to create report. Query produced too many results" for the Grid report type in PerformancePoint 2010? When the MDX query returns a large amount of data, this message appears. Is there a way to get the full result set into the grid anyway?
    I have set the data source query time-out under Central Administration - Manage Service Applications - PerformancePoint Service Application - PerformancePoint Service Application Settings to 3600 seconds.
    Here is the Event Viewer error logged on the server:
    1. An exception occurred while running a report.  The following details may help you to diagnose the problem:
    Error Message: Unable to create report. Query produced too many results.
            Contact the administrator for more details.
    Dashboard Name:
    Dashboard Item name:
    Report Location: {3592a959-7c50-0d1d-9185-361d2bd5428b}
    Request Duration: 6,220.93 ms
    User: INTRANET\spsdshadmin
    Parameters:
    Exception Message: Unable to create report. Query produced too many results.
    Inner Exception Message:
    Stack Trace:    at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.ExtractReportViewData()
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.CreateRenderedView(StringBuilder sd)
       at Microsoft.PerformancePoint.Scorecards.ServerRendering.NavigableControl.RenderControl(HtmlTextWriter writer)
    PerformancePoint Services error code 20604.
    2. Unable to create report. Query produced too many results.
    Microsoft.PerformancePoint.Scorecards.BpmException: Unable to create report. Query produced too many results.
       at Microsoft.PerformancePoint.Scorecards.Server.Analytics.AnalyticQueryManager.ExecuteReport(AnalyticReportState reportState, DataSource dataSource)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportBase(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer, String formattingDimensionName)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
    PerformancePoint Services error code 20605.
    Thanks in advance for your help.

    Hello,
    I would like you to try the following to adjust your readerquotas.
    Change the values of the parameters listed below to a larger value. We recommend that you double the value and then run the query to check whether the issue is resolved. To do this, follow these steps:
    On the SharePoint 2010 server, open the Web.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\Web Services\PpsMonitoringServer\
    Locate and change the below values from 8192 to 16384.
    Open the Client.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\WebClients\PpsMonitoringServer\
    Locate and change the below values from 8192 to 16384.
    After you have made the changes, restart Internet Information Services (IIS) on the SharePoint 2010 server.
    <readerQuotas
        maxStringContentLength="2147483647"
        maxNameTableCharCount="2147483647"
        maxBytesPerRead="2147483647"
        maxArrayLength="2147483647"
        maxDepth="2147483647"
    />
    Thanks
    Heidi Tr - MSFT
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
