Odd QoQ issue when querying Solr collection

Hello, everyone.
I've got a query-of-queries issue that has me stumped.  Maybe I'm just missing something very simple, but this has me really confused.
I have a Solr collection that is indexing a few tables in an Oracle database.  Let's call it "hdq", for this discussion.
I wrote a semi-complex query over related tables that the CFINDEX uses to index the data.  This is working just fine.
I created the Solr collection in the CF9 CFAdmin, and am using the following to index with:
<cfindex action="refresh" collection="hdq" key="QUESTION_ID" type="custom" title="QUESTION_TITLE" query="search_questions" body="QUESTION_TX,QUESTION_TITLE,CATEGORY_NM,TAG_NM,ANSWER_TITLE,ANSWER_TX"
    custom1="QUESTION_STATUS" custom2="TAG_NM" custom3="QUESTION_STATUS" custom4="QUESTION_TYPE" category="CATEGORY_NM">
Then I do a CFSEARCH and name it "hd_questions".  Again, so far, so good, no problems.
If I do a CFDUMP of "hd_questions", one of the columns is KEY (which is QUESTION_ID in the database).  If I CFOUTPUT the collection, KEY is there.
If I run a QoQ against the CFSEARCH result and use SELECT custom3, score, summary, context, key FROM hd_questions, I get an error message:
Encountered "key. Incorrect Select List, Incorrect select column,
...then it gives the line number of the page that produced the error, which points at:
<cfquery dbtype="query" name="hd_results">
Am I missing something simple, here?  KEY is in the collection, I can see it in CFDUMP, I can see it in CFOUTPUT.  But if I query the collection and try to select KEY, there is an error.
Any thoughts/ideas?
Thank you,
^_^

KEY is a reserved word in ColdFusion, so it can't be used directly in a QoQ without escaping it.  Try wrapping it in square brackets, i.e. [key].
It may also help to give it an alias, e.g. SELECT [key] AS someKey
See the list of reserved words here: http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec173d0-7fff.html and the QoQ guide to using reserved words here: http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec0e4fd-7ff0.html#WSc3ff6d0ea77859461172e0811cbec22c24-7008
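For example, a minimal sketch (the criteria value and the question_id alias here are illustrative, not from the original post):
<cfsearch name="hd_questions" collection="hdq" criteria="#form.searchterm#">
<!--- KEY is reserved in QoQ, so escape it with square brackets and give it an alias --->
<cfquery dbtype="query" name="hd_results">
    SELECT custom3, score, summary, context, [key] AS question_id
    FROM hd_questions
</cfquery>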

Similar Messages

  • Receiving 904 error when querying local collections

    We've been receiving an intermittent bug within several packaged routines when we attempt to query local table collections.
    We've bounced between using the old "THE(SELECT(CAST...) from dual) d" and the new "TABLE(CAST...) d" syntax for several weeks.
    On recompilation of the packages, a 904 error is raised on casting out of the collection. We literally put back the old/new syntax and all is well until the next build, when we receive the same 904 error once again.
    The problem seems to arise most often in objects that have methods which either use other packaged code or access table data. The collections are valid at runtime in all occurrences of the issue.
    We're running 8.1.6 Release 3 on Sun boxes. Oddly, our development NT boxes rarely exhibit the error, even with hundreds of builds a day.
    Has anyone come across this same issue, and is there a fix or a simple workaround (not catching the 904 error and trying the other syntax flavor - too much maintenance)?
    Thanks in advance
    Cheers
    S T U E

    I had the same or a very similar problem.
    I solved it by changing ${oracle.jdeveloper.deploy.dir} in the internal-targets.xml file to the middleware home (mine was /oracle/fmwhome, using the new PS6 example VirtualBox download).
    This is the old path in the internal-targets.xml file: /oracle/jdevhome/jdeveloper/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib
    Here is the new path: /oracle/fmwhome/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib
    This reference is around line 12, in the taskdef:
    <taskdef resource="net/sf/antcontrib/antcontrib.properties">
        <classpath>
            <pathelement location="${middleware.install.home.directory}/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar"/>
        </classpath>
    </taskdef>
    I changed the pathelement line; middleware.install.home.directory is a variable that points to /oracle/fmwhome.
    Hope this helps.

  • Latest Power Query issues with data load to data models built with an older version + issue when the query is changed

    We have a tool built in Excel + Power Query version 2.18.3874.242, 32-bit (no PowerPivot), using data load to the data model (not to the workbook). There are data filters linked to Excel cells, inserted into the OData query before data is pulled.
    The Excel tool uses organisational credentials to authenticate.
    System config: Win 8.1, Office 2013 (32 bit)
    The tool runs for all users as long as they do not upgrade to PowerQuery_2.20.3945.242 (32-bit).
    Once upgraded, users can no longer get the data to load to the model. Data still loads to the workbook, but the model breaks down. Resetting the load to the data model erases all measures.
    Here are the exact errors users get:
    1. [DataSource.Error] Cannot parse OData response result. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
    2. The Data Model table could not be refreshed: There isn't enough memory to complete this action. Try using less data or closing other applications. To increase memory available, consider ......

    Hi Nitin,
    Is this still an issue? If so, can you kindly provide the details that Hadeel has asked for?
    Regards,
    Michael Amadi
    Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
    Website: http://www.nimblelearn.com, Twitter:
    @nimblelearn

  • Performance issues when creating a Report / Query in Discoverer

    Hi forum,
    Hope you can help; this involves a performance issue when creating a report/query.
    I have a Discoverer report that currently takes less than 5 seconds to run. After I add a condition to bring back only Batch Status = 'Posted', we cancelled the query after it reached 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
    Please see attached the SQL Inspector Plan:
    Before Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    AND-EQUAL
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
    INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_N1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    After Condition
    SELECT STATEMENT
    SORT GROUP BY
    VIEW SYS
    SORT GROUP BY
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    NESTED LOOPS
    NESTED LOOPS OUTER
    NESTED LOOPS
    TABLE ACCESS FULL GL.GL_JE_BATCHES
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
    INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
    INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
    INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
    INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
    INDEX RANGE SCAN GL.GL_JE_LINES_U1
    INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
    TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
    INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
    TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
    INDEX RANGE SCAN GL.GL_PERIODS_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
    INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
    TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
    INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
    Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
    Many thanks,
    Lance

    Hi Rod,
    I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 mins. To test, I changed it to (Batch Status||'' = 'Unposted') and the query was returned within seconds again.
    I’ve been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
    I think the problem is with the column using DECODE. When querying the column in TOAD, the value 'P' is returned, but in Discoverer the condition is applied to the decoded value 'Posted'. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this? (See the sketch after the DECODE listing below.)
    Lance
    DECODE( JOURNAL_BATCH1.STATUS,
    '+', 'Unable to validate or create CTA',
    '+*', 'Was unable to validate or create CTA',
    '-','Invalid or inactive rounding differences account in journal entry',
    '-*', 'Modified invalid or inactive rounding differences account in journal entry',
    '<', 'Showing sequence assignment failure',
    '<*', 'Was showing sequence assignment failure',
    '>', 'Showing cutoff rule violation',
    '>*', 'Was showing cutoff rule violation',
    'A', 'Journal batch failed funds reservation',
    'A*', 'Journal batch previously failed funds reservation',
    'AU', 'Showing batch with unopened period',
    'B', 'Showing batch control total violation',
    'B*', 'Was showing batch control total violation',
    'BF', 'Showing batch with frozen or inactive budget',
    'BU', 'Showing batch with unopened budget year',
    'C', 'Showing unopened reporting period',
    'C*', 'Was showing unopened reporting period',
    'D', 'Selected for posting to an unopened period',
    'D*', 'Was selected for posting to an unopened period',
    'E', 'Showing no journal entries for this batch',
    'E*', 'Was showing no journal entries for this batch',
    'EU', 'Showing batch with unopened encumbrance year',
    'F', 'Showing unopened reporting encumbrance year',
    'F*', 'Was showing unopened reporting encumbrance year',
    'G', 'Showing journal entry with invalid or inactive suspense account',
    'G*', 'Was showing journal entry with invalid or inactive suspense account',
    'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
    'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
    'I', 'In the process of being posted',
    'J', 'Showing journal control total violation',
    'J*', 'Was showing journal control total violation',
    'K', 'Showing unbalanced intercompany journal entry',
    'K*', 'Was showing unbalanced intercompany journal entry',
    'L', 'Showing unbalanced journal entry by account category',
    'L*', 'Was showing unbalanced journal entry by account category',
    'M', 'Showing multiple problems preventing posting of batch',
    'M*', 'Was showing multiple problems preventing posting of batch',
    'N', 'Journal produced error during intercompany balance processing',
    'N*', 'Journal produced error during intercompany balance processing',
    'O', 'Unable to convert amounts into reporting currency',
    'O*', 'Was unable to convert amounts into reporting currency',
    'P', 'Posted',
    'Q', 'Showing untaxed journal entry',
    'Q*', 'Was showing untaxed journal entry',
    'R', 'Showing unbalanced encumbrance entry without reserve account',
    'R*', 'Was showing unbalanced encumbrance entry without reserve account',
    'S', 'Already selected for posting',
    'T', 'Showing invalid period and conversion information for this batch',
    'T*', 'Was showing invalid period and conversion information for this batch',
    'U', 'Unposted',
    'V', 'Journal batch is unapproved',
    'V*', 'Journal batch was unapproved',
    'W', 'Showing an encumbrance journal entry with no encumbrance type',
    'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
    'X', 'Showing an unbalanced journal entry but suspense not allowed',
    'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
    'Z', 'Showing invalid journal entry lines or no journal entry lines',
    'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ),
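    One possible workaround, sketched under the assumption that you can filter on the raw column (for example via a calculated item or a custom folder) instead of the decoded label: compare against the underlying status code, so the condition is evaluated directly on the stored value rather than on the DECODE expression. Column names below are assumed from the standard GL schema:
    -- sketch only: 'P' is the code that DECODE translates to 'Posted'
    SELECT JOURNAL_BATCH1.NAME, JOURNAL_BATCH1.STATUS
    FROM GL.GL_JE_BATCHES JOURNAL_BATCH1
    WHERE JOURNAL_BATCH1.STATUS = 'P';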

  • Hi there! So I am using Lightroom 3 on a pc and I have ran into an issue when exporting images. The DPI and image size (in inches) that I am selecting during the export process. For example I just exported a collection and set the dpi to 180 and the size

    Hi there! So I am using Lightroom 3 on a PC and I have run into an issue when exporting images: the DPI and image size (in inches) that I select during the export process are not what I get. For example, I just exported a collection and set the DPI to 180 and the size to 7 inches on the long edge. My exported result is 457 DPI and sized at 3200x2134 pixels. Any ideas on why this is happening and what I can do to correct it?

    The DPI setting in a digital image has no meaning at all. You need to learn how to calculate what you need in your exported image. The only measurement in a digital image that has any meaning is the number of pixels in each direction. It doesn't matter what you set that DPI to (actually it's PPI or pixels per inch). The image will have the same number of pixels regardless of the setting. If you need an image that is 5 x 7" (for example) at 200 PPI then you would want an image that measured:
    5 x 200 = 1000 pixels
    7 x 200 = 1400 pixels
    So you would need an image that is 1000 x 1400 pixels to have a 5 x 7" image at 200 PPI. The reason your exported image had such a high PPI setting is that you specified the number of inches you wanted the image to be, and there were enough pixels in the image for it to calculate out to that high PPI setting (3200 pixels ÷ 7 inches ≈ 457 PPI, which is exactly the figure you saw).
    I apologize, I don't explain this very well. But you need to learn to do the math to determine how large you really want your exported images to be.

  • Issue when using the Broadcaster in Query Designer

    Hi All,
    We are having issues when we try to distribute workbooks: it comes up with a workbook page, but nothing shows on the page. The same thing happens when I try to publish from Query Designer using the Broadcaster: the page does not show anything. I am not sure why it is doing that. It was working before, when, once I distributed a workbook, it asked me to make a new setting to broadcast the workbook.
    Your answers are greatly appreciated; I will assign maximum points if it is solved.
    Regards
    Abyie

    Hi Gurus,
    I was able to successfully configure the Launch_Broadcaster. Now the broadcast settings in the Web template work the same as when we launch the query to the web from Query Designer (the SEND button function). However, the Broadcaster in the Web template (WAD) does not have the XML (MS Excel) output format. It has only MHTML, HTML, PDF & Online Link to Current Data.
    Is there a setting we need to perform in order to have the XML (Excel) output format in WAD reports? Please advise.
    Thanks Much!

  • Issue when 2 consolidation group hierarchies are used in one single query

    Dear experts,
    I have an issue.
    I need to build a query where we compare our forecast data (figures for year end 12.2009) to our plan 2010 data.
    The problem is that we changed (in our consolidation system, SEM BCS) our consolidation group hierarchy for next year only (for the plan: the change has been done from March 2010 on version P1, the version used for the plan).
    Ex:
    In forecast, the structure is as below:
    We have a zone France/Spain/Portugal that includes:
    1. the business unit France 1, which includes the French companies A, B and C
    2. the business unit France 2, with French companies D and E
    3. the business unit Spain, with Spanish companies A, B, C etc.
    4. the business unit Portugal, with Portuguese companies A, B, C etc.
    Our plan conso group hierarchy is as below:
    We have a zone FRANCE (the zone France/Spain/Portugal does not exist anymore) that includes:
    1. the business unit France 1, with French companies A, B and C
    2. the BU France 2, with French companies D and E
    In my query I want to compare a business unit's profit and loss, forecast versus plan.
    If I select as a consolidation group, for example, the business unit France 1 (which no longer has the same zone above it but still exists in the hierarchy), I get the following error message:
    "BW server error
    Exception condition : "HIERARCHY NOT FOUND" raised."
    My consolidation group characteristic is set up in both columns (forecast and plan) with a hierarchy node variable based on the right hierarchy version and the right hierarchy key date; that is why I am now lost...
    I have doubts regarding the feasibility of such a query...
    Many thanks in advance!!
    Armelle

    Thanks very much for your quick answer Naveen.
    Actually, I saw with our technical team yesterday that we may have an issue when creating a new conso group hierarchy:
    SAP note 957506
    This problem is caused by program error in the CL_RRHI_INCL_CREATOR_TID class.
    The error can occur if a hierarchy contains new characteristic values and if additional hierarchies for the characteristic are activated simultaneously.
    We may need a support package to solve this.
    As you said, both hierarchies are based on the same characteristic.
    I understand that we need pro forma presentation to compare our figures.
    But we always ask for Business units level figures, and the BUs level did not change, only the zones level.
    Anyway, I am blocked for the moment, since nothing can work with this program error (even all my old queries for which the consolidation group is mandatory).
    I leave my question open and will let you know if everything worked (or not) as I had expected.

  • Odd issue when using UDT (user defined type)

    Hi all,
    11g.
    I ran into an odd issue when using a UDT. I have these 4 schemas: USER_1, USER_2, USER_A, USER_B.
    Both USER_1 and USER_2 have a UDT (actually a nested table):
    CREATE OR REPLACE TYPE TAB_NUMBERS AS TABLE OF NUMBER(10);
    USER_A has a synonym pointing to the type in USER_1:
    create or replace synonym TAB_NUMBERS for USER_1.TAB_NUMBERS;
    USER_B has a synonym pointing to the type in USER_2:
    create or replace synonym TAB_NUMBERS for USER_2.TAB_NUMBERS;
    Both USER_A and USER_B have a procedure which uses the synonym:
    CREATE OR REPLACE PROCEDURE proc_test (p1 in tab_numbers)
    IS
    BEGIN
      NULL;
    END;
    And in the C# code:
    OracleConnection conn = new OracleConnection("data source=mh;user id=USER_A;password=...");
    OracleCommand cmd = new OracleCommand();
    cmd.Connection = conn;
    cmd.CommandText = "proc_test";
    cmd.CommandType = CommandType.StoredProcedure;
    OracleParameter op = new OracleParameter();
    op.ParameterName = "p1";
    op.Direction = ParameterDirection.Input;
    op.OracleDbType = OracleDbType.Object;
    op.UdtTypeName = "TAB_NUMBERS";
    Nested_Tab_Mapping_To_Object nt = new Nested_Tab_Mapping_To_Object();
    nt.container = new decimal[] { 1, 2 };
    op.Value = nt;
    ...
    This code works fine, but it raises an error when I change the connection string from USER_A to USER_B. The error says:
    OCI-22303: type ""."TAB_NUMBERS" not found
    Interestingly, if I change op.UdtTypeName = "TAB_NUMBERS"; to op.UdtTypeName = "USER_2.TAB_NUMBERS";, the error is gone and everything works fine.
    Anyone has any clues?
    Thanks in advance.

    Erase and reformat the ext HD. Then, redo the SuperDuper! backup.

  • Performance issues when upgrading from dbxml 2.3.8 to 2.4.13

    I have recently upgraded my dbxml distribution from 2.3.8 to 2.4.13 (including the latest patch). I have noticed that many of the queries I issue take longer than they used to. Specifically, if the query is complex, it takes 10 times as long as it did with 2.3.8. Also, I have noted a difference in speed between issuing the query via the dbxml shell (the query takes ~45 seconds) vs. issuing it through a C++ interface (the query takes 4+ minutes). The query I am issuing is:
    for $i in collection('projectDatabase.dbxml')/project
    where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and ((obsblock/reqRaCoverage/@low <= 2.61799 and obsblock/reqRaCoverage/@low >= 0.654498) or (obsblock/reqRaCoverage/@high <= 2.61799 and obsblock/reqRaCoverage/@high >= 0.654498) or (0.654498 <= obsblock/reqRaCoverage/@high and 0.654498 >= obsblock/reqRaCoverage/@low) or ((obsblock/reqRaCoverage/@high < obsblock/reqRaCoverage/@low) and ((0.654498 <= obsblock/reqRaCoverage/@high) or (2.61799 <= obsblock/reqRaCoverage/@high) or (0.654498 >= obsblock/reqRaCoverage/@low) or (2.61799 >= obsblock/reqRaCoverage/@low)))) and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test']
    return $i
    if I simplify the query by removing references to the obsblock/reqRaCoverage/@low and obsblock/reqRaCoverage/@high nodes:
    for $i in collection('projectDatabase.dbxml')/project
    where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test']
    return $i
    it returns much faster. I am wondering if this is an issue with optimization of the complex query.
    The database is fully indexed and was re-indexed when I upgraded to 2.4.13

    Doug,
    I've got your container. It's a wholedoc container with document indexes (a lot of them). Here's what I see for query speeds for your large, slow query using the dbxml shell.
    Query is:
    query "for $i in collection('pdb.dbxml')/project where $i[obsblock/obsblockStatus eq 'INCOMPLETE' and obsblock/receiverBand eq '3MM' and obsblock/remainingTime >= 1.0 and ((obsblock/reqRaCoverage/@low <= 2.61799 and obsblock/reqRaCoverage/@low >= 0.654498) or (obsblock/reqRaCoverage/@high <= 2.61799 and obsblock/reqRaCoverage/@high >= 0.654498) or (0.654498 <= obsblock/reqRaCoverage/@high and 0.654498 >= obsblock/reqRaCoverage/@low) or ((obsblock/reqRaCoverage/@high < obsblock/reqRaCoverage/@low) and ((0.654498 <= obsblock/reqRaCoverage/@high) or (2.61799 <= obsblock/reqRaCoverage/@high) or (0.654498 >= obsblock/reqRaCoverage/@low) or (2.61799 >= obsblock/reqRaCoverage/@low)))) and obsblock/arrayConfiguration eq 'C' and projectID ne'opnt' and projectID ne'rpnt' and projectID ne'tilt' and projectID ne'fringe' and projectID ne'test'] return $i"
    2.3.11 -- 2.4 seconds
    2.4.13 -- 60 seconds
    That is a significant slowdown and it needs further investigation but is almost certainly related to the fact that it is wholedoc storage. The optimizer appears to be choosing unwisely. This will take a few days to work out.
    I also changed the container to be node storage with node indexes and the times went to:
    2.3.11 -- 13 sec (slower than wholedoc)
    2.4.13 -- 40 sec (faster than wholedoc)
    I do know why your application is slower than the dbxml shell. There is a new flag that should almost always be used for wholedoc container queries -- DBXML_DOCUMENT_PROJECTION. Add that to your query execution.
    Another thing -- query preparation is a bit slower in 2.4 so you should use prepared queries whenever possible to amortize that cost.
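    To make those two suggestions concrete, here is a rough C++ sketch (prepared query plus the projection flag), assuming the 2.4 C++ API; container creation, a real query, and error handling are omitted:
    #include <dbxml/DbXml.hpp>
    using namespace DbXml;

    int main() {
        XmlManager mgr;
        // the container must be open for collection() to resolve
        XmlContainer cont = mgr.openContainer("projectDatabase.dbxml");
        XmlQueryContext ctx = mgr.createQueryContext();
        // prepare once and re-use to amortize 2.4's higher preparation cost
        XmlQueryExpression expr = mgr.prepare(
            "collection('projectDatabase.dbxml')/project"
            "[obsblock/obsblockStatus eq 'INCOMPLETE']", ctx);
        // DBXML_DOCUMENT_PROJECTION (with lazy docs) for wholedoc containers
        XmlResults res = expr.execute(ctx, DBXML_DOCUMENT_PROJECTION | DBXML_LAZY_DOCS);
        XmlValue v;
        while (res.next(v)) { /* consume each result */ }
        return 0;
    }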
    Regards,
    George

  • Solr collections disappearing

    Yesterday I created 7 Solr collections. I loaded them up with data and started testing the multi-collection search against them, using a webservice to access the search. I tested the searching from my local machine and was happy. This will finally resolve an issue we have had for 8 million years!!! This morning I came in and tested it again; still good. So I pushed the update to a semi-live state and allowed some traffic to it. Almost immediately I started getting "Cannot perform web service invocation search." errors. When I went to the CF Administrator to see what was going on, none of the collections were listed. All the physical files still existed, but the collections were no longer registered. I restarted the Solr service and they all came back. Again, I pushed a little traffic and they disappeared immediately.
    There are no entries in the error logs for the server doing the search nor for the server trying to serve the search.
    The server is running CF 9.0 Enterprise, 64-bit, with 7 GB of RAM allocated to the JVM. The largest collection is 554,000 documents.
    Has anyone seen this behavior before?
    Thanks,
    Scott

    For anyone else who runs into this issue, I found the solution in an article by Mark Kruger at http://www.coldfusionmuse.com/index.cfm/2010/4/4/solr-troubleshooting-coldfusion-9.
    The error was that while CF had plenty of RAM in its JVM, Solr only allocates 256 MB by default. By updating the solr.lax configuration file to allow more RAM for Solr, I now have stability and a great solution to our problem!
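    For reference, the setting is the Java options line in solr.lax; going from memory of that article, it looks something like the line below (verify the property name and values in your own copy before editing), and the ColdFusion 9 Solr service needs a restart afterwards for the new heap size to take effect:
    lax.nl.java.option.additional=-Xms256m -Xmx512m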
    Scott

  • CF9 - multiple Solr collections search, remove dupes

    Hello, everyone.
    I'm trying to search more than one Solr collection for a keyword (let's say "Item A"), but I want to remove any duplicates and keep the ranking.
    Hypothetically, let's say each collection has "Item A" in it, each ranked slightly different in their respective collection (99.9, 92.7, and 89.5).  How would I remove the two lower ranked "Item A"'s from the CFSEARCH?
    Thank you,
    ^_^

    I think I've got it.
    For anyone else who has this same need, try QoQ.
    <cfsearch name="cfSolrSearch" collection="collectionOne,collectionTwo,collectionThree" criteria="Item A">
    <cfquery name="removeDupes" dbtype="query">
      SELECT DISTINCT title, cast(score as decimal) as score
      FROM cfSolrSearch
      ORDER BY score DESC
    </cfquery>
    <cfdump var="#removeDupes#">
    ^_^
    PS.  In the first post, I mentioned "rank"; I meant "score".

  • How to deploy configuration manager client package to a query based collection

    Hi
    I have created one OU-based collection. Now I want to deploy the client package to this collection so that the client automatically gets installed whenever a new machine is added to the OU, and thus to the collection.
    But the issue is with the Configuration Manager client package deployment: we can't deploy the default Configuration Manager client package to the collection (the Deploy option is grayed out). So I created a new Configuration Manager client package and deployed that package to the query-based collection.
    Now, when a new machine is added to this collection, the client does not appear to deploy to it. Please help.

    If you want all the devices in the OU to get the client, you can use a Computer Startup Script to deploy it.  Everybody just uses Jason's since it does everything and is well documented.
    http://blog.configmgrftw.com/configmgr-client-startup-script/
    I hope that helps,
    Nash
    Nash Pherson, Senior Systems Consultant, Now Micro
    If you've found a bug or want the product to work differently, share your feedback.
    If this post was helpful, please click "Vote as Helpful".

  • Delay when querying from CUBE_TABLE object, what is it?

    Hi Guys,
    We are using Oracle OLAP 11.2.0.2.0 with an 11g Cube, 7 Dimensions, Compressed and partitioned by Month.
    We have run into a performance issue when implementing OBIEE.
    The main issue we have is a delay while drilling on a hierarchy. Users have been waiting 7-12 seconds per drill on a hierarchy, and the query is only returning a few cells of data. We have managed to isolate this to slow performing queries on CUBE_TABLE.
    For example, the following query returns one cell of data:
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('E01' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11' ) ;
    1 Row selected in 4.524 Seconds
    Note: FINSTMNT_VIEW is the automatically generated cube view.
    CREATE OR REPLACE FORCE VIEW "SCHEMA1"."FINSTMNT_VIEW" ("BASE","REPORT_TYPE", "COMPANY", "SCENARIO", "PRODUCT", "ACCOUNT", "SITE", "TIME")
    AS
    SELECT "BASE", "REPORT_TYPE", "COMPANY", "SCENARIO", "PRODUCT", "ACCOUNT", "SITE", "TIME"
    FROM TABLE(CUBE_TABLE('"SCHEMA1"."FINSTMNT"') ) ;
    If we increase the amount of data returned by adding to the query, it only increases the query time by 0.4 seconds:
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('E01' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885', 'C010886', 'C010891', 'C010892', 'C010887', 'C010888', 'C010897', 'C010893', 'C010890', 'C010894', 'C010896', 'C010899' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11' ) ;
    12 rows selected - In 4.977 Seconds
    If we increase the data returned even more:
    SELECT FINSTMNT_VIEW.BASE, FINSTMNT_VIEW.REPORT_TYPE, FINSTMNT_VIEW.COMPANY, FINSTMNT_VIEW.SCENARIO, FINSTMNT_VIEW.PRODUCT, FINSTMNT_VIEW.ACCOUNT, FINSTMNT_VIEW.SITE, FINSTMNT_VIEW.TIME
    FROM "SCHEMA1".FINSTMNT_VIEW FINSTMNT_VIEW
    WHERE
    FINSTMNT_VIEW.REPORT_TYPE IN ('MTD' )
    AND FINSTMNT_VIEW.COMPANY IN ('ET', 'E01', 'E02', 'E03', 'E04' )
    AND FINSTMNT_VIEW.SCENARIO IN ('ACTUAL' )
    AND FINSTMNT_VIEW.PRODUCT IN ('PT', 'P00' )
    AND FINSTMNT_VIEW.ACCOUNT IN ('APBIT' )
    AND FINSTMNT_VIEW.SITE IN ('C010885', 'C010886', 'C010891', 'C010892', 'C010887', 'C010888', 'C010897', 'C010893', 'C010890', 'C010894', 'C010896', 'C010899' )
    AND FINSTMNT_VIEW.TIME IN ('JUN11', 'JUL11', 'AUG11', 'SEP11', 'OCT11', 'NOV11', 'DEC11', 'JAN12') ;
    118 rows selected - In 14.213 Seconds
    If we take the time for each query and divide by the number of rows, we can see that querying more data results in a much more efficient query:
    Time / rows returned (seconds per row):
    1 Row - 4.524
    12 Rows - 0.4147
    118 Rows - 0.120449153
    It seems like there is an initial delay of approx 4 seconds when querying the CUBE_TABLE object. Using AWM to query the same data using LIMIT and RPR is almost instantaneous...
    Can anyone explain what this delay is, and if there is any way to optimise the query?
    Could it be the AW getting attached before each query?
    Big thanks to anyone that can help!

    Thanks Nasar,
    I have run a number of queries with logging enabled; the things you mentioned all look good:
    Loop Optimization: GDILoopOpt     COMPLETED
    Selection filter: FILTER_LIMITS_FAST     7
    ROWS_FAILED_FILTER     0
    ROWS_RETURNED     1
    Predicates: 7 pruned out of 7 predicates
    The longest action I have seen in the log is the PAGING operation... but I do not see this on all queries.
    Time Total Time OPERATION
    2.263     27.864          PAGING     DYN_PAGEPOOL     TRACE     GREW     9926KB to 59577KB
    1.825     25.601          PAGING     DYN_PAGEPOOL     TRACE     GREW     8274KB to 49651KB
    1.498     23.776          PAGING     DYN_PAGEPOOL     TRACE     GREW     6895KB to 41377KB
    1.232     22.278          PAGING     DYN_PAGEPOOL     TRACE     GREW     5747KB to 34482KB
    1.17     21.046          PAGING     DYN_PAGEPOOL     TRACE     GREW     4788KB to 28735KB
    1.03     19.876          PAGING     DYN_PAGEPOOL     TRACE     GREW     3990KB to 23947KB
    2.808     18.846          PAGING     DYN_PAGEPOOL     TRACE     GREW     3325KB to 19957KB
    What is strange is that the cube operation log does not account for all of the query time. For example:
    SELECT "BASE_LVL" FROM TABLE(CUBE_TABLE('"EXAMPLE"."FINSTMNT"'))
    WHERE
    "RPT_TYPE" = 'MTD' AND
    "ENTITY" = 'ET' AND
    "SCENARIO" = 'ACTUAL' AND
    "PRODUCT" = 'PT' AND
    "GL_ACCOUNT" = 'APBIT' AND
    "CENTRE" = 'TOTAL' AND
    "TIME" = 'YR09';
    This query returns in 6.006 seconds using SQL Developer. If I then take the CUBE_OPERATION_LOG for this query and subtract the start time from the end time, I only get 1.67 seconds. This leaves 4.3 seconds unaccounted for... This is the same with my other queries; see actual time and logged time below:
    Query     Actual     Logged      Variance
    S3     6.006     1.67     4.336
    L1     18.128     13.776     4.352
    S1     4.461     0.203     4.258
    L2     4.696     0.39     4.306
    S2     5.882     1.575     4.307
    Any ideas on what this could be or how I can capture this 4.3 second overhead?
    Your help has been greatly appreciated.

  • Error while trying to create a Solr collection (java.io.IOException: The device is not ready)

    Hello, everyone.
    I'm trying to create a .cfm document that will erase a specific Solr collection and then re-create it, index it, and optimize it.
    The delete works flawlessly. 
    Unfortunately, creating it isn't working at all.  I get the following message:
    Unable to create collection publicsearch - An error occurred while creating the collection: java.io.IOException: The device is not ready.
    This is a Windows 7 server running Apache and CF Server 9.0.1 (the version before Verity was cut).  I'm not getting any more details than the one error message.  Any idea of what could be causing this?  I've Googled it and I'm seeing a lot of similar issues, but not quite what I have happening.
    Thanks,
    ^_^

    Never mind... no one bothered to inform the developer that the path to the collection AND the path to the directory that the collection indexes are different on this other server.
    Fixed.

  • Issue when installing SQL Server Express 2012 - "The requested control is not valid for this service" + "Could not find the database engine startup handle"

    Good morning,
    I'm experiencing the following issue when installing Microsoft SQL Server Express 2012 (with tools, SQLEXPRWT_x86_ENU.exe) on my company's laptop:
    Installation goes fine until around the end of the progress bar, where it stops on the setup of
    SqlEngineDBStartConfigAction_install_configrc_Cpu32
    giving the message "The requested control is not valid for this service" 7-8 times, even when pressing "Cancel".
    After this, I receive one last message, "Could not find the database engine startup handle", then installation ends with failures; in particular, the Database Engine and Server Replication failed to be installed.
    I've put the error log I received after the install in my SkyDrive;
    I'm at your disposal if you need further information,
    thank you in advance
    Best regards
    Francesco

    Well, I just ran into this issue and the problem was lack of admin rights. It was my company's laptop, so I had the setup initiated by my company's IT team with admin rights. However, upon completion of setup, I got the same error messages as stated above:
    SqlEngineDBStartConfigAction_install_configrc_Cpu32
    giving the message "The requested control is not valid for this service" 7-8 times, even when pressing "Cancel".
    After this, I received one last message, "Could not find the database engine startup handle", then installation ended with failures; in particular, the Database Engine and Server Replication failed to be installed.
    Also, if you open SQL Server Configuration, the status of the service is "Change Pending" and you would not be able to set the startup login type to Local Service/System/Network.
    Then I just got my account added as Local Admin, tried to start the service, and was able to.
    However, I am not sure whether the same was the case for you.
    Please mark the answer as helpful if I have answered your query. Thanks and regards, Kartar Rana

Maybe you are looking for

  • Browse and handle jar files?

    I'm going to make a little app to help some morons I know. Pretty much what I want it to do is to make you able to browse files on your HDD and then move them into a .jar file. I know terminal commands for this but is there a way to make it really ea

  • Publish Error after copying domain file from one mac to another

    Hello there, so here is the problem: I have had iweb installed on a MBP, and a website up and running. Now that I just bought an iMac I wanted to transfer all the iWeb over to the iMac and make it the main machine. I was doing all the transfer work m

  • Message lost during Message Send in weblogic 9.2 JMS

    I am using Weblogic Server 9.2 in a clustered environment (2 nodes in separate machines). The application deployed has ejb timers which picks up data from DB, and sends text messages (very small size as its just an ID) to a Queue (using the JMS APIs

  • Table in which KP26 values are stored

    Hi, We are writing a report and we normally maintain fixed values in KP26 as labour rate for the whole year, for us we have diff Controlling area curr and Object currency .. i would like to know from which table i need to get the values for

  • EAP-TLS Questions....

    Hi all, My setup is like this.. Laptop - LWAPP - WLC - ACS - AD I'm using CA to generate certificate.. I have configured EAP-TLS on WLC & ACS SE. Everything is working fine ie when i issue a certificate from CA on my AD login name & install that cert