Ridiculous query

Have you ever heard that constraints can change your query results? Here is one that happened on my 10g database. We found that a query with a self-join returned a wrong result (in this case it returns 254; the correct result is 253). But if I drop the associated PK constraint, I get the correct result; if I add the constraint back, the result is wrong again. If you look at the execution plans, you'll see that Oracle uses a different plan in each case. I don't see any corruption in my database, and I didn't get any errors when I ran these queries. I have rebuilt the index several times and run ANALYZE TABLE ... VALIDATE STRUCTURE, but
they did not help. Any thoughts about this?
SQL> select count(*)
2 from location.location room, location.location dorm, location.location campus, location.location school
3 where room.location_active_yn_flag = 'Y'
4 and room.parent_location_id = dorm.location_id
5 and dorm.parent_location_id = campus.location_id
6 and campus.parent_location_id = school.location_id
7 and (room.location_type_id = 18 or room.location_id = 1)
8 and (dorm.location_type_id = 17 or dorm.location_id = 1)
9 --and (campus.location_type_id = 16 or campus.location_id = 1)
10 --and (school.location_type_id = 6 or school.location_id = 1);
COUNT(*)
254
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=19 Card=1 Bytes=80
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=19 Card=84 Bytes=6720)
3 2 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=172 Bytes=6708)
4 2 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=259 Bytes=10619)
SQL> ALTER TABLE location.location
drop CONSTRAINT pk_location;
2
Table altered.
### Run above query again
COUNT(*)
253
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=38 Card=1 Bytes=119)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=38 Card=84 Bytes=9996)
3 2 HASH JOIN (Cost=28 Card=84 Bytes=8904)
4 3 HASH JOIN (Cost=19 Card=84 Bytes=6720)
5 4 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=172 Bytes=6708)
6 4 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=259 Bytes=10619)
7 3 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=1639 Bytes=42614)
8 2 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=1639 Bytes=21307)
SQL> ALTER TABLE location.location
ADD CONSTRAINT pk_location PRIMARY KEY (location_id);
2
Table altered.
### Run above query again
COUNT(*)
254
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=19 Card=1 Bytes=80
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=19 Card=84 Bytes=6720)
3 2 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=172 Bytes=6708)
4 2 TABLE ACCESS (FULL) OF 'LOCATION' (TABLE) (Cost=9 Card=259 Bytes=10619)

Marcus,
Thank you for your reply. I have checked the bug you mentioned, but I don't think it fits my case. Here, the primary key changed the execution plan and gave me wrong results. I have contacted Oracle Support about this issue and submitted the trace file, but they asked me to provide the database so they can build a test case. We have seen this problem on 10.1.0.2 (Solaris), 10.1.0.3 (Linux) and 10.1.0.4 (Solaris) with the same query, but an expdp/impdp of just that one table does not reproduce the error. No errors are returned while this query runs, which leaves me with little confidence in all the other queries in my database. I would like some input from the experts to figure out exactly what is going wrong here.
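One way to narrow this down (my own sketch, not something from the thread) is to diff the row sets produced by the two plans without dropping the constraint. The NO_INDEX hints below are an assumption about how to reproduce the no-PK plan while the PK is still in place:

```sql
-- Rows reported by the (wrong) constraint-assisted plan but not by the
-- hinted full-scan plan; if this returns a ROWID, selecting that row
-- directly should show which predicate is being mis-evaluated.
SELECT room.ROWID
FROM   location.location room, location.location dorm,
       location.location campus, location.location school
WHERE  room.location_active_yn_flag = 'Y'
AND    room.parent_location_id   = dorm.location_id
AND    dorm.parent_location_id   = campus.location_id
AND    campus.parent_location_id = school.location_id
AND    (room.location_type_id = 18 OR room.location_id = 1)
AND    (dorm.location_type_id = 17 OR dorm.location_id = 1)
MINUS
SELECT /*+ NO_INDEX(room) NO_INDEX(dorm) NO_INDEX(campus) NO_INDEX(school) */
       room.ROWID
FROM   location.location room, location.location dorm,
       location.location campus, location.location school
WHERE  room.location_active_yn_flag = 'Y'
AND    room.parent_location_id   = dorm.location_id
AND    dorm.parent_location_id   = campus.location_id
AND    campus.parent_location_id = school.location_id
AND    (room.location_type_id = 18 OR room.location_id = 1)
AND    (dorm.location_type_id = 17 OR dorm.location_id = 1);
```

Note that MINUS deduplicates, so if the difference is a duplicated join row rather than a phantom one, comparing COUNT(*) with and without the hints would be the complementary check.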

Similar Messages

  • Help with processing groups of records in database

Ok, I'm at work right now trying to finish up a project for my class. It is a basic class to process a table generated from Access by an inner-join SQL statement, ordered by student id so that each student's records are grouped together. I have written the SQL statement and the logic to access the database and return the array of row objects to the command line, so I know it is grouping and returning info properly. I have written the conditional if statement to get the gradePoints for A = 4, B = 3, C = 2, D = 1, and F = 0 in a class method (keep in mind the generated table consists of SID, NAME, COURSE#, COURSETITLE, and LETTERGRADE). Now I just need to process each group of students individually and calculate their overall GPAs. Herein lies the problem. I imagine I need to define current-student variables, like
    currentID = s[0].getStudentID();
    currentName = s[0].getStudentName();
then loop through the values of the array with something like:
for (int j = 0; j < s.length; ++j) {
    if (s[j].getStudentID() != currentID) {
        // process the grade records for the group, then reset the
        // current student to the next in line with another variable
        // assignment:
        currentID = s[j].getStudentID();
        currentName = s[j].getStudentName();
    } // end of if
    // process the row object for the student created by this loop
    // and then add it to the output string
} // end of for loop
Ok, so where I need help is the logic to process the grade records for the individual groups and then process the row object. We haven't gone over anything like this in class, and it is the last thing on my list to do, so any help would be appreciated. Also, if you think I'm an idiot, please keep it to yourself; it's not fun for people who really need help, and who really would like to know how to do something for future reference, to be belittled. Thanks again
    Matt
    Message was edited by:
    djmd02

sorry, i guess i should explain this better... I know SQL relatively well, and the SQL statement for my query string is ridiculously long with two inner joins; what it returns is the sid, name, coursenumber, coursename, and lettergrade
So if you know SQL so well, why haven't you created a VIEW using this "ridiculously long" query and made your life easier?
Two inner joins? Nothing extraordinary about that.
    this query
    generated table is then used to populate an array of
    objects for each entry in the table. They are already
    put in order depending on sid by the sql statement
    (for example)
    11111 matt deClercq 3380 intro to java A
    11111 matt deClercq 3382 database management A
    11112 john doe 3380 intro to java A>
and so on. The problem i am having is within the for loop to detect the end of each student and process their grades for each class and calculate the gpa for the whole group of grades that that student has.
SQL is a declarative language, not procedural. If you're "looping", it suggests to me that you're pulling all that data into the middle tier and doing the calculation in Java.
    I'm suggesting that there's a perfectly good way to do this calculation in SQL without using your ridiculous query. You might be better served if you try and figure out what that is.
Hope this makes things a little more clear and makes me look a little bit more intelligent than i seem
Stop worrying about what people think and concentrate on your problem. You're a student. It's unlikely that you're going to appear to be on the level of Bill Joy at this point in your career. I'd be a lot more impressed if you'd stop whining about how people perceive you.
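For what it's worth, the grouped GPA calculation the respondent is alluding to can be done entirely in SQL, along these lines (a sketch in Oracle-flavored SQL; the view name is a hypothetical stand-in for the poster's two-inner-join query):

```sql
-- One row per student: GPA = average of the grade points over all of
-- that student's courses (A=4, B=3, C=2, D=1, F=0), no Java loop needed.
SELECT   sid,
         name,
         AVG(CASE lettergrade
               WHEN 'A' THEN 4
               WHEN 'B' THEN 3
               WHEN 'C' THEN 2
               WHEN 'D' THEN 1
               ELSE 0
             END) AS gpa
FROM     student_grades   -- hypothetical view over the inner-join query
GROUP BY sid, name
ORDER BY sid;
```

The database then does the grouping and control-break logic that the hand-written loop was trying to reproduce.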

  • Query on flashback_transaction_query table taking ridiculously long time

    Oracle 10.2.0.3.0 on Solaris :
    I am trying to use Flashback Transaction Query table to track transactions and generate the undo_sql within a time period for an entire schema using the following sql :
    SELECT XID,START_SCN,COMMIT_SCN,OPERATION,TABLE_NAME,TABLE_OWNER,LOGON_USER,UNDO_SQL
    FROM flashback_transaction_query
    WHERE start_timestamp >= TO_TIMESTAMP ('2007-08-16 11:50:00AM','YYYY-MM-DD HH:MI:SSAM')
    AND start_timestamp <= TO_TIMESTAMP ('2007-08-16 11:55:00AM','YYYY-MM-DD HH:MI:SSAM')
    AND TABLE_OWNER = 'JEFFERSON';
None of my attempts to run this query have succeeded so far; it just keeps executing and never seems to end.
The longest I have waited is 50 minutes before cancelling it.
I did read through Metalink doc ID 270270.1 (which I think is close); however, the solution is not relevant to my requirement.
Any suggestions would be of help. Thanks

I found that if I did the following:
select t2.*
from
  ( select taddr
    from v$session
    where username = <username>
  ) t1
  inner join
  v$transaction t2
  on t1.taddr = t2.addr
/
... and used the XID value in this:
select *
from flashback_transaction_query
where xid = hextoraw('< the value of XID from above');
... that it would come back fast.
    But even then, I would have to wait a little bit before the update statement seemed to register elsewhere in the database. There was a delay. But once the update seemed to register -- and you reselected -- it was fast.
    I had no luck using those other columns in 10.1.0.5.
    I also ran DBMS_STATS.GATHER_FIXED_OBJECT_STATS and DBMS_STATS.GATHER_DICTIONARY_STATS but I do not know if they changed anything or if I just was not waiting long enough for the statement to register.
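The two lookups above can also be collapsed into a single statement that still drives the fixed-table access by XID (a sketch; it relies on v$transaction exposing the transaction id in its XID column, and, as noted above, rows may only appear in flashback_transaction_query after a short delay):

```sql
-- Undo information for transactions belonging to one user, probed by
-- XID so flashback_transaction_query is not scanned by timestamp.
SELECT q.xid, q.operation, q.table_name, q.undo_sql
FROM   v$session s
JOIN   v$transaction t ON t.addr = s.taddr
JOIN   flashback_transaction_query q ON q.xid = t.xid
WHERE  s.username = 'JEFFERSON';
```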

  • How to determine a sql query size to display a progress bar

I would like to show the progress of an SQL query within a JSP page.
Background:
I have a reporting web application where over 500 contacts can run reports based on different criteria such as date range.
I currently display a message stating 'executing query, please wait'; however, the users (hate users) do not seem to wait, and therefore decide to run the query all over again, which adds to my reporting server's query load (eventually this crashes, stopping all reports).
Problem:
The progress bar is not a problem; the question is how I would determine the size of the query at runtime, so I can put an estimated time onto my progress bar.

    Yes it's doable (we do it) but it sure ain't easy.
    We've got about 23,500,000 features (and counting) in a geodata database. Precise spatial selection algorithms are expensive. Really expensive.
    We cannot impose arbitrary limits on search criteria. If the client requires the whole database we are contractually obligated to provide it...
For online searches we use statistics to approximate the number of features which a given query is likely to return... more or less the same way that the query optimiser behind any half-decent (not mysql (5 at least)) database management system does.
We have a batch job which records how many features are linked to each distinct value of each search criterion... we just do the calculations (presuming a normal (flat) distribution) and...
... if the answer is more than 100,000 we inform the user that the request must be "batched", and give them a form to fill out confirming their contact details. We run the extract overnight and send the user an email containing a link which allows them to download the result the next morning.
... if the answer is more than a million features we inform the user that the request must be batched over the weekend... same deal as above, except we do it over the weekend to (a) discourage this; and (b) -- the official version -- ensure we have enough time to run the extract without impinging upon the maintenance window.
    ... if the answer is more than 5 million we display our brilliant "subscribe to our DVD service to get the latest version of the whole shebang every three months (or so), so you can kill your own blooody server with these ridiculous searches" form.
    Edited by: corlettk on Dec 5, 2007 11:12 AM
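The batch-job-plus-estimate approach described above might be sketched like this (entirely hypothetical table and column names; the estimation rule is a simplification of the flat-distribution calculation mentioned):

```sql
-- Nightly batch: record how many features carry each distinct value
-- of each search criterion.
INSERT INTO search_stats (criterion, val, feature_count)
SELECT 'feature_type', feature_type, COUNT(*)
FROM   features
GROUP  BY feature_type;

-- At search time: a cheap upper bound on the result size for the
-- user's conjunctive criteria, used to decide batch vs. online.
SELECT MIN(feature_count) AS estimated_rows
FROM   search_stats
WHERE  (criterion = 'feature_type' AND val = :p_feature_type)
OR     (criterion = 'region'       AND val = :p_region);
```

MIN over the matching criteria is the crudest usable estimate; multiplying per-criterion selectivities together would be closer to what a real optimizer does under the same flat-distribution assumption.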

  • Webi query performance in SAP BO 4.0 sp6

    Hello,
I'm facing an issue with the WebI reports in our BI 4 SP6 environment. The WebI report data fetch time is ridiculously slow, whereas when these queries are run directly on Oracle 10g the results come back much faster than through WebI. The same reports used to fetch data faster a few months ago when we were on SP4.
Now my question is: what can I do to improve the query data fetch time in WebI? Is there some parameter that I can tweak in BO, or do I need to add more services? Please suggest. Thanks
    Following are the specs of the QA environment:
    We are using a clustered environment with 2 nodes (CUSNWA3T and CUSNWA3U) - these are the names of the nodes. Each node has 12 GB of RAM and each node is a VMware virtual machine running on windows server 2008. Each of these BO servers/nodes is having its own CMS server, I have broken down the APS into sub servers after facing issues with the DSL Bridge service.
    Regards,
    samique

    Andreas,
thanks for the reply. Yep, I agree with your suggestion about the RAM; is there any document that details how much RAM and processing power SAP BO 4.x requires?
I had to turn down the Explorer, Crystal Reports, AJS and Dashboard services just to provide for the APS. I did break down the APS as well, according to a document that I found on the SAP site.
Here is what worked for me: we use DW relational connections for the universes, and all our universes are UNX universes. In the connection there are options for Array bind size, Array fetch size and pooling timeout. I increased all of those (array bind size to 250, array fetch size to 32670, pooling timeout to 20 mins), and the dba set the ConnectInit to enable star schema transformation. Once all these were in place, the report which was loading in 20 mins started refreshing in 6 mins.
For now things seem to be working fine, but I will certainly raise the RAM request with management; it would help if you could provide some reference document. Thanks
    Regards,
    samique
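For reference, the "star schema transformation" ConnectInit mentioned above is presumably a session-level setting along these lines (an assumption; confirm the exact string the dba used):

```sql
-- Executed by BO on each new connection via the ConnectInit parameter.
ALTER SESSION SET star_transformation_enabled = TRUE;
```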

  • Another bug in Flex? (Application.parameters and query strings)

    I'm passing in two query string parameters in the source of SWFLoader and both of them are clumped together in the first parameter by application.parameters.
    But I switch the order of the parameters in the query string and both parameters are returned correctly:
    Case #1:
    Query String:  ?cfg=zzzzzzz54B&embed_div=x
    parameters.cfg: zzzzzzz54B&embed_div=x
    parameters.embed_div:  [nothing]
    Case #2:
    Query String:  ?embed_div=x&cfg=zzzzzzz54B
    parameters.cfg: zzzzzzz54B
    parameters.embed_div: x
Here are the actual debug commands:
    Dumper.info(this.url);
    Dumper.info(this.parameters.cfg)
    Dumper.info(this.parameters.embed_div);
    And output:
    (Case #1)
[INFO]: file:///C:/Program%20Files/WordRad234/chm/wordrad_kt/web%20pages/zzzzzzz5/rad_3xf.swf?cfg=zzzzzzz54B%26embed_div%3Dx (String)
    [INFO]: zzzzzzz54B&embed_div=x (String)
    [INFO]: (Object)
    (Case #2)
[INFO]: file:///C:/Program%20Files/WordRad234/chm/wordrad_kt/web%20pages/zzzzzzz5/rad_3xf.swf?embed_div=x&cfg=zzzzzzz54B (String)
    [INFO]: zzzzzzz54B (String)
    [INFO]: x (String)
    Something I just noticed: the equal sign after embed_div is replaced by %3D but only if embed_div comes last.

    NEVERMIND:
It was something I was doing to the source of SWFLoader beforehand (involving encodeURIComponent).
I have to say, I have many, many times thought something was a bug in Flex when it was in fact my code. In general, I think Flex/AS3 is an elegant and useful product. The sort of ad hoc tweaks that have to be done to avoid memory leaks, though, is ridiculous (though I do have that pretty much figured out as well).

  • Slow query - db_cache_size ?

    Hi,
    Oracle 9.2.0.5.0 ( solaris )
I've got a query which, when run on a production machine, runs very slowly (10 hours), but on a preproduction machine (with the same data) takes about a tenth of the time. I have confirmed that on both machines we are getting the same plan.
The only thing I can nail it down to is that in production I'm seeing a lot more "db file sequential read" wait events. Can I assume this is due to the blocks not being in, or not staying in, the cache?
When running on preprod, the hit ratio for the query is .90+; on production it drops down to .70-.80 (as per the query below).
I have plenty of memory available on the machine; would it be wise to size up the caches: db_cache_size, db_keep_cache_size, db_recycle_cache_size?
       SELECT (P1.value + P2.value - P3.value) / (P1.value + P2.value)
         FROM   v$sesstat P1, v$statname N1, v$sesstat P2, v$statname N2,
                v$sesstat P3, v$statname N3
         WHERE  N1.name = 'db block gets'
         AND    P1.statistic# = N1.statistic#
         AND    P1.sid = &sid
         AND    N2.name = 'consistent gets'
         AND    P2.statistic# = N2.statistic#
         AND    P2.sid = P1.sid
         AND    N3.name = 'physical reads'
         AND    P3.statistic# = N3.statistic#
         AND    P3.sid = P1.sid
    PRE-PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows   
      Parse        1      0.64       0.64          0          0          0           0      
      Execute      1      0.00       0.00          0          0          0           0      
      Fetch        2    186.92     329.88     162174    5144281          5           1      
      total        4    187.56     330.53     162174    5144281          5           1      
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                    160098        1.44        162.52
        db file scattered read                          1        0.00          0.00
        direct path write                              27        0.66          3.36
        direct path read                               97        0.00          0.02
        SQL*Net message from client                     2      985.79        985.79
    PRODUCTION
      call     count       cpu    elapsed       disk      query    current        rows
      Parse        1      2.41       2.34         79         16          0           0  
      Execute      1      0.00       0.00          0          0          0           0  
      Fetch        2    844.76   12305.06    1507519    5226663          0           1  
      total        4    847.17   12307.41    1507598    5226679          0           1  
      Elapsed times include waiting on following events:
        Event waited on                             Times   Max. Wait  Total Waited
        ----------------------------------------   Waited  ----------  ------------
        SQL*Net message to client                       2        0.00          0.00
        db file sequential read                   1502104        4.40      11849.13
        direct path write                             361        0.57          3.06
        direct path read                              361        0.05          0.88
        buffer busy waits                              36        0.02          0.17
        latch free                                      5        0.01          0.01
        log buffer space                                2        1.00          1.37
        SQL*Net message from client                     2      687.95        687.95
      Suggestions for further investigation more than welcome.
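One low-risk check before resizing anything: 9.2 can predict the effect of a bigger buffer cache for you via v$db_cache_advice (a sketch; it assumes the db_cache_advice parameter is ON and an 8K block size):

```sql
-- Estimated physical reads at candidate cache sizes for the DEFAULT
-- pool; look for the size where the curve flattens out.
SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = 8192
ORDER  BY size_for_estimate;
```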

    user12044475 wrote:
    Hi,
    Oracle 9.2.0.5.0 ( solaris )
    I've got a query which when run on a production machine runs very slow ( 10 hours ), but on a preproduction machine ( with same data ) takes about a 10th of the time. I have confirmed that on both machines we are getting the same plan.
    The only thing I can nail it down to, is that in production I'm seeing lots more "db file sequential read" wait events. Can I assume this is due to the blocks not being in/staying in the cache?
    There are more physical reads, and the average read time is longer. This may simply be a reflection of the fact that other people are working on the production database at the same time and (a) kicking your data out of the cache and (b) causing you to queue at the disc as they do their I/O. A larger cache MIGHT protect your data a little longer, and MAY reduce their I/O at the same time so that the I/Os are faster - but we have no idea what side effects might then appear.
It's also worth considering whether you did something as you transferred the data from production to pre-production that helped to improve query performance. (As a simple example, an export/import could have eliminated a lot of row migration, and the nature of your plan means you MIGHT be suffering a lot of excess I/O from "table fetch continued row".) So, how did you get the data from production to test, how long ago, what's happened to it since, and do you have any session statistics taken as you ran the two queries?
Since your execution plan (prediction) is a long way off the actual run time, though (even on the pre-production system), it's probably more important to work out whether you can make your query much more efficient before you make any dramatic changes to the system. I notice that you have three existence subqueries that appear at the end of the plan; such subqueries wreck the optimizer's arithmetic in your version of Oracle and can make it do very silly things. (See for example this blog note: http://jonathanlewis.wordpress.com/2006/11/08/subquery-selectivity )
The effect of the subqueries may (for example) be why you have a full tablescan on the second table in a nested loop join at one point in your query. The expectation of a vastly reduced number of rows may be why you are seeing nested loops all over the place when (possibly) a couple of hash joins would be more appropriate.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
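To illustrate the kind of construct Jonathan describes above (table names invented; the NO_UNNEST hint is just one standard way to experiment with how the subquery is costed):

```sql
-- An existence subquery of this shape gets a guessed selectivity in
-- 9i, which can make the optimizer expect far fewer rows than it
-- really gets back.
SELECT o.order_id
FROM   orders o
WHERE  o.status = 'OPEN'
AND    EXISTS (SELECT /*+ NO_UNNEST */ NULL
               FROM   order_lines l
               WHERE  l.order_id = o.order_id);
```

Comparing the plan with and without the hint (or with the subquery rewritten as a join) is a cheap way to see whether the guessed selectivity is what is driving the nested loops.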
    +"I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I'll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be."+
Isaac Asimov

  • Delete Row in WAD layout for input ready query

    Hi All
I am using WAD for planning applications. When I execute the web template, it displays the query in edit mode, which is fine; I am able to edit and insert records in the layout, but I didn't find any option for deleting a row. I couldn't see any command button for delete functionality in WAD.
Please help me out: how can I get delete functionality in my input-ready query opened in WAD?
    Thanks
    Tripple k

    Hi
Thanks for your help, but that is not going to help. By the way, I am surprised that SAP has removed these basic functions from IP while they are there in BPS, like add row / delete row. In WAD, if we don't have an option for a new line, we have to specify the number of new lines in advance; also, once the user has filled the new lines, the next line will not appear until he saves. This is ridiculous... I am not able to find any way out for these silly functions.
Thanks
    Tripple k

  • Modify IR based on my query/search

    Apex 4.x
    I am able to search for BLOB content using the pl/sql :
    declare
    v_name varchar2(100);
    v_doc blob ;
    begin
    select name, doc into v_name,v_doc
    from BLOB_TABLE
where CONTAINS(doc, :P2_GO) > 0;
    end;
    I am default displaying all the docs in IR with search disabled on it and using my own search text field and Go button. I have created process that executes this query(above) as well but not sure how to edit or modify the already existing IR to display based on result of the query.
    When page loads it should(and does now) display all the docs and then change(what I want it to do but dont know how) based on the search that is performed by end user.
    How do I achieve this ?

    LKSwetha wrote:
    Apex 4.x
    I am able to search for BLOB content using the pl/sql :
    declare
    v_name varchar2(100);
    v_doc blob ;
    begin
    select name, doc into v_name,v_doc
    from BLOB_TABLE
where CONTAINS(doc, :P2_GO) > 0;
    end;
    I am default displaying all the docs in IR with search disabled on it and using my own search text field and Go button. I have created process that executes this query(above) as well but not sure how to edit or modify the already existing IR to display based on result of the query.
    When page loads it should(and does now) display all the docs and then change(what I want it to do but dont know how) based on the search that is performed by end user.
    How do I achieve this ?We have a similar application. There is an HTML region with the Report Filter template, containing an item to get the text search criteria, a submit button, and the text operator is simply added to the IR query:
    select ...IR query...
    and contains(text, :p1_text_operator, 1) > 0
    This can generate an error message if the text operator is null, so we actually use 2 additional page items:
    P1_TEXT_OPERATOR is a hidden item
    P1_SEARCH_TERMS is the page item where users can enter their text criteria
    and a Before Regions computation to set P1_TEXT_OPERATOR to something ridiculous:
    nvl(:p1_search_terms, lpad('Z', 10, 'Z'))
    so that when no criteria are specified the IR just displays the standard When No Data Found message ("No documents match the search terms").
    In your case where you want to display all of the documents if no text filter is entered, just add a predicate that will return rows when no filter is specified:
    select ...IR query...
    and (   contains(text, :p1_text_operator, 1) > 0
         or :p1_search_terms is null)
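    The combined predicate boils down to "match the text operator, or pass every row through when no search terms were entered". A minimal Python sketch of that filter logic (hypothetical data, with a crude substring match standing in for Oracle Text scoring):

```python
def matches(doc_text, search_terms):
    """Mimic: contains(text, :p1_text_operator, 1) > 0 OR :p1_search_terms IS NULL."""
    if search_terms is None:  # no criteria entered -> pass every row through
        return True
    return search_terms.lower() in doc_text.lower()  # stand-in for Oracle Text matching

docs = ["quarterly report", "travel policy", "expense report"]

print([d for d in docs if matches(d, None)])      # no filter: all docs
print([d for d in docs if matches(d, "report")])  # filtered docs
```

    The same shape carries over to the IR query: the OR branch on the null page item is what keeps the "show everything" behaviour on initial page load.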

  • Creating a function and passing query value

    I have what I thought was a pretty easy to resolve situation:
    I want to concatenate two query fields, if the 2nd one isn't empty.
    I created a function:
    <cfargument name="q1" value='#query.q1#' />
    <cfargument name="q1a" value='#query.q1a#' />
    <CFSET variables.myPunct = ": ">
    <cfset variables.ResultVar="">
    <cfif Trim(arguments.q1) NEQ "">
    <cfset variables.ResultVar='#arguments.q1#'>
    </cfif>
    <cfif Trim(arguments.q1a) NEQ "">
    <cfif variables.ResultVar NEQ "">
    <cfset variables.ResultVar='#variables.ResultVar & variables.myPunct#'>
    </cfif>
    <cfset variables.ResultVar='#variables.ResultVar & arguments.q1a#'>
    </cfif>
    <cfreturn variables.ResultVar>
    This is basically just the example they provide in the online instruction, with the names changed.
    In the detail band of my report, I have an expression builder field containing: report.mytestfunction()
    When I run this, I get: Element Q1 is undefined in ARGUMENTS.
    I've tried this ninety different ways (literally). It seems very clear to me that query.q1 (for that matter, any of the query results) is NOT getting passed to the function. I have tried making the expression report.mytestfunction(query.q1). I have tried creating an input parameter.
    The documentation on this is ridiculously limited, considering that the ability to implement conditional logic depends entirely on the "function", as far as I can tell. I can in no way get the function to interface with the query results. If I set fixed values in the function, as opposed to trying to use the query variables, it outputs fine.
    Any ideas?

    That has got to be the only way I DIDN'T try, although I could swear I tried that, too. Maybe I didn't have the "required"? I don't know. I know it works now. For the record, FUNCTION:
    <cfargument name="q1" required="yes" />
    <cfargument name="q1a" required="yes" />
    <CFSET variables.myPunct = ": ">
    <cfset variables.ResultVar="">
    <cfif Trim(arguments.q1) NEQ "">
    <cfset variables.ResultVar='#arguments.q1#'>
    </cfif>
    <cfif Trim(arguments.q1a) NEQ "">
    <cfif variables.ResultVar NEQ "">
    <cfset variables.ResultVar='#variables.ResultVar & variables.myPunct#'>
    </cfif>
    <cfset variables.ResultVar='#variables.ResultVar & arguments.q1a#'>
    </cfif>
    <cfreturn variables.ResultVar>
    In the "Detail" band, called function:
    report.mytestfunction(query.q1, query.q1a)
    Thanks for the tip. I'm going to go take a long walk on a short pier now.
    max
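    For what it's worth, the conditional concatenation the function performs is simple once the arguments actually arrive; here is a minimal sketch of the same logic in Python (illustrative names, not CFML):

```python
def join_fields(q1, q1a, punct=": "):
    """Concatenate two query fields, inserting punctuation only when both are non-empty."""
    result = ""
    if q1.strip():
        result = q1
    if q1a.strip():
        if result:
            result += punct
        result += q1a
    return result

print(join_fields("Name", "Value"))  # both present -> joined with ": "
print(join_fields("Name", ""))       # second empty -> no punctuation added
```

    The thread's actual fix was in how the arguments are passed, not in this logic: the query columns must be given explicitly in the expression, as in report.mytestfunction(query.q1, query.q1a).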

  • LOV query problem (1.6)

    Hi,
    I have been pulling my hair out over this one!
    Background: I have an LOV that needs to return 4 different lists depending which of 4 radio buttons (P40_OPPORTUNITY_TOGGLE) has been selected, and who is logged in (FLOW_USER).
    The LOV is based on two views:
    * OPP_GEO_V (opportunity, sales consultant and geography for opp)
    SC_EMAIL_ADDRESS
    OPP_NUMBER
    OPPORTUNITY
    CUSTOMER
    DIVISION
    REGION
    COUNTRY
    * SC_GEO_V (sales consultant and geography for SC)
    EMAIL_ADDRESS
    DIVISION
    REGION
    COUNTRY
    SALES_ACCESS
    The four lists should be:
    * All opportunities in the same country as the user
    * All opportunities in the same region as the user (depending on SALES_ACCESS)
    * All opportunities in the same division as the user (depending on SALES_ACCESS)
    * Every opportunity where the user is listed against it (SC_EMAIL_ADDRESS)
    The query:
    select distinct o.opp_number||' - '||o.customer||' - '||o.opportunity d, o.opp_number||' - '||o.customer||' - '||o.opportunity r
    from opp_geo_v o, sc_geo_v s
    where
    (nvl(:P40_OPPORTUNITY_TOGGLE,'C') = 'C'
    AND upper(s.email_address) = upper(:FLOW_USER)
    and o.country = s.country)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'R'
    AND upper(s.email_address) = upper(:FLOW_USER)
    AND s.sales_access is not null
    and o.region = s.region)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'D'
    AND upper(s.email_address) = upper(:FLOW_USER)
    AND s.sales_access = 'D'
    AND o.division = s.division)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'S'
    AND upper(o.sc_email_address) = upper(:FLOW_USER))
    Problem: HTML DB will time out when trying to submit this query. The strange thing is, it was functioning well until last night when I added the final OR block (opp_toggle = 'S'). If I remove the first 3 OR blocks, it will submit fine. If I remove the bottom OR block, it will also submit fine. But for some reason it won't allow me to have all 4. Actually it won't allow any combination of the bottom OR block and any of the top 3...
    This could be something blatantly obvious or it might just be one of those oddities, but I'd appreciate any ideas.
    Thanks,
    Mark

    OK, I changed FLOW_USER to APP_USER; I think in this case it will be the same value (email address), and the functionality of the LOV is still the same, but if you insist =)
    Now this is odd and may give an idea of what is going on.....
    I have a table ALL_CLIENTS, which is one of the tables that makes up the view OPP_GEO_V (geographic_areas being the other). The fields that come from ALL_CLIENTS are:
    * SC_EMAIL_ADDRESS
    * OPP_NUMBER
    * OPPORTUNITY
    * CUSTOMER
    Now, if I alter the query so I pull the data from ALL_CLIENTS, and use OPP_GEO_V for looking up the geographic information for the opportunity (with a join between ALL_CLIENTS and OPP_GEO_V --- o.opp_number = c.opp_number) in each of the OR clauses, it WORKS! It sounds ridiculous that I'm joining a view with one of the tables it's based on, but I'm getting desperate here... This is probably a confusing description, so here's the query:
    select distinct c.opp_number||' - '||c.customer||' - '||c.opportunity d, c.opp_number||' - '||c.customer||' - '||c.opportunity r
    from opp_geo_v o, sc_geo_v s, all_clients c
    where
    (nvl(:P40_OPPORTUNITY_TOGGLE,'C') = 'C'
    AND upper(s.email_address) = upper(:APP_USER)
    and o.opp_number = c.opp_number
    and o.country = s.country)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'R'
    AND upper(s.email_address) = upper(:APP_USER)
    AND s.sales_access is not null
    and o.opp_number = c.opp_number
    and o.region = s.region)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'D'
    AND upper(s.email_address) = upper(:APP_USER)
    AND s.sales_access = 'D'
    and o.opp_number = c.opp_number
    AND o.division = s.division)
    OR
    (:P40_OPPORTUNITY_TOGGLE = 'S'
    AND upper(c.sc_email_address) = upper(:APP_USER)
    and o.opp_number = c.opp_number)
    As you can see, I no longer have to look up OPP_GEO_V to compare the email address in the bottom OR clause.. but why on earth would HTML DB object to me doing that? I feel like I'm looking in the wrong spot here...
    The only problem with this is it's unacceptably slow. There are currently 836 sales consultant records and 95718 opportunity records, which will be growing significantly once this goes into production, so I kinda need it to be speedy!
    I hope this post hasn't caused confusion ....
    Mark
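    One plausible explanation for the slowdown (an assumption, not verified against this schema): when an OR branch references only one of the row sources, no join predicate links the tables in the FROM list for that branch, so the database has to consider the full cross product before the OR conditions can be tested. A quick Python sketch of the row-count math, using the figures from the post:

```python
# With no join predicate linking two row sources inside an OR branch, the
# database must consider every pairing of the two sources before it can
# evaluate the branch conditions.
sales_consultants = 836
opportunities = 95_718

pairs = sales_consultants * opportunities
print(pairs)  # roughly 80 million row combinations
```

    Rewriting the four OR blocks as a UNION ALL of four independently joined queries is the usual workaround for this pattern, though that is a suggestion rather than the thread's resolution.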

  • SQL query to show collection paths and members

    Hi guys, I'm after a sql query that will export all the collections along with their path and the members of those collections.
    I already have something that will display all the collections and members but it's just a flat list. I'm really after something that will produce something like this:
    Collection                   SystemName
    Servers\Prod                 Server1
    Servers\Prod\App1            Server2
    Servers\Prod\App1            Server3
    Servers\Test                 Server4
    Servers\Dev                  Server5
    Testing\Phase1               Server6
    Testing\Phase1\Stage2        Server7
    Testing\Phase1\Stage2        Server8
    etc.
    Can this be done?
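    Conceptually, the path column comes from walking each collection's parent chain up to the root. A minimal Python sketch of that derivation (hypothetical collection data, not the actual ConfigMgr schema):

```python
# Hypothetical parent map: collection -> parent collection (None for roots).
parents = {
    "Servers": None,
    "Prod": "Servers",
    "App1": "Prod",
    "Test": "Servers",
}
members = [("App1", "Server2"), ("App1", "Server3"), ("Test", "Server4")]

def path(name):
    """Walk the parent chain to the root, then join the names with backslashes."""
    chain = []
    while name is not None:
        chain.append(name)
        name = parents[name]
    return "\\".join(reversed(chain))

for coll, system in members:
    print(path(coll), system)
```

    In SQL Server, this parent walk is typically expressed as a recursive common table expression over the collection/parent relationship, joined to the membership rows.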

    Hi Garth, appreciate the feedback, and I realise the report will probably be ridiculously long; however, the point is irrelevant. Whether it's 100 pages or 10,000 pages is not the issue; it's something I've been asked to produce.
    Actually, I will 100% disagree with you. Your job as the subject matter expert (SME) is to guide people to what they truly want. Blindly giving them something that is useless is not helping them do their job at all.
    I have found that managers and management truly appreciate it when you say, "What you are asking for is not what you truly want. You will get a report with 10,000 pages and you will be overloaded with data." You then have to follow up and ask, "How exactly will knowing this info help you do your job? What decisions will be made from this report?" etc. This is where you, as an SME, will shine and can truly help them do their job.
    This report is doable, but it is not a 5-minute task. Best of luck with the report.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • XPath query error - xml extension in schema

    Hi,
    I am getting the following error when I try to assign values to the input XML payload before the invoke activity.
    "{http://schemas.xmlsoap.org/ws/2003/03/business-process/}selectionFailure" has been thrown.
    <selectionFailure>
    <part name="summary" >
    <summary>XPath query string returns multiple nodes. According to BPEL4WS spec 1.1 section 14.3, The assign activity part and query /child::parameter/child::param1 should not return multipe nodes. Please check the BPEL source at line number "40" and verify the part and xpath query /child::parameter/child::param1. </summary>
    </part>
    </selectionFailure>
    The generated XPath resolves to a single node only (tested using XMLSpy). The only reason I can think of is that the XML schema in the WSDL extends a base XML type to a new type.
    It works fine if the extending XML type is merged with its base type and used as one single complex type.
    ------------------WSDL START
    <?xml version="1.0" encoding="UTF-8"?>
    <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://justdoit.ws.sunflower2.pageone.com" targetNamespace="http://justdoit.ws.sunflower2.pageone.com">
         <types>
              <xs:schema targetNamespace="http://justdoit.ws.sunflower2.pageone.com">
                   <xs:complexType abstract="true" name="BaseObject">
                        <xs:sequence>
                             <xs:element name="param1" nillable="true" type="xs:string"/>
                             <xs:element name="param2" nillable="true" type="xs:string"/>
                        </xs:sequence>
                   </xs:complexType>
                   <xs:complexType name="FooBar">
                        <xs:complexContent>
                             <xs:extension base="tns:BaseObject">
                                  <xs:sequence>
                                       <xs:element name="param3" nillable="true" type="xs:string"/>
                                       <xs:element name="param4" nillable="true" type="xs:string"/>
                                       <xs:element name="param5" nillable="true" type="xs:string"/>
                                       <xs:element name="param6" nillable="true" type="xs:string"/>
                                  </xs:sequence>
                             </xs:extension>
                        </xs:complexContent>
                   </xs:complexType>
              </xs:schema>
         </types>
         <message name="justDoItRequest">
              <part name="parameter" type="tns:FooBar"/>
         </message>
         <message name="justDoItResponse">
              <part name="parameter" type="tns:FooBar"/>
         </message>
         <portType name="justDoItPort">
              <operation name="justDoIt">
                   <input message="tns:justDoItRequest"/>
                   <output message="tns:justDoItResponse"/>
              </operation>
         </portType>
         <binding name="justDoItBinding" type="tns:justDoItPort">
              <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
              <operation name="justDoIt">
                   <soap:operation soapAction="tns:justDoIt"/>
                   <input>
                        <soap:body use="literal"/>
                   </input>
                   <output>
                        <soap:body use="literal"/>
                   </output>
              </operation>
         </binding>
         <service name="justDoItService">
              <port name="NewPort" binding="tns:justDoItBinding">
                   <soap:address location="http://127.0.0.1:8080/axis/services/justDoIt"/>
              </port>
         </service>
    </definitions>
    ------------------WSDL END
    ------------------BPEL START
              <assign name="assign-1">
                   <copy>
                        <from variable="input" part="payload" query="/tns:justDoItRequest/tns:input1"></from>
                        <to variable="req" part="parameter" query="/parameter/param1"/>
                   </copy>
              </assign>
              <invoke name="invoke-1" partnerLink="justDoIt" portType="tns:justDoItPort" operation="justDoIt" inputVariable="req" outputVariable="res"/>
    ------------------BPEL END
    reg,
    Amitoj.
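    For reference, the engine's complaint is that the location path selects more than one node, which the BPEL assign rules forbid. The single-node vs. multi-node distinction can be demonstrated with any XPath-style API; a small Python sketch using only the standard library (unrelated to the BPEL engine itself):

```python
import xml.etree.ElementTree as ET

# Two <param1> children: a location path like /parameter/param1 would then
# select multiple nodes, which a BPEL assign <from>/<to> query must not do.
doc = ET.fromstring("<parameter><param1>a</param1><param1>b</param1></parameter>")
print(len(doc.findall("param1")))  # 2 -> an ambiguous, multi-node selection
print(doc.find("param1").text)     # an assign query must resolve to one node
```

    Whether the extension-based schema actually produces duplicate element declarations in the engine's view of the message part is the open question in this thread; the sketch only illustrates the rule being enforced.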

    I am also having the XML extension issue, and since we are dealing with large WSDL files (exceeding 40,000 lines) it would be ridiculous to move all extension classes to the base type (parent class). Has this bug been looked at, and when can it be resolved? It seems to be a major issue in JDeveloper.
    Chris

  • Change one entity bean query, errors pop up in another... why?

    In my application I have the following entity bean:
    @Entity
    @Table(name = "ventureprofile")
    @SecondaryTable(name = "venture")
    @NamedQueries( {       
            @NamedQuery(name = "Ventureprofile.findByVentureprofileid", query = "SELECT v FROM Ventureprofile v WHERE v.ventureprofileid = :ventureprofileid"),
            @NamedQuery(name = "Ventureprofile.findByVentureid", query = "SELECT v FROM Ventureprofile v WHERE v.ventureid = :ventureid"),
            @NamedQuery(name = "Ventureprofile.findByLogoimagelocation", query = "SELECT v FROM Ventureprofile v WHERE v.logoimagelocation = :logoimagelocation"),
        @NamedQuery(name = "Ventureprofile.findByVisible", query = "SELECT v FROM Ventureprofile v WHERE v.visible = :visible ORDER BY v.venturename")
        //@NamedQuery(name = "Ventureprofile.findByVisibleRange", query = "SELECT v FROM Ventureprofile v WHERE v.visible = :visible ORDER BY v.venturename OFFSET 1 LIMIT 2")
    })
    public class Ventureprofile implements Serializable {
        @GeneratedValue(strategy=GenerationType.AUTO, generator="Ventureprofile.ventureprofileid.seq")
        @SequenceGenerator(name="Ventureprofile.ventureprofileid.seq", sequenceName="ventureprofile_ventureprofileid_seq", allocationSize=1) 
        @Column(name = "ventureprofileid", nullable = false)
        private BigInteger ventureprofileid;
        @Id
        @JoinColumn(name = "ventureid", referencedColumnName = "ventureid")
        private long ventureid;  
        @Lob
        @Column(name = "venturesummary")
        private String venturesummary;
        @Column(name = "logoimagelocation")
        private String logoimagelocation;
        @Column(name = "visible", nullable = false)
        private char visible;
        @Column(table = "venture", name = "venturename", insertable=false)
        private String venturename;
    ...and another entity bean:
    @Entity
    @Table(name = "venturelink")
    @NamedQueries( {
            @NamedQuery(name = "Venturelink.findByVenturelinkid", query = "SELECT v FROM Venturelink v WHERE v.venturelinkid = :venturelinkid"),
            @NamedQuery(name = "Venturelink.findByVentureprofileid", query = "SELECT v FROM Venturelink v WHERE v.ventureprofileid = :ventureprofileid"),
            @NamedQuery(name = "Venturelink.findByLinkurl", query = "SELECT v FROM Venturelink v WHERE v.linkurl = :linkurl"),
        @NamedQuery(name = "Venturelink.findByLinkname", query = "SELECT v FROM Venturelink v WHERE v.linkname = :linkname")
    })
    public class Venturelink implements Serializable {
        @Id
        @GeneratedValue(strategy=GenerationType.AUTO, generator="Venturelink.venturelinkid.seq")
        @SequenceGenerator(name="Venturelink.venturelinkid.seq", sequenceName="venturelink_venturelinkid_seq", allocationSize=1)  
        @Column(name = "venturelinkid", nullable = false)
        private BigInteger venturelinkid;
        @Column(name = "ventureprofileid", nullable = false)
        private long ventureprofileid;
        @Column(name = "linkurl", nullable = false)
        private String linkurl;
        @Column(name = "linkname", nullable = false)
        private String linkname;
    ...
    If I uncomment the last NamedQuery:
    @NamedQuery(name = "Ventureprofile.findByVisibleRange", query = "SELECT v FROM Ventureprofile v WHERE v.visible = :visible ORDER BY v.venturename OFFSET 1 LIMIT 2")
    I experience the following runtime error:
    javax.ejb.EJBException: nested exception is: java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
            java.rmi.RemoteException: null; nested exception is:
            java.lang.IllegalArgumentException: NamedQuery of name: Venturelink.findByVentureprofileid not found.
    Container: Glassfish
    Database: Postgres
    OS: WinXP sp2
    Persistence provider: TopLink
    Note that I am not altering 'Venturelink', I am only altering 'Ventureprofile', however errors are manifesting themselves in 'Venturelink'.
    I have heard a rumor that 'OFFSET' and 'LIMIT' are not supported by the majority of persistence providers, but that would be a ridiculous situation... unless the majority of persistence providers lazy-load results when they are pulled from a result set. Is that the case?
    Regardless, this is making me guess what is going on behind the scenes. So, at some point during runtime, is all of my EJB query code getting munged together? And then whatever interprets them into SQL is giving up as soon as it finds a query it doesn't like? That's what this smells like, at least...
    Typically, I find weird errors like this crop up when I'm doing something stupid that a system wasn't designed to handle. So I'd really appreciate it if someone would set me straight.

    Solution, via Doug Clarke at the TopLink forums:
    JPA's query language does not support OFFSET and LIMIT. Instead, JPA offers setFirstResult(int) and setMaxResults(int) on the Query object. In TopLink these generally translate to JDBC calls of similar names.
    (Thanks Doug!)
    So yeah... unless somebody knows otherwise and I'm wrong (often the case), it looks like things choke across multiple entity beans as soon as one malformed query is found. Not good.
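    For anyone landing here: what OFFSET 1 LIMIT 2 was trying to express maps onto Query.setFirstResult(int) and Query.setMaxResults(int) in JPA. The windowing itself is just a slice, sketched here in Python rather than JPA:

```python
def page(results, first_result, max_results):
    """Equivalent of Query.setFirstResult(first_result).setMaxResults(max_results)."""
    return results[first_result:first_result + max_results]

ventures = ["v1", "v2", "v3", "v4", "v5"]
print(page(ventures, 1, 2))  # skip one row, return at most two
```

    In JPA the same window would be requested with em.createNamedQuery("Ventureprofile.findByVisible").setFirstResult(1).setMaxResults(2), leaving the named query itself free of vendor-specific pagination keywords.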

  • SCCM query to export installed Microsoft products along with the last logged username and other info

    Dear all, for a SAM activity, I'm going to assess a customer of ours for the Microsoft products installed on their workstations.
    The customer has SCCM, and I would like to provide him a custom query to "have what we need", mainly a report with: "Device NetBIOS name", "Last Logged Username", "Operating System", "Installed Microsoft Product", "Last SCCM contact day".
    Finally, if possible, I would like to exclude the installed patches and MS updates from the report.
    As I'm a beginner with SCCM, has anyone of you already experienced this kind of need?
    Many thanks to all for the help
    Regards,
    Davide

    If the built-in reports don't give you exactly that, then clone them and add the extra fields.
    However, I have a feeling that your team is looking for the "ridiculous" software list report, so make sure that you read this blog post:
    http://be.enhansoft.com/post/2009/10/26/How-to-Perform-a-Basic-Software-Audit.aspx
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ
