CFOUTPUT GROUPing max results?

Hello All,
Just a question: is there a max number of rows that ColdFusion
can handle when outputting by group from a large query?
I have a query that returns 45631 rows in about 5 seconds; it
has some joined tables and grouping in the query.
I output the query using two nested groups:
<cfoutput query="get_data" group="suppname">
    Group title
    <cfoutput group="mpart">
        relevant part
        <cfoutput>
            all lines from query for that part
        </cfoutput>
    </cfoutput>
</cfoutput>
My page only seems to output 9999 results when I use GROUP in
the output, but if I just output the query with no grouping then I
get all 45631 results.
Does CF have a limit on the results when grouping in a
CFOUTPUT?
Kind Regards Guy

How are you counting your outputted results?

Similar Messages

  • Count(*) with group by max(date)

    SQL> select xdesc,xcust,xdate from coba1 order by xdesc,xcust,xdate;
    XDESC XCUST XDATE
    RUB-A 11026 01-JAN-06
    RUB-A 11026 05-JAN-06
    RUB-A 11026 08-JAN-06
    RUB-A 11027 10-JAN-06
    RUB-B 11026 02-JAN-06
    RUB-B 11026 08-JAN-06
    RUB-B 11026 09-JAN-06
    RUB-C 11027 08-JAN-06
I want SQL that produces this result:
XDESC     COUNT(*)
RUB-A     2
RUB-B     1
RUB-C     1
Criteria: group by XDESC and XCUST, taking MAX(XDATE) per group.
Below, the rows marked *** are the ones selected for the count.
    XDESC XCUST XDATE
    RUB-A 11026 01-JAN-06
    RUB-A 11026 05-JAN-06
    RUB-A 11026 08-JAN-06 ***
    RUB-A 11027 10-JAN-06 ***
    ---------------------------------------------------------COUNT RUB-A = 2
    RUB-B 11026 02-JAN-06
    RUB-B 11026 08-JAN-06
    RUB-B 11026 09-JAN-06 ***
    ---------------------------------------------------------COUNT RUB-B = 1
    RUB-C 11027 08-JAN-06 ***
    --------------------------------------------------------COUNT RUB-C = 1
Can anybody help?
I tried:
select xdesc, max(xdate), count(max(xdate)) from coba1 group by xdesc
ERROR at line 1:
ORA-00937: not a single-group group function
Thanks

This one is a duplicate; see the following link:
    Count(*) with group by max(date)
    Thanks
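A common way to get the desired result is a two-step aggregate: an inline view takes MAX(xdate) per (xdesc, xcust), and the outer query counts those rows per xdesc. A rough sketch of that logic using Python's sqlite3 (illustration only; table and column names follow the post, with ISO dates so MAX compares correctly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE coba1 (xdesc TEXT, xcust TEXT, xdate TEXT)")
conn.executemany("INSERT INTO coba1 VALUES (?, ?, ?)", [
    ("RUB-A", "11026", "2006-01-01"),
    ("RUB-A", "11026", "2006-01-05"),
    ("RUB-A", "11026", "2006-01-08"),
    ("RUB-A", "11027", "2006-01-10"),
    ("RUB-B", "11026", "2006-01-02"),
    ("RUB-B", "11026", "2006-01-08"),
    ("RUB-B", "11026", "2006-01-09"),
    ("RUB-C", "11027", "2006-01-08"),
])

# Inner query keeps one row (the max date) per xdesc/xcust pair;
# the outer query counts those rows per xdesc.
result = conn.execute("""
    SELECT xdesc, COUNT(*)
    FROM (SELECT xdesc, xcust, MAX(xdate) FROM coba1
          GROUP BY xdesc, xcust)
    GROUP BY xdesc
    ORDER BY xdesc
""").fetchall()
print(result)  # [('RUB-A', 2), ('RUB-B', 1), ('RUB-C', 1)]
```

The same shape works in Oracle as `SELECT xdesc, COUNT(*) FROM (SELECT xdesc, xcust, MAX(xdate) FROM coba1 GROUP BY xdesc, xcust) GROUP BY xdesc`, avoiding the ORA-00937 nested-aggregate error.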

  • Multiple cfoutput groups

    I have a report and need the results displayed in multiple
    groups. From one query result set I am using the cfoutput tag with
    the query and group attributes to display the results (types and
    number of workers) by mealtime (Lunch and Dinner). This works fine
    but I also need the types of workers grouped (with each mealtime)
    by which part of the restaurant they work in (front, back,
    management). To do this within the aforementioned cfoutput tag I
    added another cfoutput tag with a second group option. This almost
    worked but the problem is that I am only getting the first row of
    data from each one of the subgroups (where they work) and not all
of the rows. I tried adding the query attribute to the nested
cfoutput tag, but that is not allowed.
Has anyone used multiple group attributes in nested cfoutput
tags? Can it be done? If so, what am I doing wrong? If not, is there
another way?
    Thanks,
    Jason

jasonpresley wrote:
> Has anyone used multiple group attributes in nested cfoutput tags?
Yes, many times.
> Can it be done?
Of course, that is the whole point of the group attribute.
> If so, what am I doing wrong?
I don't know. You did not show what you actually did!
> If not is there another way?
Several, but it does not sound like they are necessary.
If I had to *guess* from your imprecise description, it sounds
like you have forgotten the final inner <cfoutput> block without
any group parameter.
The basic structure is as follows:
<cfoutput query="aQuery" group="aColumn">
    Stuff to output once per value of aColumn
    <cfoutput group="bColumn">
        Stuff to output once per value of bColumn
        <cfoutput group="nColumn">
            Stuff to output once per value of nColumn
            <cfoutput>
                Stuff to output for every record in the query
            </cfoutput>
            Stuff to output once per value of nColumn
        </cfoutput>
        Stuff to output once per value of bColumn
    </cfoutput>
    Stuff to output once per value of aColumn
</cfoutput>
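For what it's worth, this nested-group pattern is the same one Python's itertools.groupby implements; a small sketch with invented restaurant data, mirroring the three-level cfoutput structure, which likewise assumes the rows are already sorted by the group columns:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical rows, already sorted by (mealtime, area) -- cfoutput's
# group attribute likewise assumes the query is ORDERed BY the group columns.
rows = [
    {"mealtime": "Dinner", "area": "back",  "worker": "cook"},
    {"mealtime": "Dinner", "area": "front", "worker": "host"},
    {"mealtime": "Lunch",  "area": "back",  "worker": "cook"},
    {"mealtime": "Lunch",  "area": "back",  "worker": "dishwasher"},
    {"mealtime": "Lunch",  "area": "front", "worker": "server"},
]

lines = []
for meal, meal_rows in groupby(rows, key=itemgetter("mealtime")):
    lines.append(meal)                     # once per mealtime
    for area, area_rows in groupby(meal_rows, key=itemgetter("area")):
        lines.append("  " + area)          # once per area within the mealtime
        for r in area_rows:                # every record (innermost cfoutput)
            lines.append("    " + r["worker"])
print(lines)
```

If the rows are not sorted by the group keys, both groupby and cfoutput's group attribute will emit the same group header multiple times, which matches the "first row only" symptom people often report.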

  • Getting Subtotals in a cfoutput group

    Hi, I'm trying to create a report that will show call data (#
    calls, avg duration, etc.). It will output a section for each rep
    and within that section it will break down by market and finally by
    date. I also want to include subtotals for each rep and market
    (this might make more sense when you look at the code below).
    Right now I'm using cfoutput grouping with a couple q of q's
    inside to get the subtotals. It's almost working, except the q of q
    for the market subtotals isn't returning anything. If anyone has
    suggestions for a more efficient way to do this, I'm open to that
    too.
    Thanks in advance for any help. Here's my code:

    Hi Dan, thank you for the reply. Your solution would normally
    work, but I'm actually displaying the section totals at the top of
    the detail, so I can't take advantage of the existing looping
    structure. However, your idea did get me thinking and I found a
    solution.
    I moved the innermost Q of Q outside of the cfoutput block so
    it creates a result set with a row for each market. Then within the
    cfoutput I reference the query using
    #qryRepMarketCalls.ColName[loopcount]# syntax to output the correct
    subtotals for that row.
    It works perfectly and cuts down on the number of Q of Qs to
    about 1/6th of what I needed before (since it's outside the market
    level grouping). In case my rambling didn't make sense, I'll also
    post the final code.
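The approach described (one pass to precompute every subtotal, then plain lookups while rendering) can be sketched in Python; the data and names here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical call records: (rep, market, number of calls).
calls = [
    ("alice", "east", 4), ("alice", "east", 6), ("alice", "west", 5),
    ("bob",   "east", 3), ("bob",   "west", 7), ("bob",   "west", 2),
]

# One pass builds every (rep, market) subtotal up front, so the render
# loop can show a subtotal at the TOP of each section by simple lookup,
# instead of running one query-of-queries per group.
subtotal = defaultdict(int)
for rep, market, n_calls in calls:
    subtotal[(rep, market)] += n_calls

print(subtotal[("alice", "east")])  # 10
print(subtotal[("bob", "west")])    # 9
```

This is the same trade the poster made: pay for one aggregation up front rather than one per market-level group.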

  • Max results returned by a search

    Hi there,
I have been struggling with the Content DB API to find out the max number of results returned when performing a search.
Let's say, for instance, I am doing a search on attribute NAME = * on a library that potentially contains more than 100 000 documents. I don't mind whether the operation is time-consuming or not; my main concern is the actual limit. I suppose this is related to the SOAP message limit. Does anyone know about this?
    I forgot to mention that the search won't request any attribute but only the defaults (id, name, type and so forth ...).
    Cheers,

Hi,
I came across the reply below in a thread with a similar query to the one above.
    <<
    Justin Cave
    Posts: 10,696
    From: Michigan, USA
    Registered: 10/11/99
    Re: sort data in ref cursor
    Posted: Feb 3, 2005 10:30 AM in response to: [email protected] Reply
    No. You could sort the data in the SQL statement from which the REF CURSOR was created, but once you have a REF CURSOR, you cannot do anything but fetch from it.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
    >>
So, the results from a ref cursor can't be sorted? Is there no way out?
Kindly advise.

  • Reduce max. results in SRM F4-search

    Dear SRM gurus,
in SRM 5.0, on the "Cost assignment" tab: is there a possibility to set a value for the max. results in the F4 help "Find Account Assignment Data" and in "G/L Account"?
By default it is set to 500. Because we are facing performance issues, is there a possibility to reduce it to a maximum of 200 results?
    Thanks for your help in advance.
    Kind regards,
    Henning

Hi Henning,
I feel that in this case you need to make a template change, as the value of 500 is populated into the Web template.
Just right-click on the SRM web template and do a view source; this will open the code in Notepad. Make the changes where you are getting this search help and look for "Maximum No. of Hits". You will find the value there...
Ask someone technical on the team to make the changes in the template and publish the files again.
It will solve your issue.
Regards, Nishant

  • IGroup.getMembers max Result = 1500?

    Hello,
    I have a group from LDAP, with 1700 users in it. When using the Code
IGroupFactory gf2 = UMFactory.getGroupFactory();
IGroup grp2 = gf2.getGroup(GroupName);
Iterator gi2 = grp2.getMembers(false);
    the Iterator contains only 1500 Members. The users are all in this Group, so "getMembers(true)" doesn't make any difference. I know, I can set the maximum result, when using a search filter, but I couldn't find anything about a limitation for getMembers.
    Has anyone an idea?
    Best regards,
    Christian Schebesta

    Hi,
    I just lost a few hours on that same problem and found two places where this limit can be coming from:
    1) UME Configuration:
In this [help file|http://help.sap.com/saphelp_nw04s/helpdata/en/91/646d498fd94142a37e90a3b848e45e/content.htm] check the UME property ume.admin.search_maxhits.
    This property can be edited from the J2EE Visual Administrator, or from the Portal, in
    System Admin --> System Config --> UME Config --> select User admin UI tab, then
    under Search results and display tables, change the maximum number of search results and restart the server
    2) AD Configuration:
In this AD [help file|http://support.microsoft.com/kb/315071] we can read the following:
    MaxPageSize - This value controls the maximum number of objects that are returned in a single search result, independent of how large each returned object is. To perform a search where the result might exceed this number of objects, the client must specify the paged search control. This is to group the returned results in groups that are no larger than the MaxPageSize value. To summarize, MaxPageSize controls the number of objects that are returned in a single search result.
    Hope this helps.
    Martin
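As the MaxPageSize note says, results beyond the server cap only arrive if the client pages through them. A toy Python sketch of the idea (`paged_search` is a made-up helper for illustration, not part of any SAP or LDAP API):

```python
def paged_search(results, page_size):
    """Yield results one page at a time, mimicking a paged search control."""
    for start in range(0, len(results), page_size):
        yield results[start:start + page_size]

members = ["user%d" % i for i in range(1700)]  # e.g. a 1700-member group
pages = list(paged_search(members, 1500))      # server cap of 1500 per page
collected = [m for page in pages for m in page]
print(len(pages), len(collected))  # 2 1700
```

Without the paging loop, a client that issues a single request sees only the first 1500 entries, which matches the getMembers symptom above.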

  • GROUP and MAX Query

    Hi
    I have two tables that store following information
    CREATE TABLE T_FEED (FEED_ID NUMBER, GRP_NUM NUMBER);
    CREATE TABLE T_FEED_RCV (FEED_ID NUMBER, RCV_DT DATE);
    INSERT INTO T_FEED VALUES (1, 1);
    INSERT INTO T_FEED VALUES (2, 1);
    INSERT INTO T_FEED VALUES (3, 2);
    INSERT INTO T_FEED VALUES (4, NULL);
    INSERT INTO T_FEED VALUES (5, NULL);
    INSERT INTO T_FEED_RCV VALUES (2, '1-MAY-2009');
    INSERT INTO T_FEED_RCV VALUES (3, '1-FEB-2009');
    INSERT INTO T_FEED_RCV VALUES (4, '12-MAY-2009');
    COMMIT;
    I join these tables using the following query to return all the feeds and check when each feed was received:
    SELECT
    F.FEED_ID,
    F.GRP_NUM,
    FR.RCV_DT
    FROM T_FEED F
    LEFT OUTER JOIN T_FEED_RCV FR
    ON F.FEED_ID = FR.FEED_ID
    ORDER BY GRP_NUM, RCV_DT DESC;
Output:
FEED_ID  GRP_NUM  RCV_DT
1        1
2        1        5/1/2009
3        2        2/1/2009
5
4                 5/12/2009
    Actually I want the maximum date of when we received the feed. Grp_Num tells which feeds are grouped together. NULL grp_num means they are not grouped so treat them as individual group. In the example - Feeds 1 and 2 are in one group and any one of the feed is required. Feed 3, 4 and 5 are individual groups and all the three are required.
I need a single query that should return the maximum date for the feeds. For the example the result should be NULL, because out of feeds 1 and 2 the max date is 5/1/2009, for feed 3 the max date is 2/1/2009, for feed 4 it is 5/12/2009, and for feed 5 it is NULL. Since one of the required feeds is NULL, the result should be NULL.
    DELETE FROM T_FEED;
    DELETE FROM T_FEED_RCV;
    COMMIT;
    INSERT INTO T_FEED VALUES (1, 1);
    INSERT INTO T_FEED VALUES (2, 1);
    INSERT INTO T_FEED VALUES (3, NULL);
    INSERT INTO T_FEED VALUES (4, NULL);
    INSERT INTO T_FEED_RCV VALUES (2, '1-MAY-2009');
    INSERT INTO T_FEED_RCV VALUES (3, '1-FEB-2009');
    INSERT INTO T_FEED_RCV VALUES (4, '12-MAY-2009');
    COMMIT;
    For above inserts, the result should be for feed 1 and 2 - 5/1/2009, feed 3 - 2/1/2009 and feed 4 - 5/12/2009. So the max of these dates is 5/12/2009.
I tried using the MAX function grouped by GRP_NUM and also tried using DENSE_RANK, but was unable to resolve the issue. I am not sure how I can use the same query to return a non-null value for each group and null (if any) for those feeds that don't belong to any group. I appreciate it if anyone can help me.
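One way to sanity-check the required logic outside Oracle: give each ungrouped feed a synthetic one-feed group, take MAX(rcv_dt) per group, and let a NULL group max (a required feed that never arrived) sort first. A sketch with Python's sqlite3 (illustration only; ISO dates, and an `IS NULL` sort key standing in for Oracle's NULLS FIRST):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t_feed     (feed_id INTEGER, grp_num INTEGER);
    CREATE TABLE t_feed_rcv (feed_id INTEGER, rcv_dt TEXT);
    INSERT INTO t_feed VALUES (1,1),(2,1),(3,2),(4,NULL),(5,NULL);
    INSERT INTO t_feed_rcv VALUES (2,'2009-05-01'),(3,'2009-02-01'),
                                  (4,'2009-05-12');
""")

# Ungrouped feeds (grp_num IS NULL) get a synthetic one-feed group id;
# the first ORDER BY key pushes any NULL group max to the top, so a
# single missing required feed makes the overall answer NULL.
row = conn.execute("""
    SELECT MAX(r.rcv_dt) AS max_dt
    FROM t_feed f
    LEFT JOIN t_feed_rcv r ON f.feed_id = r.feed_id
    GROUP BY COALESCE('G' || f.grp_num, 'F' || f.feed_id)
    ORDER BY (MAX(r.rcv_dt) IS NULL) DESC, MAX(r.rcv_dt) DESC
    LIMIT 1
""").fetchone()
print(row[0])  # None: feed 5 was never received
```

With the second data set from the post (feed 5 removed and every group received), the same query returns the latest group max instead of NULL.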

    Hi,
    Kuul13 wrote:
    Thanks Frank!
Appreciate your time and solution. I tweaked your earlier solution, which was cleaner and simpler, and built the following query to resolve the problem.
    SELECT * FROM (
    SELECT NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY')) AS GRP_ID
    ,MAX (FR.RCV_DT) AS MAX_DT
    FROM T_FEED F
    LEFT OUTER JOIN T_FEED_RCV FR ON F.FEED_ID = FR.FEED_ID
    GROUP BY NVL (F.GRP_NUM, F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY'))
    ORDER BY MAX_DT DESC NULLS FIRST)
    WHERE ROWNUM=1;
I hope there are no hidden issues with this query compared to the later one you provided.
Actually, I can see 4 issues with this. I admit that some of them are unlikely, but why take any chances?
(1) The first argument to NVL is a NUMBER, the second (being the result of ||) is a VARCHAR2. That means one of them will be implicitly converted to the type of the other. This is just the kind of thing that behaves differently in different versions of Oracle, so it may work fine for a year or two and then, when you change to another version, mysteriously quit working. When you have to convert from one type of data to another, always do an explicit conversion, using TO_CHAR (for example).
(2) F.CARR_ID || F.FEED_ID || TO_CHAR(EFF_DT, 'MMDDYYYY') will produce a key like '123405202009'. grp_num is a NUMBER with no restriction on the number of digits, so it could conceivably be 123405202009. The made-up grp_ids must never be the same as any real grp_num.
(3) The combination (carr_id, feed_id, eff_dt) is unique, but using TO_CHAR(EFF_DT, 'MMDDYYYY') assumes that the combination (carr_id, feed_id, TRUNC(eff_dt)) is unique. Even if eff_dt is always entered as (say) midnight (00:00:00) now, you may decide to start using the time of day sometime in the future. What are the chances that you'll remember to change this query when you do? Not very likely. If multiple rows from the same day are relatively rare, this is the kind of error that could go on for months before you even realize that there is an error.
(4) Say you have this data in t_feed:
carr_id  feed_id  eff_dt       grp_num
1        234      20-May-2009  NULL
12       34       20-May-2009  NULL
123      4        20-May-2009  NULL
All of these rows will produce the same grp_id: 123405202009.
    Using NVL, as you are doing, allows you to get by with just one sub-query, which is nice.
    You can do that and still address all the problems above:
SELECT  *
FROM    (
        SELECT  NVL2 ( F.GRP_NUM
                     , 'A' || TO_CHAR (F.GRP_NUM)
                     , 'B' || TO_CHAR (F.CARR_ID) || ':' ||
                              TO_CHAR (F.FEED_ID) || ':' ||
                              TO_CHAR (F.EFF_DT, 'MMDDYYYYHH24MISS')
                     ) AS GRP_ID
        ,       MAX (FR.RCV_DT) AS MAX_DT
        FROM             T_FEED      F
        LEFT OUTER JOIN  T_FEED_RCV  FR  ON  F.FEED_ID = FR.FEED_ID
        GROUP BY  NVL2 ( F.GRP_NUM
                       , 'A' || TO_CHAR (F.GRP_NUM)
                       , 'B' || TO_CHAR (F.CARR_ID) || ':' ||
                                TO_CHAR (F.FEED_ID) || ':' ||
                                TO_CHAR (F.EFF_DT, 'MMDDYYYYHH24MISS')
                       )
        ORDER BY  MAX_DT  DESC  NULLS FIRST
        )
WHERE   ROWNUM = 1;
I would still use two sub-queries, adding one to compute GRP_ID, so we don't have to repeat the NVL2 expression. I would also use a WITH clause rather than in-line views.
    Do you find it easier to read the query above, or the simpler query you posted in your last message?
    Please make things easy on yourself and the people who want to help you. Always format your code so that the way the code looks on the screen makes it clear what the code is doing.
    In particular, the formatting should make clear
    (a) where each clause (SELECT, FROM, WHERE, ...) of each query begins
    (b) where sub-queries begin and end
    (c) what each argument to functions is
    (d) the scope of parentheses
    When you post formatted text on this site, type these 6 characters:
    before and after the formatted text, to preserve spacing.
    The way you post the DDL (CREATE TABLE ...)  and DML (INSERT ...) statements is great: I wish more people were as helpful as you.
There's no need to format the DDL and DML. (If you want to, then go ahead: it does help a little.)

  • Grouping by the results of a group by

    I want to group the results of a query that itself is a group by (I have simplified it).
    select Id, MAX(type), MIN(type), COUNT(*)
    from aa
    group by Id
    gives
    1 03 01 2
    2 03 01 2
    3 01 01 1
    4 03 01 2
    5 03 01 3
and so on (there are over 100,000 rows in my actual results).
Now I want to group by the Max, Min and Count columns to give (continuing the example):
Max  Min  Count  New Count
01   01   1      1
03   01   2      3
03   01   3      1
    I could load the results of the first group by into a temporary table and then do a second (group by) query on the temporary table.
    Has anyone got a better way?
    Richard Body

SELECT max_type,
       min_type,
       count_id,
       COUNT(*)
FROM (
  SELECT id, MAX(type) max_type,
             MIN(type) min_type,
             COUNT(*)  count_id
  FROM aa
  GROUP BY id
)
GROUP BY max_type,
         min_type,
         count_id
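The inline-view approach can be checked quickly with Python's sqlite3; sample rows below are invented to reproduce the five ids shown in the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aa (id INTEGER, type TEXT)")
conn.executemany("INSERT INTO aa VALUES (?, ?)", [
    (1, "01"), (1, "03"),            # id 1: max 03, min 01, count 2
    (2, "01"), (2, "03"),            # id 2: max 03, min 01, count 2
    (3, "01"),                       # id 3: max 01, min 01, count 1
    (4, "01"), (4, "03"),            # id 4: max 03, min 01, count 2
    (5, "01"), (5, "02"), (5, "03"), # id 5: max 03, min 01, count 3
])

# Outer GROUP BY runs directly over the inline view -- no temporary table.
result = conn.execute("""
    SELECT max_type, min_type, count_id, COUNT(*)
    FROM (SELECT id, MAX(type) max_type,
                     MIN(type) min_type,
                     COUNT(*)  count_id
          FROM aa GROUP BY id)
    GROUP BY max_type, min_type, count_id
    ORDER BY max_type, count_id
""").fetchall()
print(result)  # [('01', '01', 1, 1), ('03', '01', 2, 3), ('03', '01', 3, 1)]
```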

  • What is the best way of returning group-by sql results in Toplink?

I have a many-to-many relationship between Employee and Project; an
Employee can have many Projects, and a Project can be owned by many Employees.
    I have three tables in the database:
    Employee(id int, name varchar(32)),
    Project(id int, name varchar(32)), and
    Employee_Project(employee_id int, project_id int), which is the join-table between Employee and Project.
Now, I want to find out, for each employee, how many projects the employee has.
    The sql query that achieves what I want would look like this:
    select e.id, count(*) as numProjects
    from employee e, employee_project ep
    where e.id = ep.employee_id
    group by e.id
    Just for information, currently I am using a named ReadAllQuery and I write my own sql in
    the Workbench rather than using the ExpressionBuilder.
    Now, my two questions are :
1. Since there is a "group by e.id" in the query, only e.id can appear in the select clause.
This prevents me from returning the full Employee pojo using ReadAllQuery.
    I can change the query to a nested query like this
    select e.eid, e.name, emp.cnt as numProjects
    from employee e,
    (select e_inner.id, count(*) as cnt
    from employee e_inner, employee_project ep_inner
    where e_inner.id = ep_inner.employee_id
    group by e_inner.id) emp
    where e.id = emp.id
    but, I don't like the complication of having extra join because of the nested query. Is there a
    better way of doing something like this?
    2. The second question is what is the best way of returning the count(*) or the numProjects.
    What I did right now is that I have a ReadAllQuery that returns a List<Employee>; then for
    each returned Employee pojo, I call a method getNumProjects() to get the count(*) information.
    I had an extra column "numProjects" in the Employee table and in the Employee descriptor, and
    I set this attribute to be "ReadOnly" on the Workbench; (the value for this dummy "numProjects"
    column in the database is always 0). So far this works ok. However, since the numProjects is
    transient, I need to set the query to refreshIdentityMapResult() or otherwise the Employee object
    in the cache could contain stale numProjects information. What I worry is that refreshIdentityMapResult()
    will cause the query to always hit the database and beat the purpose of having a cache. Also, if
    there are multiple concurrent queries to the database, I worry that there will be a race condition
    of updating this transient "numProjects" attribute. What are the better way of returning this kind
    of transient information such as count(*)? Can I have the query to return something like a tuple
    containing the Employee pojo and an int for the count(*), rather than just a Employee pojo with the
    transient int inside the pojo? Please advise.
    I greatly appreciate any help.
    Thanks,
    Frans

    No I don't want to modify the set of attributes after TopLink returns it to me. But I don't
    quite understand why this matters?
    I understand that I can use ReportQuery to return all the Employee's attributes plus the int count(*)
    and then I can iterate through the list of ReportQueryResult to construct the Employee pojo myself.
I was hesitant to do this because I think there will be a performance cost from not being able to
use lazy fetching. For example, in the case of large result sets where the client only needs a few of them,
if we use the above approach, we need to iterate through all of them and wastefully create all the Employee
    pojos. On the other hand, if we let Toplink directly return a list of Employee pojo, then we can tell
    Toplink to use ScrollableCursor and to fetch only the first several rows. Please advise.
    Thanks.
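The nested-query version from the original post (join the aggregate back to employee) is easy to prototype outside TopLink; a sqlite3 sketch with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER, name TEXT);
    CREATE TABLE employee_project (employee_id INTEGER, project_id INTEGER);
    INSERT INTO employee VALUES (1,'Ann'),(2,'Ben');
    INSERT INTO employee_project VALUES (1,10),(1,11),(2,10);
""")

# Joining the per-employee aggregate back to employee returns the full
# row plus the count in one round trip -- the same shape as the nested
# query in the question.
rows = conn.execute("""
    SELECT e.id, e.name, emp.cnt AS numProjects
    FROM employee e
    JOIN (SELECT employee_id, COUNT(*) AS cnt
          FROM employee_project
          GROUP BY employee_id) emp
      ON e.id = emp.employee_id
    ORDER BY e.id
""").fetchall()
print(rows)  # [(1, 'Ann', 2), (2, 'Ben', 1)]
```

Whether the extra join is worth it versus a ReportQuery is an ORM trade-off, but the SQL itself is a standard aggregate-then-join pattern.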

  • RoboHelp HTML 9 Search Pane Max results controls

    RoboHelp HTML 9 with WebHelp as output...
I have a two-part question. By default, when our users do a search in the compiled help, they see 10 search results per page. Is there a place in the RoboHelp interface where I can change that to a larger number? I have found a line of code in a *.js file where it looks like I could manually change it, but I would prefer not to go that route in case it causes unintended hiccups elsewhere.
    var nMaxResult = 10 ; 
The second part of my question relates to a statement in the release notes/new features notes for RoboHelp 9 saying that users can control the max search results per page on the fly as they use the Search pane. This feature only applies to WebHelp or WebHelp Pro, and my output is WebHelp. I can see a line of code for that feature in the whfform.htm file, but nothing like that shows up in the Search pane within the compiled help. Is this a bug, or is there a control somewhere in the RoboHelp project settings that allows me to enable that option for users? The last 2 lines below would seem to control this area. The Search pane items referenced in the first 4 lines do show up fine in my Search pane.
    gsTitle = "Type in the word(s) to search for:";
    gsTitle = "Type in the word(s) to search for:";
    gsHiliteSearchTitle = "Highlight search results";
    gsHiliteSearchTitle = "Highlight search results";
    gsMaxSearchTitle = "Search results per page" ;
    gsMaxSearchTitle = "Search results per page";
    Thanks for any insight you can provide,
    KF

    I know this is an old topic, but I was trying to figure this out myself today and finally found it. In case others have been looking for this, it's in in this file:
    whform.js
    find this variable:
    var gnMaxRslt = 10;
    and then change the 10 to the number you want.
    If you do that in the generated help, the number will change in the Search pane, but it will still display only 10 results. To get it to actually display more than 10 results, I had to edit the RoboHelp file, not the generated help. I made a copy of C:\Program Files (x86)\Adobe\Adobe RoboHelp 10\RoboHTML\WebHelp5Ext\template_stock\whform.js before I edited it. (Right-click the file, click Copy, then right-click in the folder and click Paste. Edit the one that doesn't say Copy in the filename. Then if you want to back it out, delete the one you edited and remove Copy from the original.) If you have your project open in RoboHelp, close it, then reopen it so it can "grab" the new info.
    This ought to be in the WebHelp Setting dialog box on the Search pane.
    --Karla--

  • [sql performance] inline view , group by , max, join

    Hi. everyone.
    I have a question with regard to "group by" inline view ,
    max value, join, and sql performance.
    I will give you simple table definitions in order for you
    to understand my intention.
    Table A (parent)
    C1
    C2
    C3
    Table B (child)
    C1
    C2
    C3
    C4 number type(sequence number)
    1. c1, c2, c3 are the key columns of table A.
    2. c1, c2, c3, c4 are the key columns of table B.
    3. table A is the parent table of Table B.
    4. c4 column of table b is the serial number.
    (c4 starts at 1 and increases by 1 within each (c1, c2, c3) combination)
    the following is the simple example of the sql query.
    select .................................
    from table_a,
    (select c1, c2, c3, max(c4)
    from table_b
    group by c1, c2, c3) table_c
    where table_a.c1 = table_c.c1
    and table_a.c2 = table_c.c2
    and table_a.c3 = table_c.c3
    The real query is not as simple as the one above; more tables come
    after the FROM clause.
    Table A and table B are big tables, which have more than
    100,000,000 rows.
    The response time of this SQL is very, very slow,
    as anyone would expect.
    Are there any solutions or SQL tips for improving the response time?
    I am considering adding a new column to table B in
    order to identify the row that has the max serial number.
    At this point, I am not sure adding a column is a good
    idea in every respect.
    I will be waiting for your advice and every response
    will be appreciated even if it is not the solution.
    Have a good day.
    HO.

    For such big sources, check that:
    1) you use full scans, hash joins, or at least merge joins
    2) you scan your source data as little as possible; in the best case, each necessary table only once (for example, not using an EXISTS clause that effectively scans a whole table via index lookups)
    3) how much time you are spending on sorts and hash joins (either from v$session_longops directly or via some tool that visualises this info). If you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and set sort_area_size and hash_area_size big enough to do as few sorts on disk as possible
    4) if you have enough free resources, i.e. a big box, you can probably consider using some parallelism
    5) if your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain
    6) run a trace and check what you are waiting on
    7) most probably your problem is IO-bound, so perhaps you can do something on the OS side to make IO faster
    8) if your query is now optimized as much as you can, the disks are running like mad, and you are using all RAM, then that is probably the most you can get out of your box :)
    9) if nothing helps, then you can start thinking about precalculation, either using your idea of a derived column or some materialized views
    10) I hope you have a test box; at least do point (9) on it first and see whether it helps.
    Gints Plivna
    http://www.gplivna.eu
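    To make the discussion concrete, here is a minimal, runnable sketch of the max-per-group pattern from the question. It uses Python with an in-memory SQLite database as a stand-in for Oracle (requires SQLite 3.25+ for window functions); the table and column names follow the post, but the data is invented. It compares the original GROUP BY inline view with an analytic (window-function) rewrite, which in Oracle also lets you pull other columns from the max-c4 row without a second pass over table_b:

    ```python
    import sqlite3

    # Toy schema mirroring the post: table_b's c4 is a per-(c1,c2,c3) sequence.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE table_a (c1, c2, c3);
        CREATE TABLE table_b (c1, c2, c3, c4);
        INSERT INTO table_a VALUES (1,1,1), (1,1,2);
        INSERT INTO table_b VALUES (1,1,1,1), (1,1,1,2), (1,1,1,3),
                                   (1,1,2,1), (1,1,2,2);
    """)

    # Original approach: join the parent against a GROUP BY inline view.
    group_by_sql = """
        SELECT a.c1, a.c2, a.c3, c.max_c4
        FROM table_a a
        JOIN (SELECT c1, c2, c3, MAX(c4) AS max_c4
              FROM table_b GROUP BY c1, c2, c3) c
          ON a.c1 = c.c1 AND a.c2 = c.c2 AND a.c3 = c.c3
        ORDER BY a.c1, a.c2, a.c3
    """

    # Analytic rewrite: rank rows within each key and keep only the top one,
    # so the whole max-c4 row (all its columns) is available directly.
    window_sql = """
        SELECT a.c1, a.c2, a.c3, b.c4 AS max_c4
        FROM table_a a
        JOIN (SELECT c1, c2, c3, c4,
                     ROW_NUMBER() OVER (PARTITION BY c1, c2, c3
                                        ORDER BY c4 DESC) AS rn
              FROM table_b) b
          ON a.c1 = b.c1 AND a.c2 = b.c2 AND a.c3 = b.c3
        WHERE b.rn = 1
        ORDER BY a.c1, a.c2, a.c3
    """

    print(con.execute(group_by_sql).fetchall())  # [(1, 1, 1, 3), (1, 1, 2, 2)]
    print(con.execute(window_sql).fetchall())    # same rows
    ```

    Whether the analytic form is actually faster on 100M-row tables depends on the plan; the point is that both forms are interchangeable, so you can test each against your real data.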

  • Message tracking max results returned

    Hi All,
    On the message tracking page, the predefined max results returned value has only 3 options: 250, 500, and 1000. May I know how I can increase the query setting to return more than 1000 results?
    Regards,
    Rock

    Hi Rock,
    You cannot increase the query to more than 1000, but you can change your other parameters.  For example, you could change the date scope until you get down to under 1000 results. 
    You will have to run multiple queries to pull all the data you're looking for, but unless/until they change the maximum results this will give you a workaround.
    Thanks,
    Rachel
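    The "narrow the date scope and run multiple queries" workaround generalizes to any tool with a hard cap on results per query. A minimal sketch (plain Python; the function name and 6-hour step are invented for illustration) of splitting a date range into windows, each of which would then be submitted as its own tracking query:

    ```python
    from datetime import datetime, timedelta

    def date_windows(start, end, step_hours=6):
        """Yield (window_start, window_end) pairs covering [start, end)."""
        cur = start
        step = timedelta(hours=step_hours)
        while cur < end:
            nxt = min(cur + step, end)
            yield cur, nxt
            cur = nxt

    # One day split into 6-hour windows -> 4 separate sub-queries.
    windows = list(date_windows(datetime(2024, 1, 1), datetime(2024, 1, 2)))
    ```

    If a single window still hits the 1000-result cap, shrink `step_hours` until each window returns fewer rows than the limit.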

  • Group by in Result List

    Hi,
    Is it possible to use Group By in the Result list? I can't find it in the configuration panel, but maybe this is possible in another way.
    Thanks!

    Result Lists can only show data at the grain of the Endeca record, so no.  Usually Results Lists are used to show descriptions or other unstructured portions of the record.   It sounds as if you may have your unstructured data duplicated across multiple records after denormalizing.  Is this the case?  Is there an option to keep the unstructured attribute(s) from duplicating?

  • Grouping refinement while grouping the search results in sku based indexing

    Hi,
    We are doing sku based indexing and for a business requirement we had to group the results by product. We were able to achieve this by setting the sorting attribute in the search request.
    sorting=property
    sortProperty=string:$repositoryId:1
    But in the search response, though the results are grouped by product, the refinement count for the facet created on the sku property is greater than the total results count. To group the refinements as well, we used refineCount=group in the search request but did not find any difference in the response or the refinement counts. Even though refineCount=group is present in the search request, it is not showing up in the search response.
    Is there a way to group the refinements when the results are grouped in sku based indexing?

    Hi,
    What is your ATG version?
    Regards,
    Jai
