Too many rows in a collection cause Siena to slow down rapidly

Is this a known issue? If I add more than 1000 rows to a collection, Siena starts to slow down rapidly. I don't know if I will have this problem in the published app or if this is just a Siena issue. Thanks for any answers.

I believe, for example, that Microsoft has currently set the limit for Excel data at 15,000 records.
Have you checked whether it's faster to use LoadData/SaveData and create the collection from the file in the app's storage? That may speed things up too (maybe). My post in
http://social.technet.microsoft.com/Forums/en-US/1226fb54-bfb4-4eaa-8a7f-7bd0a270ea4d/using-loaddata-and-savedata-yet-again?forum=projectsiena#c5018405-f066-4422-bebb-a2d8acb4eb7c may give you some pointers if you need them.
You can also use the function explained in
http://social.technet.microsoft.com/Forums/en-US/3920f6b9-db62-4530-b891-4f9c1ac4832d/record-filtering-wiithin-gallery?forum=projectsiena to move through the data in blocks of a few hundred rows per click if you want.
Regards
StonyArc

Similar Messages

  • Result set does not fit; it contains too many rows

    Dear All,
We are on BI 7 and running reports in Excel 2007. Even though the row limit in Excel 2007 is more than 1 million, when I try to execute a report with more than 65k records of output, the system generates output for only 65k rows, with the message "Result set does not fit; it contains too many rows".
Our patch levels:
GUI - 7.10
Patch level 11
Is there any way to generate more than 65,000 rows in BEx?
    Thanks in advance...
    regards,
    Raju
    Dear Gurus,
    Could you please shed some light on this issue?
    thanks and regards,
    Raju
    Edited by: VaraPrasadraju Potturi on Apr 14, 2009 3:13 AM

    Vara Prasad,
This has been discussed on the forums - for reasons of backward compatibility, I do not think BEx supports more than 65,000 rows. I am still not sure about this, since I have not tried a query with more than 65k rows in Excel 2007, but I think it is not possible.

  • List of Value: Best practice when there are too many rows.

    Hi,
I am working in JDev 12c. Imagine the following scenario: we have an employee table with organization_id as one of its attributes, and I want to set up an LOV for this attribute. From what I understand, if the Organization table contains too many rows (like 3,000), a plain LOV will create extreme overhead, and it would also be impossible to scroll through it. So I have decided on the obvious option: to use the LOV as a Combo Box with List of Values. Great so far.
That LOV will be used by every user, but it doesn't really depend on the user, and the list of organizations will rarely change. I have a sharedApplicationModule that I am using to retrieve lookup values from the DB. Do you think it would be OK to put my ORGANIZATION VO in there and create the View Accessor for my LOV in the Employees View?
What considerations should I take into account in terms of tuning the Organization VO?
    Regards

    Hi Raghava,
    as I said, "Preparation Failed" may be (if I recall correctly) as early as the HTTP request to even get the document for indexing. If this is not possible for TREX, then of course the indexing fails.
    What I suggested was a manual reproduction. So log on to the TREX host (preferrably with the user that TREX uses to access the documents) and then simply try to open one of the docs with the "failed" status by pasting its address in the browser. If this does not work, you have a pretty good idea what's happening.
    Unfortunately, if that were the case, this would the be some issue in network communications or ticketing and authorizatuions, which I can not tell you from here how to solve.
    In any case, I would advise to open a support message to SAP - probably rather under the portal component than under TREX, as I do not assume that this stage of a queue error has anything to do with the actual engine.
    Best,
    Karsten

  • Exception too many rows...

    Hi
I am getting two different outputs from the following code, depending on whether I declare the variable the first way or the second way.
When I declare the variable v_empno as NUMBER(10) and the TOO_MANY_ROWS exception is raised, the variable is NULL when I print it with DBMS_OUTPUT afterwards.
But when I declare the same variable as table.column%TYPE and the same scenario happens, the variable is not NULL; rather, it holds the first value from the output of the query.
declare
  --v_empno number(10);
  v_empno emp.empno%type;
begin
  dbms_output.put_line('before '||v_empno);
  select empno into v_empno from emp;
  dbms_output.put_line('first '||v_empno);
exception
  when too_many_rows then
    dbms_output.put_line('second '||v_empno);
    dbms_output.put_line('exception'||sqlerrm);
end;
Is there any specific reason for this? Your comments please.
    Thanks
    Sidhu

    In 9i:
    SQL> declare
      2  --v_empno number(10);
      3  v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> declare
      2  v_empno number;
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  v_empno number(10);
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
In 10g:
    SQL> declare
      2  v_empno number(10);
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11  end;
    12  /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  v_empno number;
      3  --v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
    PL/SQL procedure successfully completed.
    SQL> edit
    Wrote file afiedt.buf
      1  declare
      2  --v_empno number;
      3  v_empno emp.empno%type;
      4  begin
      5  dbms_output.put_line('before '||v_empno );
      6  select empno into v_empno from emp;
      7  dbms_output.put_line('first '||v_empno);
      8  exception when too_many_rows then
      9  dbms_output.put_line('second '||v_empno);
    10  dbms_output.put_line('exception'||sqlerrm);
    11* end;
    SQL> /
    before
    second 7369
    exceptionORA-01422: exact fetch returns more than requested number of rows
PL/SQL procedure successfully completed.
Anyhow, you should not rely on the fact that Oracle fetches the first value into the variable and keeps it when the exception is raised.
    Tom Kyte discusses the SELECT INTO issue here:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:7849913143702726938::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:1205168148688
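For what it's worth, the defensive pattern is to treat the variable as undefined once TOO_MANY_ROWS fires and reset it in the handler. A minimal sketch against the same EMP table:
declare
  v_empno emp.empno%type;
begin
  select empno into v_empno from emp;
exception
  when too_many_rows then
    -- whatever v_empno holds here is an implementation accident; discard it
    v_empno := null;
    dbms_output.put_line('after reset: '||nvl(to_char(v_empno), 'NULL'));
end;
/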
    Rgds.

  • WUT-113 Too many rows when using Webutil to upload a doc or pdf

    Hi All,
I am using WebUtil to upload and view files and am receiving the error message 'WUT-113 Too many rows matched the supplied where clause'.
The process works fine for uploading photos, but not when I try to upload documents and PDF files.
I am on v10.1.2.0.2 and using XP.
The code example (for documents) is outlined below, but I think the issue must be down to some sort of incorrect WebUtil configuration on my machine.
declare
  vfilename varchar2(3000);
  vboolean  boolean;
begin
  vfilename := client_get_file_name('c:\', file_filter => 'Document files (*.doc)|*.doc|');
  vboolean  := webutil_file_transfer.client_to_db_with_progress(
                 vfilename,
                 'lobs_table',
                 'word_blob',
                 'blob_id = '||:blob_id,
                 'Progress',
                 'Uploading File '||vfilename,
                 true,
                 'CHECK_LOB_PROGRESS');
end;
    Any assistance in this matter would be appreciated.
    kind regards,
    Tom

    Hi Sarah,
    Many thanks for the reply.
All I'm trying to do is click the Browse Document button in a form to upload a document (in this example) from my machine and save it to the DB table called lobs_table using the webutil_file_transfer.Client_To_DB_With_Progress program.
When I first access the form, the field :blob_id is populated (by a When-Create-Record trigger) with a value made up of SYSDATE in NUMBER format as DDMMHHMISS, e.g. 0106101025.
When I press 'Browse Document' (a button in the form), the dialog box opens; I select a document and click OK, and then I see the error message 'WUT-113 Too many rows matched the supplied where clause'. Yet the where-clause element of the call to the Client_To_DB_With_Progress program should be the :blob_id value ('blob_id = '||:blob_id), i.e. a single value populated when I first access the form - so why am I seeing the too-many-rows error?
I may be missing something obvious, as I've only just started using WebUtil.
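One simple check I can run (assuming I can query lobs_table directly in SQL*Plus; table and column names as in my call above) is whether duplicate blob_id values already exist, since a DDMMHHMISS-based key could easily collide across months or years:
select blob_id, count(*)
from   lobs_table
group by blob_id
having count(*) > 1;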
    Kind regards,
    Tom

  • TOO MANY ROWS error experienced during DBMS_DATA_MINING.APPLY

On a test database running 10.1.0.2, when I execute APPLY specifying a data_table_name from another schema [data_schema_name], I get ORA-01422, too many rows fetched.
    However, when I exercise this same option [data table in another schema] on version 10.1.0.3, I get no error and the APPLY works.
    Could someone tell me if this is an error that was fixed between the two versions and point me to the documentation that supports this?
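For reference, my call looks roughly like this (object names changed); data_schema_name is the parameter that points APPLY at the other schema:
BEGIN
  DBMS_DATA_MINING.APPLY(
    model_name          => 'MY_MODEL',       -- hypothetical names throughout
    data_table_name     => 'SCORING_DATA',
    case_id_column_name => 'CASE_ID',
    result_table_name   => 'APPLY_RESULT',
    data_schema_name    => 'OTHER_SCHEMA');
END;
/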
    Thanks,
    Dianna

    Dianna,
The behavior you observed could be a result of some of the security bug fixes that our group back-ported to the 10.1.0.3 patch-set release. ST releases security bundle patches periodically, and some of them are back-ported to earlier releases.
There is no specific documentation regarding security bundle patch contents, for security reasons.
    Regards,
    Xiafang

  • Too many rows found

I have two data blocks: one joins two tables, and the second is based on a single table.
The first data block has fields with a 1:1 relationship to packing_id, while the second (details) block has multiple rows for every packing_id. I wrote two procedures for the two data blocks, called from their respective Post-Query triggers.
My problem is that when I run the form, it shows the message 'too many rows found_orders_begin'.
Here is my code.
PROCEDURE post_query IS
  CURSOR mast_cur IS
    SELECT pa.ship_to_last_name,
           pa.ship_to_first_name,
           pa.ship_to_address1,
           pa.ship_to_address2,
           pa.ship_to_city,
           p.packing_id
    FROM   packing_attributes pa, packing p
    WHERE  p.packing_id = pa.packing_id
    AND    p.packing_id = :PACKING_JOINED.PACKING_ID;
BEGIN
  Message('too many rows found_orders_begin');
  OPEN mast_cur;
  LOOP
    FETCH mast_cur INTO :PACKING_JOINED.SHIP_TO_LAST_NAME,
                        :PACKING_JOINED.SHIP_TO_FIRST_NAME,
                        :PACKING_JOINED.SHIP_TO_ADDRESS1,
                        :PACKING_JOINED.SHIP_TO_ADDRESS2,
                        :PACKING_JOINED.SHIP_TO_CITY,
                        :PACKING_JOINED.PACKING_ID;
    EXIT WHEN mast_cur%NOTFOUND;
  END LOOP;
  CLOSE mast_cur;
EXCEPTION
  WHEN too_many_rows THEN
    Message('too many rows found');
  WHEN no_data_found THEN
    Message('no data was found there');
  WHEN OTHERS THEN
    Message('do something else');
END post_query;
Detail proc:
PROCEDURE post_query IS
  CURSOR det_cur IS
    SELECT pd.quantity,
           pd.stock_number
    FROM   packing_details pd, packing p
    WHERE  p.packing_id = pd.packing_id
    AND    pd.packing_id = :PACKING_JOINED.PACKING_ID;
BEGIN
  Message('too many rows found_pack_begin');
  OPEN det_cur;
  FETCH det_cur INTO :DETAILS.QUANTITY,
                     :DETAILS.STOCK_NUMBER;
  CLOSE det_cur;
EXCEPTION
  WHEN too_many_rows THEN
    Message('too many rows found');
  WHEN no_data_found THEN
    Message('no data was found there');
  WHEN OTHERS THEN
    Message('do something else');
END post_query;
    Thanks in advance for your help.
    Sandy

Thanks for the reply.
> Maybe it gives this message because you have programmed it to show this message?
I intentionally added this message to see how far my code gets. If I don't show this message and execute the query, I get FRM-41050: You cannot update this record - even though I am not updating the record (I am querying it) and the data block's Update Allowed property is set to No.
> Some additional comments on your code: what is the loop supposed to do? You just fill the same form fields over and over with the values of your cursor, so after the loop the last record from your query will be shown. In general, in POST-QUERY you read lookups, not details.
Sorry, but I have no idea how to show detail records; that's why I tried the loop. In the first proc only one row will be returned, so I guess I don't need the loop there. In the second there will be multiple rows for one packing_id (packing_id is the common column for both blocks); please let me know how to do that.
> Your exception handlers for NO_DATA_FOUND and TOO_MANY_ROWS are useless, as these errors cannot be raised by an explicit cursor fetch.
I will remove these. Thanks
Sandy
Edited by: sandy162 on Apr 2, 2009 1:28 PM
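A minimal sketch of the advice above for the master block (assuming the same tables and block items as in the posted code; one possible shape, not necessarily the poster's final version): replace the cursor loop with a single SELECT ... INTO, which is also what makes the TOO_MANY_ROWS and NO_DATA_FOUND handlers meaningful.
PROCEDURE post_query IS
BEGIN
  -- exactly one attributes row is expected per packing_id,
  -- so fetch it with SELECT ... INTO instead of a loop
  SELECT pa.ship_to_last_name,
         pa.ship_to_first_name,
         pa.ship_to_address1,
         pa.ship_to_address2,
         pa.ship_to_city
  INTO   :PACKING_JOINED.SHIP_TO_LAST_NAME,
         :PACKING_JOINED.SHIP_TO_FIRST_NAME,
         :PACKING_JOINED.SHIP_TO_ADDRESS1,
         :PACKING_JOINED.SHIP_TO_ADDRESS2,
         :PACKING_JOINED.SHIP_TO_CITY
  FROM   packing_attributes pa
  WHERE  pa.packing_id = :PACKING_JOINED.PACKING_ID;
EXCEPTION
  WHEN too_many_rows THEN
    Message('more than one attributes row for this packing_id');
  WHEN no_data_found THEN
    Message('no attributes row for this packing_id');
END post_query;
For the detail rows, the usual Forms approach is not POST-QUERY code at all: base the DETAILS block on packing_details and create a master-detail relation on packing_id, so Forms queries the detail records itself.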

  • Can you have too many rows in a table?

How many rows would you consider to be too many for a single table, and how would you rearrange the data if asked?
Any answers?
    sukai

    I have some tables with over 100 million rows that still perform well, and I'm sure much larger tables are possible.  The exact number of rows would vary significantly depending on a number of factors including:
    Power of the underlying hardware
    Use of the table – frequency of queries and updates
    Number of columns and data types of the columns
    Number of indexes
    Ultimately the answer probably comes down to performance – if queries, updates, inserts, index rebuilds, backups, etc. all perform well, then you do not yet have too many rows.
The best way to rearrange the data would be horizontal partitioning.  It distributes the rows into multiple files, which provides a number of advantages, including the potential to perform well with larger numbers of rows.
    http://msdn.microsoft.com/en-us/library/ms190787.aspx
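As a rough illustration of that approach, here is a minimal T-SQL sketch (all table, column, function, and scheme names are hypothetical) that range-partitions a large table by date:
-- partition function: one partition per year (hypothetical boundaries)
CREATE PARTITION FUNCTION pf_by_year (date)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');

-- partition scheme: map all partitions to PRIMARY for simplicity;
-- in practice you would spread them across filegroups
CREATE PARTITION SCHEME ps_by_year
AS PARTITION pf_by_year ALL TO ([PRIMARY]);

-- create the large table directly on the scheme;
-- the partitioning column must be part of the clustered key
CREATE TABLE dbo.BigEvents
(
    EventId   bigint        NOT NULL,
    EventDate date          NOT NULL,
    Payload   nvarchar(400) NULL,
    CONSTRAINT PK_BigEvents PRIMARY KEY CLUSTERED (EventDate, EventId)
) ON ps_by_year (EventDate);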

  • Too many BPM data collection jobs on backend system

    Hi all,
We have found about 40,000 data collection jobs running on our ECC6 system - far too many.
We run about 12 solutions, all linked to the same backend ECC6 system. Most probably this is part of the problem, and we plan to scale down to one solution rather than the country-based approach.
But here we are now, and I have these questions.
1. How can I relate a BPM_DATA_COLLECTION job on ECC6 back to a particular solution? The job log gives me a monitor ID, but I can't relate that back to a solution.
2. If I deactivate a solution in the solution overview, does that immediately cancel the data collection for that solution?
3. In the monitoring schedule on a business process step, we sometimes have intervals defined as 5 minutes, sometimes 60. The strange thing is that the drop-down for that field does not always give us the same list of values. Even within one solution, in one step I can choose from a long list of intervals, while in the next step of the same business process I can only choose between blank and 5 minutes.
How is this defined?
    Thanks in advance,
    Rad.

    Hi,
How did you manage to get rid of this issue? I am facing the same one.
    Thanks,
    Manan

  • SQL subquery returning too many rows with Max function

Hello, I hope someone can help me; I've been working on this all day. I need to get the max value, along with the date and id that the max value is associated with, between specific date ranges. Here is my code; I have tried many different versions, but it still returns more than one id and date per id.
Thanks in advance
SELECT DISTINCT
       bw_s.id,
       avs.carProd,
       cd_s.RecordDate,
       cd_s.milkProduction AS MilkProd,
       cd_s.WaterProduction AS WaterProd
FROM   tblTest bw_s
INNER JOIN tblTestCp cd_s WITH (NOLOCK)
       ON  bw_s.id = cd_s.id
       AND cd_s.recorddate BETWEEN '08/06/2014' AND '10/05/2014'
INNER JOIN
       (SELECT id, MAX(CarVol) AS carProd
        FROM   tblTestCp
        WHERE  recorddate BETWEEN '08/06/2014' AND '10/05/2014'
        GROUP BY id) avs
       ON avs.id = bw_s.id
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155   0 225
    47790 2014-10-01   13444    0 0
    47790 2014-08-06   132111    10 100
    47790 2014-09-05   10000    500 145
    47790 2014-09-20   10000    800 500
    47791 2014-09-20   10000    300 500
    47791 2014-09-21   10001    400 500
    47791 2014-08-21   20001    600 500
And the result should be (the max carProd per id):
    id RecordDate carProd       MilkProd WaterProd
    47790 2014-10-05   132155  0 225
    47791 2014-08-21   20001    600 500

Help your readers help you. Remember that we cannot see your screen, do not know your data, do not understand your schema, and cannot test a query without a complete script. So: remove the derived table (to which you gave the alias "avs") and the associated columns from your query. Does that generate the correct results? I have my doubts, since you say "too many" and the derived table will generate a single row per ID. That suggests that the join between your first two tables is the source of the problem. In addition, the use of DISTINCT is generally a sign that the query logic is incorrect, that there is a schema issue, or that there is a misunderstanding of the schema.

  • Tag Query History mode returning too many rows of data

I am running a Tag Query from HQ to a plant site and want to limit the amount of data returned to the minimum required to display trends on a chart.  The minimum required is subjective, but it will be somewhere between 22 and 169 data points for a week's data.  Testing and viewing the result is needed to determine what an acceptable minimum is.
I built a Tag Query with a single tag and set it to History mode.  I set a seven-day period going midnight to midnight, and I set the row count to 22.  When I execute the query it returns 22 data points.  But when I go to visualization, I get 565 data points.  That is obviously not what I want, as I want a very slim dataset coming back from the IP21 server (to minimize the load on the pipe).
    Any suggestions?

    Hi Michael,
    it looks to me like you have enabled the "Use Screen Resolution" option in your display template or in the applet HTML. Setting this option makes the display template fetch as many rows as there are pixels in the chart area. Like setting a rowcount in the applet HTML as a param, this will override any rowcount limitations you have set at the Query Template level...
    Hope this helps,
    Sascha

  • Java exception "Too many rows" when launching

    Hi,
    I'm trying to run SQLDeveloper on AIX 5.2
The Java runtime is Java 1.5:
    $ ./java -version
    java version "1.5.0"
    Java(TM) 2 Runtime Environment, Standard Edition (build pap64devifx-20070725 (SR5a))
    IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 AIX ppc64-64 j9vmap6423-20070426 (JIT enabled)
    J9VM - 20070420_12448_BHdSMr
    JIT - 20070419_1806_r8
    GC - 200704_19)
    JCL - 20070725
When launching SQL Developer, I get more than 3000 error lines...
Here is the beginning:
    java.io.FileNotFoundException: <INSTALLATION-PATH>/jdev/extensions/oracle.jdeveloper.db.sqlplus.jar (Too many open files)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at oracle.ide.boot.JarDirs.getDirsImpl(JarDirs.java:75)
    at oracle.ide.boot.JarDirs.<init>(JarDirs.java:63)
    at oracle.ide.boot.SharedJarByPackage.<init>(SharedJarByPackage.java:19)
    at oracle.ide.boot.IdeSharedCodeSourceFactory.createCodeSource(IdeSharedCodeSourceFactory.java:30)
    at oracle.classloader.SharedCodeSourceFactory.create(SharedCodeSourceFactory.java:43)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:315)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:174)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:148)
    at oracle.classloader.SharedCodeSourceSet.subscribe(SharedCodeSourceSet.java:215)
    at oracle.classloader.PolicyClassLoader.addCodeSource(PolicyClassLoader.java:874)
    at oracle.ideimpl.extension.ExtensionManagerImpl.addToPolicyClassLoader(ExtensionManagerImpl.java:1464)
    at oracle.ideimpl.extension.ExtensionManagerImpl.addURLToClassPath(ExtensionManagerImpl.java:1401)
    at oracle.ideimpl.extension.ExtensionManagerImpl.mav$addURLToClassPath(ExtensionManagerImpl.java:116)
    at oracle.ideimpl.extension.ExtensionManagerImpl$4.addToClasspath(ExtensionManagerImpl.java:690)
    at javax.ide.extension.spi.BaseExtensionVisitor.addExtensionSourceToClasspath(BaseExtensionVisitor.java:278)
In case it helps: the splash screen is displayed for a short time and the progress bar starts, but only briefly, after which the process dies.
    Thanks,
    Christian

    java.io.FileNotFoundException: <INSTALLATION-PATH>/jdev/extensions/oracle.jdeveloper.db.sqlplus.jar (Too many open files)
    You have exceeded your max open files limit as determined by your operating system. You need to talk to your system administrators.

  • Too many open files in system cause database goes down

Hello experts, I am very worried about the following problem. I really hope you can help me.
Some server details:
OS: SUSE Linux Enterprise 10
RAM: 32 GB
CPU: Intel quad-core
DB: there are 3 RAC database instances (version 11.1.0.7) on the same host.
Problem: the database instances begin to report the error Linux-x86_64 Error: 23: Too many open files in system
Here are the other error messages:
    ORA-27505: IPC error destroying a port
    ORA-27300: OS system dependent operation:close failed with status: 9
    ORA-27301: OS failure message: Bad file descriptor
    ORA-27302: failure occurred at: skgxpdelpt1
    ORA-01115: IO error reading block from file 105 (block # 18845)
    ORA-01110: data file 105: '+DATOS/dac/datafile/auditoria.519.738586803'
    ORA-15081: failed to submit an I/O operation to a disk
At the same time, I searched /var/log/messages as the root user, and the errors there show the same problem:
    Feb 7 11:03:58 bls3-1-1 syslog-ng[3346]: Cannot open file /var/log/mail.err for
    writing (Too many open files in system)
    Feb 7 11:04:56 bls3-1-1 kernel: VFS: file-max limit 131072 reached
    Feb 7 11:05:05 bls3-1-1 kernel: oracle[12766]: segfault at fffffffffffffff0 rip
    0000000007c76323 rsp 00007fff466dc780 error 4
I think the cause is clear: I probably need to increase the fs.file-max kernel parameter, but I do not know what a good value would be. Here are my sysctl.conf and limits.conf files:
    sysctl.conf
    kernel.shmall = 2097152
    kernel.shmmax = 17179869184
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 6553600
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 4194304
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 4194304
    limits.conf
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536

    process limit
    bcm@bcm-laptop:~$ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 20
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 16382
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) unlimited
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited

  • Shared components - Report Queries - Too many rows?

I have a query that returns 70,000 rows, which the business owner wants printed as a reference manual. I built the report template (*.rtf) and saved the query. If I run the query with a limiting where clause for only 1,000 records, everything is fine. However, as I increase the number towards 10,000 records I no longer get a report; I get a blank IE page with no content.
Is this a timing issue, where it just takes too long to generate the report? Or is there a limit to the number of rows that can be returned by a report query/template?
I am not running the report to export as CSV - I am aware of the small-int limitation of Excel that prevents more than ~65,000 rows.
    Thanks in advance!
    -JT

    Hi,
    (APEX 3.1)
I was getting the same error. It was because I was trying to insert into a view on a synonym pointing to a table across a database link. This would also fire a trigger.
A simpler test case of the same architecture produced the same error. As there was no attempt to bind longs, I believe the error message is misleading.
I removed some of the layers by creating a local table which accepted the insert from the form, with an insert trigger that relayed the insert to the table in the second database, firing the trigger that did the work that was the real point of the exercise.
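In outline, the relay trigger looked something like this (all object names changed / hypothetical):
CREATE OR REPLACE TRIGGER trg_relay_insert
AFTER INSERT ON local_stage_table
FOR EACH ROW
BEGIN
  -- push the new row across the database link,
  -- bypassing the view-on-synonym layer
  INSERT INTO remote_table@remote_db (id, payload)
  VALUES (:NEW.id, :NEW.payload);
END;
/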
Hope that may be of some use.

  • Too Many Rows?

We had this error in our portal from a user executing a query:
not enough memory.;in executor::Executor in cube: pb1_zbm_c009
The cube has 26M rows in the fact table, and we found this in the TrexIndexServer.trc file:
    [1199630656] 2009-08-06 09:57:36.830 e QMediator    QueryMediator.cpp(00324) : 6952; Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: pb1_zbm_c009
    [1199630656] 2009-08-06 09:57:36.830 e SERVER_TRACE TRexApiSearch.cpp(05162) : IndexID: pb1_zbm_c009, QueryMediator failed executing query, reason: Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: pb1_zbm_c009
    [1149274432] 2009-08-06 13:13:01.136 e attributes   AggregateCalculator.cpp(00169) : AggregateCalculator returning out of memory with hash table size 11461113, key figures 4
    [1216416064] 2009-08-06 13:13:01.199 e executor     PlanExecutor.cpp(00273) : plan <plan_1247870610064+160@bwaprod2:35803> failed with rc 6952; AttributeEngine: not enough memory.
    [1216416064] 2009-08-06 13:13:01.199 e executor     PlanExecutor.cpp(00273) : -- returns for <plan_1247870610064+160@bwaprod2:35803>:
Our Basis group validated that the parameter max_cells_one_index = 40000000 is set, which is the default.
So what I'm wondering is: did my user actually request (11,461,113 x 4) = 45,844,452 cells, and so exceed the 40M limit?
    Mike

    Hi Michael,
that's possible, but can you get more information about your memory consumption?
Go to Landscape -> Services -> Load in the standalone tool trexadmin (at OS level).
If the data there are too old, have a look at the latest alerts in transaction TREXADMIN or in the standalone tool mentioned above.
You can also check this query via transaction RSRT and debug it. Try the query with and without BIA.
    Best regards,
    Jens
