Pipelined functions with large amounts of data

We would like to use pipelined functions as the source of our SELECT statements instead of tables. That way we can easily switch from our own tables to structures holding data from an external module, which we need for integration with other systems.
We know these functions are used in situations such as data warehousing to apply multiple transformations to data, but what will the performance be like for real-time access?
Does anyone have experience using pipelined functions with large amounts of data in interface systems?
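The pattern we have in mind looks like this (a minimal sketch with hypothetical names; the real function would read from the external module):
CREATE OR REPLACE TYPE t_row AS OBJECT (id NUMBER, name VARCHAR2(100));
/
CREATE OR REPLACE TYPE t_tab AS TABLE OF t_row;
/
CREATE OR REPLACE FUNCTION external_rows RETURN t_tab PIPELINED IS
BEGIN
   -- stand-in for the integration layer: any row source works here
   FOR c IN (SELECT id, name FROM some_source_table) LOOP
      PIPE ROW (t_row (c.id, c.name));
   END LOOP;
   RETURN;
END;
/
-- consumers can then switch between a real table and the function transparently:
SELECT * FROM TABLE (external_rows);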

It looks like you have already determined that the datatable object will be the best way to do this. When you create the object, you must enter the absolute path to your spreadsheet file. Then you have to create some kind of trigger (e.g. a pushbutton or timer) that will send a TRUE to the Import Data member of the datatable object. After these two things have been done, you will be able to access the data using the A3 - K133 data members.
Regards,
Michael Shasteen
Applications Engineering
National Instruments
www.ni.com/ask
1-866-ASK-MY-NI

Similar Messages

  • Pipelined function with huge volume

    Hi all,
    I have a table of 5 million rows with an average row length of 1K (a DSS system).
    My SGA is 2G and my PGA is 1G.
    I wonder if a pipelined function can support such a volume?
    Has anyone already used a pipelined function with a huge volume?
    TIA
    Yang

    Hello
    Well, just over a month later and we're pretty much sorted. Our pipelined functions were not the cause of the excessive memory consumption, and the processes are no longer consuming as much PGA as they were previously. Here's what I've learnt.
    1. Direct write operations on partitioned tables require two direct write buffers to be allocated per partition. By default, these buffers are 256K each, so it's 512K per partition. We had a table with 241 partitions, which meant we were inadvertently allocating 120MB of PGA without even trying. This is not a problem with pipelined functions.
    2. In 10.2 the total size of the buffers will be kept below the pga_aggregate_target, or 64k per buffer, whichever is higher. This is next to useless though: to really constrain the size of the buffers, you need a ridiculously small pga_aggregate_target.
    3. The size of the buffers can be as low as 32k and can be set using an undocumented parameter, "_ldr_io_size". In our environment (10.2.0.2 on Win2003 using AWE) I've set it to 32k, meaning there will be 64k worth of buffers allocated to each partition, significantly reducing the amount of PGA required.
    4. I never want to speak to Oracle support again. We had a severity 1 SR open for over a month, and it took the development team 18 days to get round to looking at the test case I supplied. Once they'd looked at it, they came back with the undocumented parameter, which worked, and the ridiculous suggestion that I set the PGA aggregate target to 50MB on a production box with 4GB and 300+ dedicated connections. Not only that, they told me that a pga_aggregate_target of 50MB was sensible, and did so in the most patronising way. Muppets.
    So in sum, our pipelined functions are working exceptionally well. We had some scary moments as we saw huge amounts of PGA being allocated, followed by ORA-04030 memory errors, but now it's sorted and chugging along nicely. The throughput is blistering, especially when running in parallel: 200m rows generated and inserted in around 1 hour.
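    For reference, the undocumented parameter from point 3 is set like this (a sketch only; it is version-specific and best changed only under Oracle Support's guidance):
    -- 32k direct write buffers, i.e. 64k per partition; undocumented, use with care
    ALTER SYSTEM SET "_ldr_io_size" = 32768 SCOPE = SPFILE;
    -- SCOPE = SPFILE means a restart is required before it takes effect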
    To give some background on what we're using pipelined functions for....
    We have a list of master records that have schedules associated with them. Those schedules need to be exploded out to an hourly level, and customised calendars need to be applied to them along with custom time-zone-style calculations. There are various lookups that need to be applied to the exploded schedules, and a number of value calculations based on various rules. I did originally implement this in straight SQL, but it was monstrous and ran like a dog. The SQL was highly complex and quite quickly became unmanageable. I decided to use pipelined functions because
    a) It immensely simplified the logic
    b) It gave us a very neat way to centralise the logic so it can be easily used by other systems - PL/SQL and SQL
    c) We can easily see what it is doing and make changes to the logic without affecting execution plans etc
    d) It's been exceptionally easy to tune using DBMS_PROFILER
    So that's that. I hope it's of use to anyone who's interested.
    I'm off to get a "pipelined functions rule" tattoo on my face.
    David
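    As an aside, the parallel throughput described above depends on the function being declared parallel-enabled over a ref-cursor input; a minimal sketch (hypothetical names, reusing the t_row/t_tab types sketched earlier):
    CREATE OR REPLACE FUNCTION explode_schedules (p_src SYS_REFCURSOR)
       RETURN t_tab PIPELINED
       PARALLEL_ENABLE (PARTITION p_src BY ANY)
    IS
       v_id   NUMBER;
       v_name VARCHAR2(100);
    BEGIN
       LOOP
          FETCH p_src INTO v_id, v_name;
          EXIT WHEN p_src%NOTFOUND;
          PIPE ROW (t_row (v_id, v_name));  -- real code would explode schedules to hourly rows
       END LOOP;
       CLOSE p_src;
       RETURN;
    END;
    /
    -- driven in parallel from plain SQL:
    SELECT *
    FROM   TABLE (explode_schedules (CURSOR (SELECT /*+ PARALLEL(ms, 4) */ id, name
                                             FROM   master_schedules ms)));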

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which holds 10 GB of data (media files).
    Unfortunately, although all the data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to come back over Wi-Fi. If the restore is interrupted (I reached the halfway point during the night) because I go to work or take the dog for a walk, I end up, of course, on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with a large amount of data, or pause a restore?

    You can use classifications, but there is no automatic archive feature like that on web apps.
    In terms of the blog, as I have said to everyone who has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out; not hard at all.

  • Working with large amounts of data

    Hi! I am writing a peer-to-peer video player and I need to operate on huge amounts of data. The downloaded data should be stored in memory for sharing with other peers. Some video files can be 2 GB and more, so keeping all this data in RAM is not the best solution, I think =)
    Since the Flash player does not have access to the file system, I cannot save this data in temporary files.
    Is there a solution to this problem?

    No ideas? very sad ((

  • Pipelined functions with spatial data

    Hi,
    I've been trying to use pipelined functions (using the TABLE and CAST operators to query data from them) to retrieve large amounts of spatial data.
    I've followed the examples on Metalink, and they work fine. My problem arises when I apply similar functions to query data using SDO_FILTER: I've been trying to pipe an mdsys.sdo_geometry datatype (ref cursor) into the function, and it returns null.
    Are spatial datatypes supported for use in pipelined functions, and with the TABLE and CAST operators?
    If they are, where can I find further reading/reference on the subject?
    thanks
    santosh sewlal

    Check out http://otn.oracle.com/products/spatial/pdf/mapviewerfaq_31.pdf
    or
    You can look for a third party solution that can draw maps.
    Then you call out to this component from Forms.
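    For the baseline pattern (separate from the SDO_FILTER question above), an object type can carry an MDSYS.SDO_GEOMETRY attribute through a pipelined function like any other column; a hedged sketch with hypothetical names:
    CREATE OR REPLACE TYPE t_geo_row AS OBJECT (id NUMBER, geom MDSYS.SDO_GEOMETRY);
    /
    CREATE OR REPLACE TYPE t_geo_tab AS TABLE OF t_geo_row;
    /
    CREATE OR REPLACE FUNCTION filtered_geoms RETURN t_geo_tab PIPELINED IS
    BEGIN
       -- spatial_source is a placeholder table with an SDO_GEOMETRY column
       FOR c IN (SELECT id, geom FROM spatial_source) LOOP
          PIPE ROW (t_geo_row (c.id, c.geom));
       END LOOP;
       RETURN;
    END;
    /
    SELECT * FROM TABLE (filtered_geoms);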

  • Using pipelined functions with bind variables in Apex...

    Hi all:
    I have a table which has about 10 million records, and it hangs the system when trying to retrieve data from that table... so what I have done is create a pipelined function and then retrieve the data using a SQL statement... when I try to use bind variables to filter by date and location, it binds on the location but not on the date... can anyone help me with this please!
    Help greatly appreciated !
    Thanks in advance !

    Hi Denes:
    CREATE OR REPLACE TYPE ohe1 AS OBJECT (
       IMLITM NCHAR(50), IMAITM NCHAR(50), IMDSC1 NCHAR(60), COUNCS NUMBER(22), LIPQOH NUMBER(22),
       LIMCU NCHAR(24), LILOCN NCHAR(40), LILOTN NCHAR(60), LILOTS NCHAR(2), IOMMEJ NUMBER(22));
    /
    CREATE OR REPLACE TYPE OHE AS TABLE OF ohe1;
    /
    CREATE OR REPLACE FUNCTION GET_ohe
       RETURN OHE PIPELINED
    IS
       m_rec ohe1 := ohe1 (NULL, NULL, NULL, 0, 0, NULL, NULL, NULL, NULL, 0);
    BEGIN
       FOR c IN (SELECT F1.LITM LITM, F1.AITM AITM, F1.DSC1 DSC1, F5.UNCS UNCS,
                        F21.QOH QOH, F21.MCU MCU, F21.LOCN LOCN, F21.LOTN LOTN, F21.LOTS LOTS,
                        F8.MEJ MEJ
                 FROM   F1 F1, F2 F2, F21 F21, F5 F5, F8 F8
                 WHERE  F5.EDG = '07' AND F21.QOH != 0 AND F2.IBITM = F1.IMITM
                 AND    F21.ITM = F2.ITM AND F21.ITM = F5.ITM AND F21.MCU = F5.MCU
                 AND    F21.MCU = F2.MCU AND F21.LOTN = F8.LOTN AND F21.MCU = F8.MCU)
       LOOP
          -- fill the object's attributes (the original post mixed m_rec and my_record,
          -- and mixed cursor aliases with attribute names; made consistent here)
          m_rec.IMLITM := c.LITM;
          m_rec.IMAITM := c.AITM;
          m_rec.IMDSC1 := c.DSC1;
          m_rec.COUNCS := c.UNCS;
          m_rec.LIPQOH := c.QOH;
          m_rec.LIMCU  := c.MCU;
          m_rec.LILOCN := c.LOCN;
          m_rec.LILOTN := c.LOTN;
          m_rec.LILOTS := c.LOTS;
          m_rec.IOMMEJ := c.MEJ;
          PIPE ROW (m_rec);
       END LOOP;
       RETURN;
    END;
    /
    SELECT IMLITM LITM, IMAITM AITM, IMDSC1 DSC1, COUNCS * .0001 UNCS, LIPQOH * .0001 QOH,
           (COUNCS * .0001) * (LIPQOH * .0001) AMOUNT,
           LIMCU MCU, LILOCN LOCN, LILOTN LOTN, LILOTS LOTS,
           jdate (DECODE (IOMMEJ, 0, 100001, IOMMEJ)) MEJ
    FROM   TABLE (GET_ohe)
    WHERE  TRIM (LIMCU) = TRIM (:OHE_BRANCHID)
    AND    (jdate (DECODE (IOMMEJ, 0, 10001, IOMMEJ)) >= :FROMEXPDT
    AND     jdate (DECODE (IOMMEJ, 0, 10001, IOMMEJ)) <= :TOEXPDATE);
    MEJ is a Julian date, and I am trying to convert it into a date using the function jdate. The pipelined function is created without any errors,
    and I am able to get the data with the correct branch location bind variable; the only problem is that it is not binding the date filters in the SQL.
    Thanks
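    One hedged suggestion: rather than filtering the piped rows from outside, the branch and date bounds could be passed into the function as parameters, so Apex binds them directly. A sketch wrapping the function above (names and the jdate conversion are taken from the post; treat it as untested):
    CREATE OR REPLACE FUNCTION get_ohe_filtered (p_branch IN VARCHAR2,
                                                 p_from   IN DATE,
                                                 p_to     IN DATE)
       RETURN OHE PIPELINED
    IS
    BEGIN
       FOR c IN (SELECT *
                 FROM   TABLE (GET_ohe)
                 WHERE  TRIM (LIMCU) = TRIM (p_branch)
                 AND    jdate (DECODE (IOMMEJ, 0, 10001, IOMMEJ)) BETWEEN p_from AND p_to)
       LOOP
          PIPE ROW (ohe1 (c.IMLITM, c.IMAITM, c.IMDSC1, c.COUNCS, c.LIPQOH,
                          c.LIMCU, c.LILOCN, c.LILOTN, c.LILOTS, c.IOMMEJ));
       END LOOP;
       RETURN;
    END;
    /
    -- region source: SELECT ... FROM TABLE (get_ohe_filtered (:OHE_BRANCHID, :FROMEXPDT, :TOEXPDATE))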

  • Dealing with large amounts of data

    Hi
    I am new to using Flex and BlazeDS. I can see in the FAQ that binary data transfer from the server to a Flex app is more efficient. My question is: is there a way to build a Flex data-bound control (e.g. a datagrid) which binds to a SQL query, web service, or remoting call on the server side and then displays an unbounded amount of data as the user pages or scrolls? Or does the developer have to write code from scratch to handle paging, or an infinite scrollbar, by asking the server for chunks of data at a time?

    You have to write your own paginating grid. It's easy to do: just make a canvas, throw a grid and some buttons on it, then when the user clicks through to the next page, make a request to the server and, when you have the result, set it as the new data model for the grid.
    I would discourage you from returning an unbounded amount of data to the user; provide search functionality plus pagination.
    Hope that helps.

  • Pipelined Function with execute immediate

    Hello Experts,
    I have created a pipelined function with EXECUTE IMMEDIATE, due to the requirements below:
    1) The columns in the WHERE clause are passed dynamically.
    2) I want to know the data stored in those dynamic columns.
    3) I want to use it in a report, so I don't want to insert it into a table.
    I have created a TYPE; then through EXECUTE IMMEDIATE I build the query, and the result of that query is stored in the TYPE.
    But when calling the function I am getting
    ORA-00932: inconsistent datatypes: expected - got -
    Below are my function and type; let me know where I am going wrong, and whether my logic is correct.
    CREATE OR REPLACE TYPE OBJ_FPD AS OBJECT
                      (LOW_PLAN_NO VARCHAR2 (40),
                       FPD VARCHAR2 (5),
                       SERIAL_NO NUMBER,
                       CEDIA_CODE VARCHAR2 (2),
                       DT DATE);
    CREATE OR REPLACE TYPE FPD_TBL_TYPE AS TABLE OF OBJ_FPD;
    CREATE OR REPLACE FUNCTION FUNC_GET_FPD_DATE (P_LOW_PLAN_NO    VARCHAR2,
                                                  P_CEDIA_CODE     VARCHAR2,
                                                  P_SERIAL_NO      NUMBER)
       RETURN FPD_TBL_TYPE
       PIPELINED
    AS
       CURSOR C1
       IS
              SELECT 'FPD' || LEVEL TBL_COL
                FROM DUAL
          CONNECT BY LEVEL <= 31;
       V_STR        VARCHAR2 (5000);
       V_TBL_TYPE   FPD_TBL_TYPE;
    BEGIN
       FOR X IN C1
       LOOP
          V_STR :=
                'SELECT A.low_PLAN_NO,
               A.FPD,
               A.SERIAL_NO,
               A.cedia_code,
               TO_DATE (
                     SUBSTR (FPD, 4, 5)
                  || ''/''
                  || TO_CHAR (C.low_PLAN_PERIOD_FROM, ''MM'')
                  || ''/''
                  || TO_CHAR (C.low_PLAN_PERIOD_FROM, ''RRRR''),
                  ''DD/MM/RRRR'')
                  DT FROM ( SELECT low_PLAN_NO, '
             || ''''
             || X.TBL_COL
             || ''''
             || ' FPD, '
             || X.TBL_COL
             || ' SPTS, SERIAL_NO, cedia_code FROM M_low_PLAN_DETAILS WHERE NVL('
             || X.TBL_COL
             || ',0) > 0 AND SERIAL_NO = '
             || P_SERIAL_NO
             || ' AND cedia_code = '
             || ''''
             || P_CEDIA_CODE
             || ''''
             || ' AND low_PLAN_NO = '
             || ''''
             || P_LOW_PLAN_NO
             || ''''
             || ') A,
               M_low_PLAN_DETAILS B,
               M_low_PLAN_MSTR C
         WHERE     A.low_PLAN_NO = B.low_PLAN_NO
               AND A.cedia_code = B.cedia_code
               AND A.SERIAL_NO = B.SERIAL_NO
               AND B.low_PLAN_NO = C.low_PLAN_NO
               AND B.CLIENT_CODE = C.CLIENT_CODE
               AND B.VARIANT_CODE = C.VARIANT_CODE
    CONNECT BY LEVEL <= SPTS';
          EXECUTE IMMEDIATE V_STR INTO V_TBL_TYPE;
          FOR I IN 1 .. V_TBL_TYPE.COUNT
          LOOP
             PIPE ROW (OBJ_FPD (V_TBL_TYPE (I).LOW_PLAN_NO,
                                V_TBL_TYPE (I).FPD,
                                V_TBL_TYPE (I).SERIAL_NO,
                                V_TBL_TYPE (I).CEDIA_CODE,
                                V_TBL_TYPE (I).DT));
          END LOOP;
       END LOOP;
       RETURN;
    EXCEPTION
       WHEN OTHERS
       THEN
          RAISE_APPLICATION_ERROR (-20000, SQLCODE || ' ' || SQLERRM);
          RAISE;
    END;
    Waiting for your views.
    Regards,

    Ora Ash wrote:
    Hello Experts,
    I have created a pipelined function with EXECUTE IMMEDIATE, due to the requirements below:
    1) The columns in the WHERE clause are passed dynamically.
    No, that's something you've introduced, and it is due to poor database design. You appear to have columns on your table called FPD1, FPD2 ... FPD31. The columns do not need to be 'passed dynamically'.
    2) I want to know the data stored in those dynamic columns.
    And you can know the data without it being dynamic.
    3) I want to use it in a report, so I don't want to insert it into a table.
    That's fine, though there's no reason to use a pipelined function.
    You also have a pointless exception handler, which masks any real errors.
    I'm not quite sure what the point of your "connect by" is, as we don't have your tables or data, or know for sure what the expected output is.
    However, in terms of handling the 'dynamic' part that you've introduced, you would be looking at something along the following lines, using a static query that needs no dynamic code and no pipelined function...
    with x as (select level as dy from dual connect by level <= 31)
    select a.low_plan_no
          ,a.fpd
          ,a.serial_no
          ,a.cedia_code
          ,trunc(c.low_plan_period_from)+a.dy-1 as dt
    from  (select low_plan_no
                 ,dy
                 ,'FPD'||dy as fpd
                 ,spts
                 ,serial_no
                 ,cedia_code
           from (
                 select low_plan_no
                       ,x.dy
                       ,case x.dy when 1 then fpd1
                                  when 2 then fpd2
                                  when 3 then fpd3
                                  when 4 then fpd4
                                  when 5 then fpd5
                                  when 6 then fpd6
                                  when 7 then fpd7
                                  when 8 then fpd8
                                  when 9 then fpd9
                                  when 10 then fpd10
                                  when 11 then fpd11
                                  when 12 then fpd12
                                  when 13 then fpd13
                                  when 14 then fpd14
                                  when 15 then fpd15
                                  when 16 then fpd16
                                  when 17 then fpd17
                                  when 18 then fpd18
                                  when 19 then fpd19
                                  when 20 then fpd20
                                  when 21 then fpd21
                                  when 22 then fpd22
                                  when 23 then fpd23
                                  when 24 then fpd24
                                  when 25 then fpd25
                                  when 26 then fpd26
                                  when 27 then fpd27
                                  when 28 then fpd28
                                  when 29 then fpd29
                                  when 30 then fpd30
                                  when 31 then fpd31
                        else null
                        end as spts
                       ,serial_no
                       ,cedia_code
                 from   x cross join m_low_plan_details
                 where  serial_no = p_serial_no
                 and    cedia_code = p_cedia_code
              and    low_plan_no = p_low_plan_no
                 )
           where  nvl(spts,0) > 0
          ) A
          join m_low_plan_details B on (    A.low_plan_no = B.low_plan_no
                                        and A.cedia_code = B.cedia_code
                                        and A.serial_no = B.serial_no)
          join m_low_plan_mstr C on (    B.low_plan_no = C.low_plan_no
                                     and B.client_code = C.client_code
                                     and B.variant_code = C.variant_code)
    connect by level <= spts;
    ... so just remind us again why you think it needs to be dynamic?
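    For completeness, if you are on 11g or later, the long CASE can also be written with UNPIVOT (a sketch against the same hypothetical table; UNPIVOT skips NULL columns by default):
    select low_plan_no, dy, 'FPD'||dy as fpd, spts, serial_no, cedia_code
    from   m_low_plan_details
    unpivot (spts for dy in (fpd1 as 1, fpd2 as 2, fpd3 as 3 /* ... through fpd31 as 31 */))
    where  spts > 0;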

  • Putting different tables with large amounts of data together

    Hi,
    I need to put different kinds of tables together:
    Example: 'table1' with columns DATE, IP, TYPEN, X1, X2, X3
    going into 'table0' with columns DATE, IP, TYPENUMBER.
    TYPEN in table1 needs to be inserted into TYPENUMBER in table0, but through a function which transforms it to some other value.
    There are several other tables like 'table1', but with slightly different columns, that need to be inserted into the same table ('table0').
    The amount of data in each table is quite huge, so the procedure should be done in small pieces and efficiently.
    Should/Could I use data pump for this?
    Thank you!

    user13036557 wrote:
    How should I continue with this then? Should I delete the columns I don't need and transform the data in the table first and then use Data Pump, or should I simply make a procedure going through every row (in smaller pieces) of 'table1' and inserting it into 'table0'?
    You have both options. Please test both of them, calculate the time to complete, and implement the better one.
    Regards
    Rajesh
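    For what it's worth, the straight-insert option is usually a single set-based statement rather than a row-by-row procedure; a minimal sketch (transform_typen is the hypothetical mapping function from the question, and "DATE" stands in for whatever the real date column is called):
    INSERT /*+ APPEND */ INTO table0 ("DATE", ip, typenumber)
    SELECT t1."DATE", t1.ip, transform_typen (t1.typen)
    FROM   table1 t1;
    COMMIT;
    -- repeatable per date range (or per source table) if it must run in small pieces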

  • Pipelined function with Union clause(Invalid Number(ORA-01722) error )

    Hi,
    I have a pipelined function.
    I am fetching the data using a REF cursor.
    The query has two parts: query1 and query2.
    When I open the cursor with query1 UNION query2, it gives me an invalid number (ORA-01722) error,
    but when I open the cursor separately for query1 and for query2, I get the result set for each query.
    query1 and query2 also run fine with UNION in the SQL editor.
    But when I put them into two variables (var1 := 'query1' and var2 := 'query2') and try to open the cursor with var1 UNION var2, it gives the error.
    When I instead set query2 to 'SELECT ..... FROM dual' and then open the cursor for query1 UNION query2,
    I get the result of query1 plus one extra row for query2 (dual).
    The FROM table set is the same for query1 and query2.
    Can anyone please help me understand why this invalid number (ORA-01722) error occurs?

    query1 := 'SELECT to_char(st.store_no) STORE_NUMBER,pp.name PROGRAM_NAME, ''HBS'' DISPENSING_SYSTEM,
    pt.PATIENT_SRC_NO SOURCE_ID, null RX2000_PATIENT_IDENTIFIER, pt.last_name PATIENT_LAST_NAME, pt.first_name
    PATIENT_FIRST_NAME, to_char(LPAD (pppd.rx_number, 7, '' '')|| ''-''|| LPAD (pppd.refill_number, 2, 0)) RX_REFILL
    FROM
    pega_patient_program_dispense pppd,pega_patient_program_reg pppr,health_care_profs hcp1,
    health_care_profs hcp2,prescription_dispenses pd,prescriptions p, pega_program_status_reason ppsr ,master_doctors md,
    prescription_disp_orders pdo, hbs_shipment_extract hse,stores st,master_stores ms,pega_programs pp,patients pt,master_patients mpt
    WHERE
    pppr.referring_hcp_id=md.id(+) and md.dispensing_system_id=hcp1.hcp_id and
    pppr.ID=pppd.PEGA_PATIENT_PROGRAM_REG_ID and pppd.prescription_dispense_id=pd.id and pd.prescriptions_id=p.id and
    p.hcp_id=hcp2.hcp_id and ppsr.ID=pppd.PEGA_PROGRAM_STATUS_REASON_ID and pppd.prescription_dispense_id = pdo.prescriptions_dispenses_id(+) AND
    pdo.shipment_number = hse.shp_no(+) and pppd.store_id=ms.id and st.id=ms.dispensing_system_id and pppr.pega_program_id=pp.id
    and pppr.patient_id=mpt.id and pt.patient_id=mpt.dispensing_system_id and pppd.dispensing_system=''HBS'' and pp.name LIKE ''REV%''
    AND
    ((pppd.prescription_dispense_id IS NOT NULL AND pppd.quantity_dispensed > 0 AND pppd.reported_date IS NULL AND
    pppd.src_delete_date IS NULL AND ((pppr.program_specific_patient_code IS NULL ) OR (hcp1.last_name IS NULL) OR
    (hcp1.first_name IS NULL) OR (hcp2.first_name IS NULL) OR (hcp2.last_name IS NULL) OR (hcp2.address_1 IS NULL) OR
    (hcp2.city IS NULL) OR (hcp2.state_cd IS NULL) OR (hcp2.zip_1 IS NULL)OR (pppr.referral_date is NULL) OR
    (hcp1.state_cd IS NULL) OR (hcp1.zip_1 IS NULL)
    OR (hcp2.zip_1 IS NULL) OR (pppr.last_status_change_date IS NULL) OR (pppr.varchar_1 IS NULL) OR
    (pppr.varchar_7 IS NULL OR pppr.varchar_7 LIKE ''NOT AVAILABLE AT REFERRAL%'') OR (pppr.varchar_5 IS NULL OR
    pppr.varchar_5 LIKE ''NOT AVAILABLE AT REFERRAL%'')) ) AND ppsr.INCLUDE_ON_EXCEPTION_RPT_IND=''Y'')
    OR
    ((pppd.authorization_number IS NULL OR NVL(TRUNC(pdo.shipped_date),TRUNC(hse.shp_act_date_dt)) NOT BETWEEN
    TRUNC(pppd.date_1) AND TRUNC(pppd.date_2)) OR (pppd.varchar_4 IS NULL OR NVL(TRUNC(pdo.shipped_date),
    TRUNC(hse.shp_act_date_dt)) NOT BETWEEN TRUNC(pppd.date_3) AND TRUNC(pppd.date_5)))
    OR
    (pppr.pega_program_competitors_id IS NULL AND (ppsr.manufacturer_status=''TRIAGE'' AND ppsr.manufacturer_reason=''COMPETITOR''))
    OR
    (ppsr.manufacturer_reason IS NULL)';
    query2 := ' SELECT ''150'' STORE_NUMBER,''test'' PROGRAM_NAME , ''RX2000'' DISPENSING_SYSTEM
    ,''01'' SOURCE_ID, null RX2000_PATIENT_IDENTIFIER, ''test'' PATIENT_LAST_NAME, ''test'' PATIENT_FIRST_NAME,
    ''rx01'' RX_REFILL
    FROM
    pega_patient_program_reg pppr,health_care_profs hcp1, pega_program_status_reason ppsr ,
    master_doctors md,stores st,master_stores ms,pega_programs pp,patients pt,master_patients mpt
    WHERE
    pppr.referring_hcp_id=md.id(+) and md.dispensing_system_id=hcp1.hcp_id and
    ppsr.ID=pppr.REFERRAL_STATUS_ID and st.store_type=''M'' and pppr.store_id=ms.id and st.id=ms.dispensing_system_id
    and pppr.pega_program_id=pp.id and pppr.patient_id=mpt.id and pt.patient_id=mpt.dispensing_system_id and pp.name LIKE ''REV%''
    AND
    ((((pppr.program_specific_patient_code IS NULL ) OR (hcp1.last_name IS NULL) OR (hcp1.first_name IS NULL) OR
    (pppr.referral_date is NULL) OR (hcp1.state_cd IS NULL) OR (hcp1.zip_1 IS NULL) OR (pppr.last_status_change_date IS NULL)
    OR (pppr.varchar_1 IS NULL) OR (pppr.varchar_7 IS NULL) OR (pppr.varchar_5 IS NULL OR pppr.varchar_5 LIKE
    ''NOT AVAILABLE AT REFERRAL%'')) ) AND ppsr.INCLUDE_ON_EXCEPTION_RPT_IND=''Y'')
    OR
    (pppr.pega_program_competitors_id IS NULL AND (ppsr.manufacturer_status=''TRIAGE'' AND ppsr.manufacturer_reason=''COMPETITOR''))
    OR
    (ppsr.manufacturer_reason IS NULL)
    AND
    pppr.id NOT IN(select pppd.PEGA_PATIENT_PROGRAM_REG_ID from PEGA_PATIENT_PROGRAM_DISPENSE pppd)';
    But if I put
    query2 := ' SELECT ''150'' STORE_NUMBER,''test'' PROGRAM_NAME , ''RX2000'' DISPENSING_SYSTEM
    ,''01'' SOURCE_ID, null RX2000_PATIENT_IDENTIFIER, ''test'' PATIENT_LAST_NAME, ''test'' PATIENT_FIRST_NAME,
    ''rx01'' RX_REFILL
    FROM DUAL';
    then it returns a result.
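    A note for anyone hitting the same thing: ORA-01722 almost always means an implicit VARCHAR2-to-NUMBER conversion, and a UNION can trigger one that the branches run separately never hit. Two small sketches (stores is the table from the query above; everything else runs against dual):
    -- implicit conversion: the string side is TO_NUMBERed and fails on non-numeric data
    SELECT * FROM dual WHERE '12a' = 12;          -- raises ORA-01722: invalid number
    -- defensive habit: cast every UNION column explicitly so both branches agree
    SELECT TO_CHAR (store_no) AS store_number FROM stores
    UNION
    SELECT '150'              AS store_number FROM dual;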

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx. 80-90K rows with 7-8 columns), and the queries for these reports mainly use the audit tables and the resource form tables. Now when we try to generate a report, it works fine in HTML, where the report generates directly in the console, but when we try to generate the same report and save it as PDF or Excel, it fails with the following error.
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems that when the query processing takes some time, we face this issue. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it's in production and we need it ASAP.)
    Thanks in Advance

  • How to deal with large amounts of data

    Hi folks,
    I have a web page that needs 3 maps to serve. Each map holds about 150,000 entries, and together they use about 100MB. For some users with lots of data (e.g. 1,000,000 entries), it may use up to 1GB of memory. If a few of these high-volume users log in at the same time, it can bring the server down. The data comes from files; I cannot read it on demand because that would be too slow. Loading the data into maps gives me very good performance, but it does not scale. I am thinking of serializing the maps and deserializing them when I need them. Is that my only option?
    Thanks in advance!

    JoachimSauer wrote:
    I don't buy the "too slow" argument.
    I'd put the data into a database. They are built to handle that amount of data.
    ++

  • XY graphing with large amount of data

    I was looking into graphing a fairly substantial amount of data. It arrives in bursts over serial.
    What I have is 30 values corresponding to remote data sensors. The data for each comes across together, so I have no problem having the data grouped. It is, effectively, in an array of size 30. I've been wanting to place this data in a graph.
    The period between receptions varies between 1/5 second and 2 minutes (it's wireless and mobile, so signal strength varies). This eliminates the waveform graph, as the time isn't constant and there's a lot of data (so no random NaN insertion). This leaves me with an XY graph.
    Primary interest is in the last 10 minutes or so, with a desire to see up to an hour or two back.
    The data is fairly similar, and I'm trying to split it into groups of 4 or 5 sets of ordered pairs per graph.
    Problems:
    1. If data comes in slowly enough, everything is OK, but the time needed to synchronously update the graph(s) can often exceed the time it takes to fully receive the chunk of data which contains these data points. Updating asynchronously is useless, as the graphs need to stay reasonably in tune with the most recent data received. I can't have exponential growth in the gap between the time represented on the graph and the time the last bit of data was received.
    2. I could use some advice on making older data points sparser so that older data can still be viewed, with a sort of 'decay' under which I don't value the 1/5-second resolution of old data at all.
    I'm most concerned with solving problem 1, but random suggestions on 2 are most welcome.

    I didn't quite get the first question. Could you try to find out where exactly the time is consumed in your program and then refine your question?
    To the second question (which may also solve the first): you can store all the data to a file as it arrives. Keep the most recent data in, let's say, a shift register of the update loop. Add a data point corresponding to the start of the measurement to the data and wire it to an XY graph. Make the X scrollbar visible. Handle the event for XY graph: X scale change. When the scale is changed, i.e. the user scrolls the scroll bar, load data from the file and display it on the XY graph. This is a little simplified; in practice it's a little more complicated.
    In my project, I am writing an XY graph XControl to handle a similar issue. I have huge data sets, i.e. multichannel recordings with millisecond resolution over many hours. One data set may be several gigabytes, so the data won't fit in the main memory of the computer at once. I wrote an XControl which contains an XY graph. Each time the time scale is changed, i.e. the scroll bar is scrolled, the XControl generates a user event if it doesn't have enough data to display the time slice requested. The user event is handled outside the XControl by loading the appropriate set of data from disk and displaying it on the XControl. The user event can be generated a while before the XControl runs out of data. This way the data is loaded a bit in advance, which allows seamless scrolling on the XY graph. One must note that front panel updates must be turned off in the XControl while the data is updated and turned back on after the update has finished; otherwise the XY graph will flicker annoyingly.
    Tomi Maila

  • Apex3.2.1 pipelined functions with parameters send cpu to 100 percent

    I have just installed 11g and exported a schema from 10g into it.
    When I run Apex 3.2.1 and open a page with a flash chart that runs off a table object populated by a pipelined function, the CPU goes to 100%.
    I have tracked the problem down to the parameters of the pipelined function: if I use bind parameters (:P640_YEARS, :P640_EMPLOYER_ID) the problem happens, but if I hard-code the values (2010, 99) all is fine.
    This happens with every flash chart we have, which makes our application and database unusable.
    Any ideas?
    Derek

    Thanks very much for the posts. I took the opportunity to upgrade to Apex 4.0.2 as suggested by Patrick, and the application works fine again now.
    Interestingly, I did re-analyse the statistics, but only as a reaction to the problem; it should have been done regardless.
    I did consider re-writing the pipelined functions, but to be honest it would have been difficult as they are extremely complex, and I wasn't looking forward to the task. I have become quite a fan of pipelined functions for building flash charts, dashboards etc.; as always, it's about tuning the SQL.
    Thanks very much to all
    Derek
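    For anyone who cannot upgrade immediately: the usual stop-gap is to give the optimizer a row estimate for the function, since with bind parameters it otherwise guesses blindly. A sketch (CARDINALITY is an undocumented hint, and my_chart_fn is a placeholder for the real function):
    SELECT /*+ CARDINALITY(t 500) */ *
    FROM   TABLE (my_chart_fn (:P640_YEARS, :P640_EMPLOYER_ID)) t;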

  • NI Reports crashes with large amounts of data

    I'm using LabVIEW 6.02 and am using some of the NI Reports tools to generate a report. I create what can be a large data array, which is then sent to the "append table to report" VI and printed. The program works fine, although it is slow, as long as the data that I send to the "append table to report" VI is less than about 30 kB. If the amount of data is too large, LabVIEW terminates execution with no error code or message displayed. Has anyone else had a similar problem? Does anyone know what is going on, or better yet, how to fix it?

    Hello,
    I was able to print a 100x100 element array of 5-character strings (~50 kB of data) without receiving a crash or error message. However, it did take a LONG time...about 15 minutes for the VI to run, and another 10 minutes for the printer to start printing. This makes sense, because 100x100 elements is a gigantic amount of data to send into the NI-Reports ActiveX object that is used for printing. You may want to consider breaking up your data into smaller arrays and printing those individually, instead of trying to print the giant array at once.
    I hope these suggestions help you out. Good luck with your application, and have a pleasant day.
    Sincerely,
    Darren N.
    NI Applications Engineer
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman
