Bulk Collect with FORALL not working - Not enough values error

Hi,
I am trying to copy data from one table to another; the two tables have different numbers of columns. I am doing the following, but it throws a "not enough values" error.
Table A has more than 10 million records, so I am using bulk collect instead of insert into ... select from.
TABLE A (has more columns - like 25)
c1 Number
c2 number
c3 varchar2
c4 varchar2
c25 varchar2
TABLE B (has less columns - like 7)
c1 Number
c2 number
c3 varchar2
c4 varchar2
c5 number
c7 date
c10 varchar2
declare
  TYPE c IS REF CURSOR;
  v_c c;
  v_Sql VARCHAR2(2000);
  TYPE array is table of B%ROWTYPE;
  l_data array;
begin
  v_Sql := 'SELECT c1, c2, c3, c4, c5, c7, c10 FROM A ORDER BY c1';
  OPEN v_c FOR v_Sql;
  LOOP
    FETCH v_c BULK COLLECT INTO l_data LIMIT 100000;
    FORALL i in 1 .. l_data.count
      INSERT INTO B
      VALUES l_data(i);
  END LOOP;
  COMMIT;
exception
  WHEN OTHERS THEN
    ROLLBACK;
    dbms_output.put_line('Exception Occurred: ' || SQLERRM);
END;
When I execute this, I am getting
PL/SQL: ORA-00947: not enough values
Any suggestions, please? Thanks in advance.

Table A has more than 10 million records, so I am using bulk collect instead of insert into ... select from.
That doesn't make sense to me. An INSERT ... SELECT is going to be more efficient, more maintainable, easier to write, and easier to understand.
INSERT INTO b( c1, c2, c3, c4, c5, c7, c10 )
  SELECT c1, c2, c3, c4, c5, c7, c10
    FROM a;
is going to be faster, use fewer resources, be far less error-prone, and have a far more obvious purpose when some maintenance programmer comes along than any PL/SQL block that does the same thing.
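If the copy really is tens of millions of rows loaded in one shot, a direct-path variant of the same statement may also be worth considering. This is only a sketch, assuming a direct-path load is acceptable for B (it locks the table for the duration and appends above the high-water mark):
-- hypothetical direct-path variant of the same copy; the APPEND hint
-- requests a direct-path insert, which bypasses the buffer cache
INSERT /*+ APPEND */ INTO b (c1, c2, c3, c4, c5, c7, c10)
  SELECT c1, c2, c3, c4, c5, c7, c10
    FROM a;
COMMIT;  -- a direct-path insert must be committed before the table is queried again in the same session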
If you insist on using PL/SQL, what version of Oracle are you using? You should be able to do something like
DECLARE
  TYPE b_tbl IS TABLE OF b%rowtype;
  l_array b_tbl;
  CURSOR a_cursor
      IS SELECT c1, c2, c3, c4, c5, c7, c10 FROM A;
BEGIN
  OPEN a_cursor;
  LOOP
    FETCH a_cursor
     BULK COLLECT INTO l_array
    LIMIT 10000;
    EXIT WHEN l_array.COUNT = 0;
    FORALL i IN l_array.FIRST .. l_array.LAST
      INSERT INTO b
        VALUES l_array(i);
  END LOOP;
  COMMIT;
END;
That at least eliminates the infinite loop and the unnecessary dynamic SQL. If you're using an older version of Oracle (it's always helpful to post that information up front), the code may need to be a bit more complex.
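If some rows in A could violate constraints on B, one common refinement of a block like the one above is FORALL ... SAVE EXCEPTIONS, which lets the bulk insert continue past failing rows and report them afterwards. A minimal sketch, assuming the same cursor and collection as above (ORA-24381 is the standard error raised when SAVE EXCEPTIONS has collected any failures):
DECLARE
  TYPE b_tbl IS TABLE OF b%ROWTYPE;
  l_array     b_tbl;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
  CURSOR a_cursor IS SELECT c1, c2, c3, c4, c5, c7, c10 FROM a;
BEGIN
  OPEN a_cursor;
  LOOP
    FETCH a_cursor BULK COLLECT INTO l_array LIMIT 10000;
    EXIT WHEN l_array.COUNT = 0;
    BEGIN
      FORALL i IN 1 .. l_array.COUNT SAVE EXCEPTIONS
        INSERT INTO b VALUES l_array(i);
    EXCEPTION
      WHEN bulk_errors THEN
        -- each entry records the offending collection index and the error code
        FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
          dbms_output.put_line('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                               ' failed: ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
        END LOOP;
    END;
  END LOOP;
  CLOSE a_cursor;
  COMMIT;
END;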
Justin
Edited by: Justin Cave on Jan 19, 2011 5:46 PM

Similar Messages

  • Apple remote won't work with Keynote + "not enough RAM" error

    Of course it works with Keynote 6.0 ('13), but not Keynote 5.3. It also works with iTunes and volume during Keynote '13, but will not advance the slide/animation.
    Anyone else have the same problem?
    Also, ever since Mavericks update, sometimes my Keynotes will not play due to an error that says I have insufficient RAM. Even if I close all other apps. Sometimes when I quit the app and restart it, it will work?
    Anyone else? Fixes?

    Solution: Deleted Keynote '13 and the remote works with '09 again.
    Hmmm...
    We'll see if the "not enough RAM" error pops up anymore.

  • BULK COLLECT and FORALL with dynamic INSERT.

    Hello,
    I want to apply the BULK COLLECT and FORALL features to an insert statement in my procedure for performance improvement, as it has to insert a huge amount of data.
    But the problem is that the insert statement is generated dynamically, and even the table name is only known at run time, so I am not able to apply these performance-tuning concepts.
    See below the code
    PROCEDURE STP_MES_INSERT_GLOBAL_TO_MAIN
      (P_IN_SRC_TABLE_NAME                  VARCHAR2 ,
      P_IN_TRG_TABLE_NAME                  VARCHAR2 ,
      P_IN_ED_TRIG_ALARM_ID                 NUMBER ,
      P_IN_ED_CATG_ID                 NUMBER ,
      P_IN_IS_PIECEID_ALARM IN CHAR,
      P_IN_IS_LAST_RECORD IN CHAR
    )
    IS
          V_START_DATA_ID                 NUMBER;
          V_STOP_DATA_ID                   NUMBER;
          V_FROM_DATA_ID                   NUMBER;
          V_TO_DATA_ID                       NUMBER;
          V_MAX_REC_IN_LOOP              NUMBER := 30000;
          V_QRY1         VARCHAR2(32767);
    BEGIN
                EXECUTE IMMEDIATE 'SELECT MIN(ED_DATA_ID), MAX(ED_DATA_ID) FROM '|| P_IN_SRC_TABLE_NAME INTO V_START_DATA_ID , V_STOP_DATA_ID;
                --DBMS_OUTPUT.PUT_LINE('ORIGINAL START ID := '||V_START_DATA_ID ||' ORIGINAL  STOP ID := ' || V_STOP_DATA_ID);
                V_FROM_DATA_ID := V_START_DATA_ID ;
                IF (V_STOP_DATA_ID - V_START_DATA_ID ) > V_MAX_REC_IN_LOOP THEN
                        V_TO_DATA_ID := V_START_DATA_ID + V_MAX_REC_IN_LOOP;
                ELSE
                       V_TO_DATA_ID := V_STOP_DATA_ID;
                END IF;
              LOOP
                    BEGIN
                 LOOP      
                            V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||
                            ' SELECT * FROM '||P_IN_SRC_TABLE_NAME ||
                            ' WHERE ED_DATA_ID BETWEEN ' || V_FROM_DATA_ID ||' AND ' || V_TO_DATA_ID;
                    EXECUTE IMMEDIATE V_QRY1;
    commit;
                                     V_FROM_DATA_ID :=  V_TO_DATA_ID + 1;
                            IF  ( V_STOP_DATA_ID - V_TO_DATA_ID > V_MAX_REC_IN_LOOP ) THEN
                                  V_TO_DATA_ID := V_TO_DATA_ID + V_MAX_REC_IN_LOOP;
                            ELSE
                                  V_TO_DATA_ID := V_TO_DATA_ID + (V_STOP_DATA_ID - V_TO_DATA_ID);
                            END IF;
                    EXCEPTION
                             WHEN OTHERS THEN.............
    ....................so on
    Now you can observe here that P_IN_SRC_TABLE_NAME is the source table name, which we get as a parameter at run time. I have used 2 tables in the insert statement: P_IN_TRG_TABLE_NAME (into which I have to insert data) and P_IN_SRC_TABLE_NAME (from which I have to insert data).
      V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||
                            ' SELECT * FROM '||P_IN_SRC_TABLE_NAME ||
                            ' WHERE ED_DATA_ID BETWEEN ' || V_FROM_DATA_ID ||' AND ' || V_TO_DATA_ID;
                EXECUTE IMMEDIATE V_QRY1;
    Now when I apply the bulk collect and forall feature I am facing the out-of-scope problem... see the code below:
    BEGIN
                EXECUTE IMMEDIATE 'SELECT MIN(ED_DATA_ID), MAX(ED_DATA_ID) FROM '|| P_IN_SRC_TABLE_NAME INTO V_START_DATA_ID , V_STOP_DATA_ID;
                --DBMS_OUTPUT.PUT_LINE('ORIGINAL START ID := '||V_START_DATA_ID ||' ORIGINAL  STOP ID := ' || V_STOP_DATA_ID);
                V_FROM_DATA_ID := V_START_DATA_ID ;
                IF (V_STOP_DATA_ID - V_START_DATA_ID ) > V_MAX_REC_IN_LOOP THEN
                        V_TO_DATA_ID := V_START_DATA_ID + V_MAX_REC_IN_LOOP;
                ELSE
                       V_TO_DATA_ID := V_STOP_DATA_ID;
                END IF;
              LOOP
                    DECLARE
                     TYPE TRG_TABLE_TYPE IS TABLE OF P_IN_SRC_TABLE_NAME%ROWTYPE;
                     V_TRG_TABLE_TYPE TRG_TABLE_TYPE;
                     CURSOR TRG_TAB_CUR IS
                     SELECT * FROM P_IN_SRC_TABLE_NAME
                     WHERE ED_DATA_ID BETWEEN V_FROM_DATA_ID AND V_TO_DATA_ID;
                     V_QRY1 varchar2(32767);
                    BEGIN
                    OPEN TRG_TAB_CUR;
                    LOOP
                    FETCH TRG_TAB_CUR BULK COLLECT INTO V_TRG_TABLE_TYPE LIMIT 30000;
                    FORALL I IN 1..V_TRG_TABLE_TYPE.COUNT
                    V_QRY1 := ' INSERT INTO '||P_IN_TRG_TABLE_NAME||' VALUES V_TRG_TABLE_TYPE(I);'
                    EXECUTE IMMEDIATE V_QRY1;
                    EXIT WHEN TRG_TAB_CUR%NOTFOUND;
                    END LOOP;
                    CLOSE TRG_TAB_CUR;
                            V_FROM_DATA_ID :=  V_TO_DATA_ID + 1;
                            IF  ( V_STOP_DATA_ID - V_TO_DATA_ID > V_MAX_REC_IN_LOOP ) THEN
                                  V_TO_DATA_ID := V_TO_DATA_ID + V_MAX_REC_IN_LOOP;
                            ELSE
                                  V_TO_DATA_ID := V_TO_DATA_ID + (V_STOP_DATA_ID - V_TO_DATA_ID);
                            END IF;
                    EXCEPTION
                             WHEN OTHERS THEN.........so on
    But the above code is not helping me. What am I doing wrong? How can I tune this dynamically generated statement to use bulk collect for better performance?
    Thanks in Advance !!!!

    Hello,
    a table name cannot be bound as a parameter in SQL; this won't compile:
    EXECUTE IMMEDIATE ' INSERT INTO :1 VALUES ......
    USING P_IN_TRG_TABLE_NAME ...
    but this should work:
    EXECUTE IMMEDIATE ' INSERT INTO ' || P_IN_TRG_TABLE_NAME || ' VALUES ......
    You cannot declare a type that is based on a table whose name is held in a variable.
    PL/SQL is a strongly typed language; a type must be known at compile time, so code like this is not allowed:
    PROCEDURE xx( src_table_name varchar2 )
    DECLARE
       TYPE tab IS TABLE OF src_table_name%ROWTYPE;
      ...
    This can be done by creating one big dynamic SQL block - see the example below (tested on Oracle 10 XE - this is a slightly simplified version of your procedure):
    CREATE OR REPLACE
    PROCEDURE stp1(
      p_in_src_table_name                  VARCHAR2 ,
      p_in_trg_table_name                  VARCHAR2 ,
      v_from_data_id     NUMBER := 100,
      v_to_data_id       NUMBER := 100000
    )
    IS
    BEGIN
      EXECUTE IMMEDIATE q'{
      DECLARE
         TYPE trg_table_type IS TABLE OF }' || p_in_src_table_name || q'{%ROWTYPE;
         V_TRG_TABLE_TYPE TRG_TABLE_TYPE;
         CURSOR TRG_TAB_CUR IS
         SELECT * FROM }' || p_in_src_table_name ||
         q'{ WHERE ED_DATA_ID BETWEEN :V_FROM_DATA_ID AND :V_TO_DATA_ID;
      BEGIN
          OPEN TRG_TAB_CUR;
          LOOP
                FETCH TRG_TAB_CUR BULK COLLECT INTO V_TRG_TABLE_TYPE LIMIT 30000;
                FORALL I IN 1 .. V_TRG_TABLE_TYPE.COUNT
                INSERT INTO }' || p_in_trg_table_name || q'{ VALUES V_TRG_TABLE_TYPE( I );
                EXIT WHEN TRG_TAB_CUR%NOTFOUND;
          END LOOP;
          CLOSE TRG_TAB_CUR;
      END; }'
      USING v_from_data_id, v_to_data_id;
      COMMIT;
    END;
    But this probably won't give any performance improvement. Bulk collect and forall give performance improvements when there is a DML operation inside a loop, and that single DML operates on only one record (or a relatively small number of records), and the DML is repeated many, many times in the loop.
    I guess that your code is the opposite of this - it contains an insert statement that operates on many records (one single insert ~ 30000 records), and you are trying to replace it with bulk collect/forall - INSERT INTO ... SELECT FROM will almost always be faster than bulk collect/forall.
    Look at a simple test - below is a procedure that uses INSERT ... SELECT:
    CREATE OR REPLACE
    PROCEDURE stp(
      p_in_src_table_name                  VARCHAR2 ,
      p_in_trg_table_name                  VARCHAR2 ,
      v_from_data_id     NUMBER := 100,
      v_to_data_id       NUMBER := 100000
    )
    IS
    V_QRY1 VARCHAR2(32767);
    BEGIN
          V_QRY1 := ' INSERT INTO '||   P_IN_TRG_TABLE_NAME ||
                    ' SELECT * FROM '|| P_IN_SRC_TABLE_NAME ||
                    ' WHERE ed_data_id BETWEEN :f AND :t ';
          EXECUTE IMMEDIATE V_QRY1
          USING V_FROM_DATA_ID, V_TO_DATA_ID;
          COMMIT;
    END;
    /
    and we can compare both procedures:
    SQL> CREATE TABLE test333
      2  AS SELECT level ed_data_id ,
      3            'XXX ' || LEVEL x,
      4            'YYY ' || 2 * LEVEL y
      5  FROM dual
      6  CONNECT BY LEVEL <= 1000000;
    Table created.
    SQL> CREATE TABLE test333_dst AS
      2  SELECT * FROM test333 WHERE 1 = 0;
    Table created.
    SQL> set timing on
    SQL> ed
    Wrote file afiedt.buf
      1  BEGIN
      2     FOR i IN 1 .. 100 LOOP
      3        stp1( 'test333', 'test333_dst', 1000, 31000 );
      4     END LOOP;
      5* END;
    SQL> /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:22.12
    SQL> ed
    Wrote file afiedt.buf
      1  BEGIN
      2     FOR i IN 1 .. 100 LOOP
      3        stp( 'test333', 'test333_dst', 1000, 31000 );
      4     END LOOP;
      5* END;
    SQL> /
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:14.86
    Without bulk collect: ~15 sec.
    Bulk collect version: ~22 sec. ... 7 sec longer / 15 sec = about a 45% performance decrease.
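    One small hardening note on the concatenation approach above: since the table names arrive as parameters and end up concatenated into the SQL text, it can be worth validating them before building the statement. A minimal sketch, assuming callers really do pass plain identifiers (DBMS_ASSERT is a standard Oracle package; the surrounding variable names are just taken from the procedure above):
    -- hypothetical guard before building the dynamic statement:
    -- SIMPLE_SQL_NAME raises ORA-44003 if the value is not a valid SQL identifier,
    -- which blocks accidental (or malicious) SQL fragments in the table-name parameters
    V_QRY1 := ' INSERT INTO ' || DBMS_ASSERT.SIMPLE_SQL_NAME(P_IN_TRG_TABLE_NAME) ||
              ' SELECT * FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(P_IN_SRC_TABLE_NAME) ||
              ' WHERE ed_data_id BETWEEN :f AND :t ';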

  • Not enough values for FORALL

    Hi,
    I have written this procedure and I am getting
    LINE/COL ERROR
    69/5 PL/SQL: SQL Statement ignored
    83/11 PL/SQL: ORA-00947: not enough values
    CREATE OR REPLACE PROCEDURE f_t_s (P_F IN VARCHAR2)
    IS
    TYPE quote_cols_rt IS RECORD (
    twitter_gid twitter.twitter_gid%TYPE,
    twitter_xid twitter.twitter_xid%TYPE,
    origin_location_type twitter.origin_location_type%TYPE,
    destination_location_type twitter.destination_location_type%TYPE,
    is_hazardous twitter.is_hazardous%TYPE,
    perspective twitter.perspective%TYPE,
    twitter_option twitter.twitter_option%TYPE,
    servprov_gid twitter.servprov_gid%TYPE,
    origin_search_value twitter.origin_search_value%TYPE,
    destination_search_value twitter.destination_search_value%TYPE,
    domain_name twitter.domain_name%TYPE,
    is_customer_rates_only twitter.is_customer_rates_only%TYPE
    );
    TYPE twitter_cols_tt IS TABLE OF twitter_cols_rt INDEX BY PLS_INTEGER;
    twitter_cols_t twitter_cols_tt;
    BEGIN
    SELECT all_spec twitter_gid,
    SUBSTR(all_spec,7) twitter_xid,
    'location_code' origin_location_type,
    'location_code' destination_location_type,
    'N' is_hazardous,
    'B' perspective,
    'O' twitter_option,
    'INDIA.'||servprov_xid servprov_gid,
    SUBSTR(tm,1,2) ||':'||SUBSTR(tm,3) origin_search_value,
    Location_xid destination_search_value,
    'INDIA' domain_name,
    'N' is_customer_rates_only
    BULK COLLECT INTO twitter_cols_t
    FROM
    (SELECT x.*,
    Q.twitter_gid
    FROM
    (SELECT 'INDIA.FS_'||H.servprov_xid||'_'||D.Location_xid||'_'||T.tm all_spec,
    servprov_xid ,
    tm ,
    Location_xid
    FROM
    (SELECT Location_xid
    FROM location
    WHERE location_xid IN ('1')
    AND domain_name ='INDIA'
    ) D,
    (SELECT servprov_xid
    FROM servprov
    WHERE(servprov_gid LIKE 'P_F%'
    AND SUBSTR(servprov_xid,2,1) IN ('2'))
    OR servprov_gid ='INDIA.FRESHMISC'
    ) H,
    (SELECT lpad(rownum-1,2,'0') ||'00' tm FROM sea WHERE rownum<25
    UNION
    SELECT lpad(rownum-1,2,'0') ||'30' tm FROM sea WHERE rownum<25
    ) T
    ) x,
    (SELECT * FROM twitter WHERE twitter_gid LIKE 'INDIA.FS%'
    ) Q
    WHERE x.all_spec=Q.twitter_gid(+)
    )
    WHERE twitter_gid IS NULL;
    FORALL i in 1 .. twitter_cols_t.count
    --LOOP
    INSERT
    INTO twitter
    (twitter_gid ,
    twitter_xid ,
    origin_location_type ,
    destination_location_type,
    is_hazardous ,
    perspective ,
    twitter_option ,
    servprov_gid ,
    origin_search_value ,
    destination_search_value ,
    domain_name ,
    is_customer_rates_only)
    VALUES
    twitter_cols_t (i);
    --END LOOP;
    END f_t_s;
    Any idea where I am making a mistake?
    Thanks in Advance,
    DIDI

    Hi,
    That was my typing mistake....
    This is the correct code:
    CREATE OR REPLACE PROCEDURE f_t_s (P_F IN VARCHAR2)
    IS
    TYPE twitter_cols_rt IS RECORD (
    twitter_gid twitter.twitter_gid%TYPE,
    twitter_xid twitter.twitter_xid%TYPE,
    origin_location_type twitter.origin_location_type%TYPE,
    destination_location_type twitter.destination_location_type%TYPE,
    is_hazardous twitter.is_hazardous%TYPE,
    perspective twitter.perspective%TYPE,
    twitter_option twitter.twitter_option%TYPE,
    servprov_gid twitter.servprov_gid%TYPE,
    origin_search_value twitter.origin_search_value%TYPE,
    destination_search_value twitter.destination_search_value%TYPE,
    domain_name twitter.domain_name%TYPE,
    is_customer_rates_only twitter.is_customer_rates_only%TYPE
    );
    TYPE twitter_cols_tt IS TABLE OF twitter_cols_rt INDEX BY PLS_INTEGER;
    twitter_cols_t twitter_cols_tt;
    BEGIN
    SELECT all_spec twitter_gid,
    SUBSTR(all_spec,7) twitter_xid,
    'location_code' origin_location_type,
    'location_code' destination_location_type,
    'N' is_hazardous,
    'B' perspective,
    'O' twitter_option,
    'INDIA.'||servprov_xid servprov_gid,
    SUBSTR(tm,1,2) ||':'||SUBSTR(tm,3) origin_search_value,
    Location_xid destination_search_value,
    'INDIA' domain_name,
    'N' is_customer_rates_only
    BULK COLLECT INTO twitter_cols_t
    FROM
    (SELECT x.*,
    Q.twitter_gid
    FROM
    (SELECT 'INDIA.FS_'||H.servprov_xid||'_'||D.Location_xid||'_'||T.tm all_spec,
    servprov_xid ,
    tm ,
    Location_xid
    FROM
    (SELECT Location_xid
    FROM location
    WHERE location_xid IN ('1')
    AND domain_name ='INDIA'
    ) D,
    (SELECT servprov_xid
    FROM servprov
    WHERE(servprov_gid LIKE 'P_F%'
    AND SUBSTR(servprov_xid,2,1) IN ('2'))
    OR servprov_gid ='INDIA.FRESHMISC'
    ) H,
    (SELECT lpad(rownum-1,2,'0') ||'00' tm FROM sea WHERE rownum<25
    UNION
    SELECT lpad(rownum-1,2,'0') ||'30' tm FROM sea WHERE rownum<25
    ) T
    ) x,
    (SELECT * FROM twitter WHERE twitter_gid LIKE 'INDIA.FS%'
    ) Q
    WHERE x.all_spec=Q.twitter_gid(+)
    )
    WHERE twitter_gid IS NULL;
    FORALL i in 1 .. twitter_cols_t.count
    --LOOP
    INSERT
    INTO twitter
    (twitter_gid ,
    twitter_xid ,
    origin_location_type ,
    destination_location_type,
    is_hazardous ,
    perspective ,
    twitter_option ,
    servprov_gid ,
    origin_search_value ,
    destination_search_value ,
    domain_name ,
    is_customer_rates_only)
    VALUES
    twitter_cols_t (i);
    --END LOOP;
    END f_t_s;
    Thanks,
    DIDI

  • XML generated with Warning: Not enough storage is available to complete ..

    Hello Experts,
    Using XML Publisher to generate a report. The report is generated when the data volume is small. When there is a huge amount of data, I get a "Not enough storage is available to complete this operation" warning.
    The same problem is mentioned here: XML Publisher report ends up with Warning: not enough storage is available


  • After Effects give "not enough vram" error with GTX690

    I have a Mac Pro with 2 GTX 690's. Both have 4 Gigs of memory. To be more precise each card has 2 processors so each processor only uses 2 GB. When I open AE I get a "not enough VRAM" error so I can't run ray-tracing. 2 GB should be plenty of VRAM, so I don't get why it would say that. When I go to GPU information in the Previews preference window, I have enable unsupported GPU checked, but "GPU" is still greyed out. In the OpenGL section, it reads that I have 2 GB of vram. What gives? How come After Effects isn't seeing the VRAM for CUDA? Very very frustrating. The cards work fine in Premiere and EVERY OTHER APPLICATION that works using CUDA. After Effects is the only one that freaks out. Need a fix for this.

    Hmm, that seems weird. I mean, CUDA is so very much faster than the regular CPU rendering that C4D does. The ideal situation would be to have the Cineware plugin actually be able to use any of the renderers in C4D, not just the standard one. Then I could select my Octane renderer and have screaming fast 3D live in AE.

  • Downloading via XHR and writing large files using WinJS fails with message "Not enough storage is available to complete this operation"

    Hello,
    I have an issue that some users are experiencing but I can't reproduce it myself on my laptop. What I am trying to do is grab a file (a zip file) via XHR. The file can be quite big, like 500 MB. Then, I want to write it to the user's storage.
    Here is the code I use:
    DownloadOperation.prototype.onXHRResult = function (file, result) {
        var status = result.srcElement.status;
        if (status == 200) {
            var bytes = null;
            try {
                bytes = new Uint8Array(result.srcElement.response, 0, result.srcElement.response.byteLength);
            } catch (e) {
                try {
                    Utils.logError(e);
                    var message = "Error while extracting the file " + this.fileName + ". Try emptying your windows bin.";
                    if (e && e.message) {
                        message += " Error message: " + e.message;
                    }
                    var popup = new Windows.UI.Popups.MessageDialog(message);
                    popup.showAsync();
                } catch (e) { }
                this.onWriteFileError(e);
                return;
            }
            Windows.Storage.FileIO.writeBytesAsync(file, bytes).then(
                this.onWriteFileComplete.bind(this, file),
                this.onWriteFileError.bind(this)
            );
        } else if (status > 400) {
            this.error(null);
        }
    };
    The error happens at this line:
    bytes = new Uint8Array(result.srcElement.response, 0, result.srcElement.response.byteLength);
    with the description "Not enough storage is available to complete this operation". The user has only a C drive, with plenty of space available, so I believe the error message given by IE might be a little misleading. Maybe in some situations Uint8Array can't handle such a large file? The program fails on an "ASUSTek T100TA" but not on my laptop (a standard one).
    Can somebody help me with that? Is there a better way to write a downloaded binary file to disk without passing through a Uint8Array?
    Thanks a lot,
    Fabien

    Hi Fabien,
    If Uint8Array works fine on the other computer, it should not be a problem with the API; instead it could be a setting or configuration issue in IE.
    Actually, using XHR for a 500 MB zip file is not suggested. Based on the documentation (How to download a file), XHR wraps an
    XMLHttpRequest call in a promise, which is not a good approach for downloading big items; please use
    Background Transfer instead, which is designed to receive big items.
    A simple search on the Internet suggests the "not enough storage" error is a known issue when using XMLHttpRequest:
    http://forums.asp.net/p/1985921/5692494.aspx?PRB+XMLHttpRequest+returns+error+Not+enough+storage+is+available+to+complete+this+operation, however I'm not familiar with how to solve XMLHttpRequest issues.
    --James

  • Import errors with numbers, not enough memory

    Import errors with numbers, not enough memory

    Hi heiko,
    From your system profile line, it appears you should be asking this question in the iWork For iOS forum, not this one, which deals with Numbers for Mac OS X. I'd also suggest being a bit less cryptic in writing your question.
    The link will take you to the iWork for iOS forum.
    Regards,
    Barry

  • Problem with BULK COLLECT with million rows - Oracle 9.0.1.4

    We have a requirement where we are supposed to load 58 million rows into a FACT table in our data warehouse. We initially planned to use Oracle Warehouse Builder but, for performance reasons, decided to write custom code. We wrote a custom procedure which opens a simple cursor, reads all 58 million rows from the SOURCE table and, in a loop, processes the rows and inserts the records into a TARGET table. The logic works fine but it took 20 hrs to complete the load.
    We then tried to leverage the BULK COLLECT, FORALL and PARALLEL options and modified our PL/SQL code completely to reflect these. Our code looks very simple.
    1. We declared PL/SQL collections (index-by tables) to store the data in memory.
    2. We used BULK COLLECT to FETCH the data.
    3. We used a FORALL statement while inserting the data.
    We did not introduce any of our transformation logic yet.
    We tried with 600,000 records first and it completed in 1 min 29 sec with no problems. We then doubled the number of rows to 1.2 million and the program crashed with the following error:
    ERROR at line 1:
    ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu
    call ,pmucalm coll)
    ORA-06512: at "VVA.BULKLOAD", line 66
    ORA-06512: at line 1
    We got the same error even with 1 million rows.
    We do have the following configuration:
    SGA - 8.2 GB
    PGA
    - Aggregate Target - 3GB
    - Current Allocated - 439444KB (439 MB)
    - Maximum allocated - 2695753 KB (2.6 GB)
    Temp Table Space - 60.9 GB (Total)
    - 20 GB (Available approximately)
    I think we do have more than enough memory to process the 1 million rows!!
    Also, some times the same program results in the following error:
    SQL> exec bulkload
    BEGIN bulkload; END;
    ERROR at line 1:
    ORA-03113: end-of-file on communication channel
    We did not even attempt the full load. Also, we are not using the PARALLEL option yet.
    Are we hitting a bug here? Or is PL/SQL not capable of mass loads? I would appreciate any thoughts on this.
    Thanks,
    Haranadh
    Following is the code:
    set echo off
    set timing on
    create or replace procedure bulkload as
    -- SOURCE --
    TYPE src_cpd_dt IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
    TYPE src_acqr_ctry_cd IS TABLE OF ima_ama_acct.acqr_ctry_cd%TYPE;
    TYPE src_acqr_pcr_ctry_cd IS TABLE OF ima_ama_acct.acqr_pcr_ctry_cd%TYPE;
    TYPE src_issr_bin IS TABLE OF ima_ama_acct.issr_bin%TYPE;
    TYPE src_mrch_locn_ref_id IS TABLE OF ima_ama_acct.mrch_locn_ref_id%TYPE;
    TYPE src_ntwrk_id IS TABLE OF ima_ama_acct.ntwrk_id%TYPE;
    TYPE src_stip_advc_cd IS TABLE OF ima_ama_acct.stip_advc_cd%TYPE;
    TYPE src_authn_resp_cd IS TABLE OF ima_ama_acct.authn_resp_cd%TYPE;
    TYPE src_authn_actvy_cd IS TABLE OF ima_ama_acct.authn_actvy_cd%TYPE;
    TYPE src_resp_tm_id IS TABLE OF ima_ama_acct.resp_tm_id%TYPE;
    TYPE src_mrch_ref_id IS TABLE OF ima_ama_acct.mrch_ref_id%TYPE;
    TYPE src_issr_pcr IS TABLE OF ima_ama_acct.issr_pcr%TYPE;
    TYPE src_issr_ctry_cd IS TABLE OF ima_ama_acct.issr_ctry_cd%TYPE;
    TYPE src_acct_num IS TABLE OF ima_ama_acct.acct_num%TYPE;
    TYPE src_tran_cnt IS TABLE OF ima_ama_acct.tran_cnt%TYPE;
    TYPE src_usd_tran_amt IS TABLE OF ima_ama_acct.usd_tran_amt%TYPE;
    src_cpd_dt_array src_cpd_dt;
    src_acqr_ctry_cd_array      src_acqr_ctry_cd;
    src_acqr_pcr_ctry_cd_array     src_acqr_pcr_ctry_cd;
    src_issr_bin_array      src_issr_bin;
    src_mrch_locn_ref_id_array     src_mrch_locn_ref_id;
    src_ntwrk_id_array      src_ntwrk_id;
    src_stip_advc_cd_array      src_stip_advc_cd;
    src_authn_resp_cd_array      src_authn_resp_cd;
    src_authn_actvy_cd_array      src_authn_actvy_cd;
    src_resp_tm_id_array      src_resp_tm_id;
    src_mrch_ref_id_array      src_mrch_ref_id;
    src_issr_pcr_array      src_issr_pcr;
    src_issr_ctry_cd_array      src_issr_ctry_cd;
    src_acct_num_array      src_acct_num;
    src_tran_cnt_array      src_tran_cnt;
    src_usd_tran_amt_array      src_usd_tran_amt;
    j number := 1;
    CURSOR c1 IS
    SELECT
    cpd_dt,
    acqr_ctry_cd ,
    acqr_pcr_ctry_cd,
    issr_bin,
    mrch_locn_ref_id,
    ntwrk_id,
    stip_advc_cd,
    authn_resp_cd,
    authn_actvy_cd,
    resp_tm_id,
    mrch_ref_id,
    issr_pcr,
    issr_ctry_cd,
    acct_num,
    tran_cnt,
    usd_tran_amt
    FROM ima_ama_acct ima_ama_acct
    ORDER BY issr_bin;
    BEGIN
    OPEN c1;
    FETCH c1 bulk collect into
    src_cpd_dt_array ,
    src_acqr_ctry_cd_array ,
    src_acqr_pcr_ctry_cd_array,
    src_issr_bin_array ,
    src_mrch_locn_ref_id_array,
    src_ntwrk_id_array ,
    src_stip_advc_cd_array ,
    src_authn_resp_cd_array ,
    src_authn_actvy_cd_array ,
    src_resp_tm_id_array ,
    src_mrch_ref_id_array ,
    src_issr_pcr_array ,
    src_issr_ctry_cd_array ,
    src_acct_num_array ,
    src_tran_cnt_array ,
    src_usd_tran_amt_array ;
    CLOSE C1;
    FORALL j in 1 .. src_cpd_dt_array.count
    INSERT INTO ima_dly_acct (
         CPD_DT,
         ACQR_CTRY_CD,
         ACQR_TIER_CD,
         ACQR_PCR_CTRY_CD,
         ACQR_PCR_TIER_CD,
         ISSR_BIN,
         OWNR_BUS_ID,
         USER_BUS_ID,
         MRCH_LOCN_REF_ID,
         NTWRK_ID,
         STIP_ADVC_CD,
         AUTHN_RESP_CD,
         AUTHN_ACTVY_CD,
         RESP_TM_ID,
         PROD_REF_ID,
         MRCH_REF_ID,
         ISSR_PCR,
         ISSR_CTRY_CD,
         ACCT_NUM,
         TRAN_CNT,
         USD_TRAN_AMT)
         VALUES (
         src_cpd_dt_array(j),
         src_acqr_ctry_cd_array(j),
         null,
         src_acqr_pcr_ctry_cd_array(j),
              null,
              src_issr_bin_array(j),
              null,
              null,
              src_mrch_locn_ref_id_array(j),
              src_ntwrk_id_array(j),
              src_stip_advc_cd_array(j),
              src_authn_resp_cd_array(j),
              src_authn_actvy_cd_array(j),
              src_resp_tm_id_array(j),
              null,
              src_mrch_ref_id_array(j),
              src_issr_pcr_array(j),
              src_issr_ctry_cd_array(j),
              src_acct_num_array(j),
              src_tran_cnt_array(j),
              src_usd_tran_amt_array(j));
    COMMIT;
    END bulkload;
    SHOW ERRORS
    -----------------------------------------------------------------------------

    Do you have a unique key available in the rows you are fetching?
    It seems a cursor with 20 million rows that is as wide as all the columns you want to work with is a lot of memory for the server to use at once. You may be able to do this with parallel processing (DOP over 8) and a lot of memory for the warehouse box (and the box you are extracting data from)... but is this the most efficient (and thereby fastest) way to do it?
    What if you used a cursor to select a unique key only, and then during the cursor loop fetched each record, transformed it, and inserted it into the target?
    It's a different way to do a lot at once, but it cuts down on the overall memory overhead for the process.
    I know this isn't as elegant as a single insert to do it all at once, but sometimes trimming a process down so it takes fewer resources at any given moment is much faster than trying to do the whole thing at once.
    My solution is probably biased by transaction systems, so I would be interested in what the data warehouse community thinks of this.
    For example:
    source table my_transactions (tx_seq_id number, tx_fact1 varchar2(10), tx_fact2 varchar2(20), tx_fact3 number, ...)
    select a cursor of tx_seq_id only (even at 20 million rows this is not much)
    you could then either use a for loop or even bulk collect into a plsql collection or table
    then process individually like this:
    procedure process_a_tx(p_tx_seq_id in number)
    is
      rTX my_transactions%rowtype;
    begin
      select * into rTX from my_transactions where tx_seq_id = p_tx_seq_id;
      -- modify values as needed
      insert into my_target(a, b, c) values (rTX.tx_fact1, rTX.tx_fact2, rTX.tx_fact3);
      commit;
    exception
      when others then
        rollback;
        -- write to a log or raise an exception
    end process_a_tx;
    procedure collect_tx
    is
      cursor cTx is
        select tx_seq_id from my_transactions;
    begin
      for rTx in cTx loop
        process_a_tx(rTx.tx_seq_id);
      end loop;
    end collect_tx;
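    Another option that keeps the original bulk approach but caps memory use is to fetch in batches with the LIMIT clause, so only a slice of the 58 million rows is ever held in the collections at once. A minimal sketch under that assumption, using the same source and target tables as the posted procedure but showing only two of the columns for brevity (the real code would carry one collection per selected column, exactly as in bulkload):
    DECLARE
      TYPE cpd_dt_tab   IS TABLE OF ima_ama_acct.cpd_dt%TYPE;
      TYPE issr_bin_tab IS TABLE OF ima_ama_acct.issr_bin%TYPE;
      l_cpd_dt   cpd_dt_tab;
      l_issr_bin issr_bin_tab;
      CURSOR c1 IS SELECT cpd_dt, issr_bin FROM ima_ama_acct ORDER BY issr_bin;
    BEGIN
      OPEN c1;
      LOOP
        -- each iteration holds at most 10000 rows in memory instead of all 58 million
        FETCH c1 BULK COLLECT INTO l_cpd_dt, l_issr_bin LIMIT 10000;
        EXIT WHEN l_cpd_dt.COUNT = 0;
        FORALL j IN 1 .. l_cpd_dt.COUNT
          INSERT INTO ima_dly_acct (cpd_dt, issr_bin)
          VALUES (l_cpd_dt(j), l_issr_bin(j));
      END LOOP;
      CLOSE c1;
      COMMIT;
    END;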

  • A FIX for error message: When I try to open Snood (it's a game) I get this message.  Not enough memory {Error # :: 0, in sound.cp@line 101  Can you help?

    After years of playing Snood, w/o problems, I started getting this error message, on my iMac, OS 10.5.8,
    with 4 GB of memory when opening Snood:  Not enough memory {Error # :: 0, in sound.cp@line 101
    My MacBook Pro w. Mac OS 10.6.8 did not have this problem.
    Initially I thought that Snood raised its minimum requirement to Mac OS 10.6.
    I had several correspondences with Snood. Their tech support is great. Quick and thorough responses.
    They thought the issue was in Mac's system preferences/ Sound. It was.
    I didn't realize that my sound input and output devices were gone.
    The fix was resetting the PRAM. I found this advice on MacFixIt.com.
    MacFixIt help with volume:   http://reviews.cnet.com/8301-13727_7-10415659-263.html
    Resetting the PRAM is on Apple support:   http://support.apple.com/kb/HT1379
    My sound (music!) is back, along with Snood. So glad I reset the PRAM before reinstalling the OS software!
    Thank you to Snood, MacFixIt and Apple.
    Happy new year all!

    Good work, nice post/tip, thanks!

  • Photoshop CS2 Not Enough RAM error message (wrong forum)

    For some strange reason, my copy of Photoshop CS2 (Macintosh) has started giving me an error message after program start-up: could not complete your request because there is not enough RAM. I don't have to do anything. This message pops up about 1 minute after I start the program. The program works fine. I just close the window and go on. I haven't installed any new plug-ins or made changes to my prefs or hardware configuration. This just started for no apparent reason. It doesn't refuse to work, it's just annoying. I've tried deleting the Photoshop prefs file and reinstalling the program from the master disc. Didn't help. I'm running OS 10.4.2 on a dual 1.8 GHz processor Mac with 4 GB of RAM; Photoshop is set to retain 60% of that. Anyone with any ideas about this, or should I just quit whining and go back to work...

    Can you tell us which API is failing (error code) and what amount of memory you are requesting?
    I've seen 'not enough ram' when the entrypoint gets messed up and you actually have a Pipl problem. Is the filter running on smaller images?
    <[email protected]> wrote in message news:[email protected]..
    > I have a PC running XP Pro SP2. CPU is an AMD X2 5200, 4 GB RAM, 8800GTS graphics card, and I also have the same problem trying to run my plugins in Photoshop CS. My Windows page file is 6 gigabytes, and I have CS set to use 90% of resources with no noticeable improvement. Task Manager only shows 711 megabytes of the page file being used. I get the 'not enough ram' error message on several of my filters. Anyone have a decent workaround yet? Unplugging my plugins isn't an option. What's the point in having plugins if you can't use them?
    > One more thing: the file I'm working on is only 50 megabytes. With a second layer open it is close to 100 MB. This is a small file, comparatively speaking.

  • INTERNAL ERROR & NOT ENOUGH MEMORY error

    Somehow 4 GB of memory is not enough!!
    I am on Windows Vista Ultimate, Intel Core 2 Duo 2.66, 4 GB of RAM, and I'm working on a local hard drive with little else running.
    I constantly get "Internal Error occurred... could not complete request", which is particularly alarming if that request was to save. I have been using the 'Fireworks autobackup' AIR app... it saves me heaps of time... but it's taking longer and longer to save the file as I work on it.
    If I save every 10 minutes, it's taking 1:20 to save... this is 12% of my time... FW still sucks memory somehow.
    Anyone with a solution to this?
    I've also had the "not enough memory" error a few times, in which everything done on the file is lost since it was last opened, whether I saved it a million times or not. I've started making copy backups and restarting FW every few hours to purge the memory, but this sucks. This memory leak seemed to be mostly gone in CS3 according to others, so I assumed CS4 had it sorted.

    One of our users is running across a similar issue using the pen tool. A file that's 3000x2000, having sections cut out using the pen tool, will spike FW up to 2 GB of used RAM. Eventually the Internal Error messages will start, but you can still trace your path. The issue is it pops up every time you click a new point. The only current workaround I have for the user is to shrink the file or decrease the dpi. Has anyone else run across this?
    Specs:
    XP SP3
    Core2 3.0 GHz
    4 GB RAM

  • Bulk collect with Nested loops

    Hi, I've a requirement like this:
    I need to pull request nos from table a (master table).
    For every request no I need to pull request details from table b (detail table).
    For every request no I need to pull contact details from table c.
    For every request no I need to pull customer data from table d, and I need to create a flat file with that data, so I'm using UTL_FILE. With the normal query approach, because of the nested loops, it's taking a lot of time, so I want to use bulk collect with the dynamic query option:
    Sample code
    =======
    create or replace procedure test(region varchar2) as
    type tablea_request_typ is table of varchar2(10);
    tablea_data tablea_request_typ;
    type tableb_request_typ is table of varchar2(1000);
    tableb_data tableb_request_typ;
    type tablec_request_typ is table of varchar2(1000);
    tablec_data tablec_request_typ;
    type tabled_request_typ is table of varchar2(1000);
    tabled_data tabled_request_typ;
    stmta varchar2(32000);
    stmtb varchar2(32000);
    stmtc varchar2(32000);
    stmtd varchar2(32000);
    rcura SYS_REFCURSOR;
    rcurb SYS_REFCURSOR;
    rcurc SYS_REFCURSOR;
    rcurd SYS_REFCURSOR;
    begin
    stmta:='select  request_no from tablea where :region'||'='NE';
    stmtb:='select  request_no||request_detail1||request_detail2 stringb  from table b where :region'||'='NE';
    stmtc:='select contact1||contact2||contact3||contact4  stringc from table c where :region'||'='NE';
    stmtd:='select customer1||customer2||customer3||customer4  stringd  from table c where :region'||'='NE';
    OPEN rcura for stmta;
      LOOP
      FETCH rcura BULK COLLECT INTO request_no
      LIMIT 1000;
      FOR  f in 1..request_no.count
    LOOP
    --Tableb
        OPEN rcurb for stmtb USING substr(request_no(f),1,14);
      LOOP
      FETCH rcurb BULK COLLECT INTO tableb_data
    for i in 1..tableb_data.count
    LOOP
    utl_file(...,tableb_data(i));
    END LOOP;
        EXIT WHEN rcurb%NOTFOUND;
      END LOOP;
    -- Tablec
    OPEN rcurc for stmtc USING substr(request_no(f),1,14);
      LOOP
      FETCH rcurb BULK COLLECT INTO tablec_data
    for i in 1..tablec_data.count
    LOOP
    utl_file(...,tablec_data(i));
    END LOOP;
        EXIT WHEN rcurc%NOTFOUND;
      END LOOP;
    -- Tabled
    OPEN rcurd for stmtd USING substr(request_no(f),1,14);
      LOOP
      FETCH rcurd BULK COLLECT INTO tabled_data
    for i in 1..tabled_data.count
    LOOP
    utl_file(...,tabled_data(i));
    END LOOP;
        EXIT WHEN rcurd%NOTFOUND;
      END LOOP;
      END LOOP;
        EXIT WHEN rcura%NOTFOUND;
      END LOOP;
    exception
    when other then
    dbms_output.put_line(sqlerrm);
    end;
    I'm using the code mentioned above, but request nos are repeating as if it's an infinite loop. For example, if the request no is 222 it should run once, but here it's running more than once.
    How do I pass bind parameters, say in my case region?
    Are there any alternate solutions to run it faster apart from using bulk collect?
    Right now I'm using an explicit cursor with a for loop, which is taking a lot of time, so is this a better solution?
    Thanks,
    Mahender
    Edited by: BluShadow on 24-Aug-2011 08:52
    added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to format your code/data yourself.

    Use a parameterized cursor:
    CREATE OR REPLACE PROCEDURE test(region varchar2)
    AS
      type tablea_request_typ is table of varchar2(10);
      type tableb_request_typ is table of varchar2(1000);
      type tablec_request_typ is table of varchar2(1000);
      type tabled_request_typ is table of varchar2(1000);
      tablea_data tablea_request_typ;
      tableb_data tableb_request_typ;
      tablec_data tablec_request_typ;
      tabled_data tabled_request_typ;
      CURSOR rcura(v_region VARCHAR2)
      IS
        select request_no from tablea where region = v_region;
      CURSOR rcurb(v_input VARCHAR2)
      IS
        select request_no||request_detail1||request_detail2 stringb from tableb where request_num = v_input;
      CURSOR rcurc(v_input VARCHAR2)
      IS
        select contact1||contact2||contact3||contact4 stringc from tablec where request_num = v_input;
      CURSOR rcurd(v_input VARCHAR2)
      IS
        select customer1||customer2||customer3||customer4 stringd from tabled where request_num = v_input;
    BEGIN
      OPEN rcura(region);
      LOOP
        FETCH rcura BULK COLLECT INTO tablea_data LIMIT 1000;
        FOR f in 1 .. tablea_data.count
        LOOP
          -- Tableb
          OPEN rcurb(substr(tablea_data(f),1,14));
          LOOP
            FETCH rcurb BULK COLLECT INTO tableb_data;
            for i in 1 .. tableb_data.count
            LOOP
              utl_file(...,tableb_data(i));
            END LOOP;
            EXIT WHEN rcurb%NOTFOUND;
          END LOOP;
          CLOSE rcurb;
          -- Tablec
          OPEN rcurc(substr(tablea_data(f),1,14));
          LOOP
            FETCH rcurc BULK COLLECT INTO tablec_data;
            for i in 1 .. tablec_data.count
            LOOP
              utl_file(...,tablec_data(i));
            END LOOP;
            EXIT WHEN rcurc%NOTFOUND;
          END LOOP;
          CLOSE rcurc;
          -- Tabled
          OPEN rcurd(substr(tablea_data(f),1,14));
          LOOP
            FETCH rcurd BULK COLLECT INTO tabled_data;
            for i in 1 .. tabled_data.count
            LOOP
              utl_file(...,tabled_data(i));
            END LOOP;
            EXIT WHEN rcurd%NOTFOUND;
          END LOOP;
          CLOSE rcurd;
        END LOOP;
        EXIT WHEN rcura%NOTFOUND;
      END LOOP;
      CLOSE rcura;
    EXCEPTION
    WHEN OTHERS THEN
      dbms_output.put_line(dbms_utility.format_error_backtrace);
    END;
    /
    Hope this helps. If not, post your table structures...
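    To answer the bind-parameter part of the question directly: if the dynamic (ref-cursor) version is kept instead, the region should be passed through a placeholder and a USING clause rather than concatenated into the string. A small sketch of just that piece, assuming tablea really has a region column as in the original code (stmta and rcura are the names from the posted block):
    -- hypothetical rework of the rcura/stmta part of the original block
    stmta := 'select request_no from tablea where region = :r';
    OPEN rcura FOR stmta USING region;   -- the :r placeholder is bound to the region parameter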

  • How to use Bulk Collect and Forall

    Hi all,
    We are on Oracle 10g. I have a requirement to read from table A and then, for each record in table A, find matching rows in table B and write the identified information from table B to the target table (table C). In the past, I had used two 'cursor for loops' to achieve that. To make the new procedure more efficient, I would like to learn to use 'bulk collect' and 'forall'.
    Here is what I have so far:
    DECLARE
    TYPE employee_array IS TABLE OF EMPLOYEES%ROWTYPE;
    employee_data  employee_array;
    TYPE job_history_array IS TABLE OF JOB_HISTORY%ROWTYPE;
    Job_history_data   job_history_array;
    BatchSize CONSTANT POSITIVE := 5;
    -- Read from File A
    CURSOR c_get_employees IS
             SELECT  Employee_id,
                       first_name,
                       last_name,
                       hire_date,
                       job_id
              FROM EMPLOYEES;
    -- Read from File B based on employee ID in File A
    CURSOR c_get_job_history (p_employee_id number) IS
             select start_date,
                      end_date,
                      job_id,
                      department_id
             FROM JOB_HISTORY
             WHERE employee_id = p_employee_id;
    BEGIN
        OPEN c_get_employees;
        LOOP
            FETCH c_get_employees BULK COLLECT INTO employee_data.employee_id.LAST,
                                                                              employee_data.first_name.LAST,
                                                                              employee_data.last_name.LAST,
                                                                              employee_data.hire_date.LAST,
                                                                              employee_data.job_id.LAST
             LIMIT BatchSize;
            FORALL i in 1.. employee_data.COUNT
                    Open c_get_job_history (employee_data(i).employee_id);
                    FETCH c_get_job_history BULKCOLLECT INTO job_history_array LIMIT BatchSize;
                             FORALL k in 1.. Job_history_data.COUNT LOOP
                                            -- insert into FILE C
                                              INSERT INTO MY_TEST(employee_id, first_name, last_name, hire_date, job_id)
                                                                values (job_history_array(k).employee_id, job_history_array(k).first_name,
                                                                          job_history_array(k).last_name, job_history_array(k).hire_date,
                                                                          job_history_array(k).job_id);
                                             EXIT WHEN job_ history_data.count < BatchSize                        
                             END LOOP;                          
                             CLOSE c_get_job_history;                          
                     EXIT WHEN employee_data.COUNT < BatchSize;
           END LOOP;
            COMMIT;
            CLOSE c_get_employees;
    END;
                     When I run this script, I get
    [Error] Execution (47: 17): ORA-06550: line 47, column 17:
    PLS-00103: Encountered the symbol "OPEN" when expecting one of the following:
       . ( * @ % & - + / at mod remainder rem select update with
       <an exponent (**)> delete insert || execute multiset save
       merge
    ORA-06550: line 48, column 17:
    PLS-00103: Encountered the symbol "FETCH" when expecting one of the following:
       begin function package pragma procedure subtype type use
       <an identifier> <a double-quoted delimited-identifier> form
       current cursor
    What is the best way to code this? Once I learn how to do this, I will apply the knowledge to the real application, in which file A would have around 200 rows and file B would have hundreds of thousands of rows.
    Thank you for your guidance,
    Seyed

    Hello BlueShadow,
    Following your advice, I modified a stored procedure that initially was using two cursor for loops to read from tables A and B to write to table C to use instead something like your suggestion listed below:
    INSERT INTO tableC
    SELECT …
    FROM tableA JOIN tableB ON (join condition);
    I tried this change on a procedure writing to tableC with keys disabled. I will try this against the real table, which has a primary key and indexes, and report the result later.
    Thank you very much,
    Seyed
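    For reference, applied to the HR-style tables used earlier in this thread, the join-based rewrite would look roughly like the sketch below. This assumes MY_TEST takes the same five columns as the original insert, that the name and hire date come from EMPLOYEES, and that the job_id comes from the matching JOB_HISTORY rows joined on employee_id:
    -- a minimal sketch of the single-statement rewrite; no PL/SQL collections or loops needed
    INSERT INTO my_test (employee_id, first_name, last_name, hire_date, job_id)
    SELECT e.employee_id,
           e.first_name,
           e.last_name,
           e.hire_date,
           jh.job_id
      FROM employees   e
      JOIN job_history jh ON jh.employee_id = e.employee_id;
    COMMIT;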

  • ERROR in PROCEDURE  PL/SQL: ORA-00947: not enough values

    Hi all, I am creating a procedure in which I am getting a very strange error.
    I am using Oracle 11g, SQL Developer 3.
    My scenario is:
    CREATE SEQUENCE tt_TMPMEASURESOURCE_ID
    START WITH 1
    INCREMENT BY 1;
    CREATE TABLE tt_TMPMEASURESOURCE
    (
    ID NUMBER(10,0) ,
    OBJNAME VARCHAR2(200) ,
    PRSP NUMBER(10,0) ,
    SOURCEID NUMBER(10,0) ,
    SOURCENAME VARCHAR2(100) ,
    FIX NUMBER(3,0) ,
    MNAME VARCHAR2(1000) ,
    MDESC VARCHAR2(1000)
    );
    CREATE OR REPLACE TRIGGER tt_TMPMEASURESOURCE_ID_TRG
    BEFORE INSERT
    ON tt_TMPMEASURESOURCE
    FOR EACH ROW
    BEGIN
    SELECT tt_TMPMEASURESOURCE_ID.NEXTVAL INTO :NEW.ID
    FROM DUAL;
    END;
    INSERT INTO TT_TMPMEASURESOURCE
    (OBJNAME,PRSP,SOURCEID,SOURCENAME,FIX,MNAME,MDESC)
    values ('rajnish',43,54,'anish',4,'apple','kumar');
    output:
    1 row inserted
    select * from TT_TMPMEASURESOURCE;
    1     rajnish     43     54     anish     4     apple     kumar
    Creating and compiling the procedure:
    create or replace procedure tem_test
    as
    begin
    INSERT INTO tt_TmpMeasureSource
    (OBJNAME,PRSP,SOURCEID,SOURCENAME,FIX,MNAME,MDESC)
    VALUES ( v_ObjName, v_PrespectiveID, v_SourceID, v_SourceName, v_FIX, v_MName, v_MDesc );
    when compiling
    Error(63,16): PL/SQL: SQL Statement ignored
    Error(63,28): PL/SQL: ORA-00947: not enough values
    I do not understand: when a trigger is already there for the 1st column (ID),
    why is it giving me this error?
    Please help me out.
    thanks

    I got my solution.
    Creating and compiling the procedure:
    create or replace procedure tem_test
    as
    begin
    INSERT INTO tt_TmpMeasureSource
    (OBJNAME,PRSP,SOURCEID,SOURCENAME,FIX,MNAME,MDESC)
    VALUES ( v_ObjName, v_PrespectiveID, v_SourceID, v_SourceName, v_FIX, v_MName, v_MDesc );
    It was a typographical mistake; this line was missing in the original procedure:
    (OBJNAME,PRSP,SOURCEID,SOURCENAME,FIX,MNAME,MDESC)
    thanks
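    To make the cause concrete: as defined above, TT_TMPMEASURESOURCE has 8 columns (ID plus the 7 listed), so an insert that supplies only 7 values must name its target columns; otherwise Oracle expects 8 values and raises ORA-00947. A minimal illustration against the table created earlier in this thread:
    -- fails with ORA-00947: no column list, so Oracle expects a value for all 8 columns
    INSERT INTO tt_TmpMeasureSource
    VALUES ('rajnish', 43, 54, 'anish', 4, 'apple', 'kumar');
    -- works: the column list maps the 7 values, and the BEFORE INSERT trigger fills ID
    INSERT INTO tt_TmpMeasureSource (objname, prsp, sourceid, sourcename, fix, mname, mdesc)
    VALUES ('rajnish', 43, 54, 'anish', 4, 'apple', 'kumar');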
