Problem getting the correct update record count from getUpdateCount()

hi,
I am using "INSERT ... SELECT" queries to migrate data from one table to another, like:
INSERT INTO TABLE1 (COL1, COL2)
SELECT COL1, COL2, ... FROM TABLE2
WHERE COL1 = ... AND COL2 = ...;
Case 1:
I add these statements with addBatch() and at the end I call executeBatch(). Then I call getUpdateCount() on that PreparedStatement, but the count is not correct all the time.
Case 2:
If I run the same code with executeUpdate(), it returns the correct number of records inserted into the table.
I can't understand why it fails in case 1.
Can anybody tell me the reason for this behaviour?
Edited by: user11187328 on Mar 18, 2010 4:52 AM
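
For reference, here is a minimal JDBC sketch of the two cases (connection details and bind values are placeholders, not taken from the post). A driver is allowed to report Statement.SUCCESS_NO_INFO (-2) for each batched statement, so after executeBatch() the per-statement counts have to be read from the returned array rather than from getUpdateCount(); only executeUpdate() on a single statement is guaranteed to return the actual row count.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchCountDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- replace with your own.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");

        String sql = "INSERT INTO table1 (col1, col2) "
                   + "SELECT col1, col2 FROM table2 WHERE col1 = ? AND col2 = ?";

        // Case 1: batched execution. Each element of the returned array is the
        // count for one batched statement, but the driver may legally report
        // Statement.SUCCESS_NO_INFO (-2) instead of a real count.
        PreparedStatement ps = con.prepareStatement(sql);
        ps.setString(1, "A");   // placeholder bind values
        ps.setString(2, "B");
        ps.addBatch();
        int[] counts = ps.executeBatch();
        for (int c : counts) {
            if (c == Statement.SUCCESS_NO_INFO) {
                System.out.println("statement succeeded, row count unknown (-2)");
            } else {
                System.out.println("rows inserted: " + c);
            }
        }
        ps.close();

        // Case 2: single execution. executeUpdate() returns the real row count.
        PreparedStatement ps2 = con.prepareStatement(sql);
        ps2.setString(1, "A");
        ps2.setString(2, "B");
        System.out.println("rows inserted: " + ps2.executeUpdate());
        ps2.close();

        con.close();
    }
}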

hi,
Thanks again for the correct reply, but can you also tell me which jar I need to include?
There are so many jar files; should I remove the old jar files, or will the JVM automatically pick up the updated jar?
The following jar files are shown on the link:
ojdbc5.jar (1,996,228 bytes) - Classes for use with JDK 1.5. It contains the JDBC driver classes, except classes for NLS support in Oracle Object and Collection types.
ojdbc5_g.jar (3,081,328 bytes) - Same as ojdbc5.jar, except that classes were compiled with "javac -g" and contain tracing code.
ojdbc6.jar (2,111,220 bytes) - Classes for use with JDK 1.6. It contains the JDBC driver classes except classes for NLS support in Oracle Object and Collection types.
ojdbc6_g.jar (3,401,519 bytes) - Same as ojdbc6.jar except compiled with "javac -g" and contains tracing code.
ojdbc5dms.jar (2,429,777 bytes) - Same as ojdbc5.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
ojdbc5dms_g.jar (3,101,875 bytes) - Same as ojdbc5_g.jar, except that it contains instrumentation to support DMS.
ojdbc6dms.jar (2,655,741 bytes) - Same as ojdbc6.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
ojdbc6dms_g.jar (3,423,263 bytes) - Same as ojdbc6_g.jar except that it contains instrumentation to support DMS.
orai18n.jar (1,656,280 bytes) - NLS classes for use with JDK 1.5, and 1.6. It contains classes for NLS support in Oracle Object and Collection types. This jar file replaces the old nls_charset jar/zip files.
demo.zip (603,363 bytes) - contains sample JDBC programs.
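
Only one of these driver jars should be on the classpath (ojdbc5.jar for JDK 1.5 or ojdbc6.jar for JDK 1.6, plus orai18n.jar only if NLS support for Object and Collection types is needed). The JVM does not prefer the newer jar; it loads whichever matching class it finds first on the classpath, so the old jar is best removed. A small sketch (connection details are placeholders) to confirm which driver actually got loaded:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverVersionCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- replace with your own.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
        DatabaseMetaData md = con.getMetaData();
        // Reports the driver that was actually picked up from the classpath.
        System.out.println(md.getDriverName() + " " + md.getDriverVersion());
        con.close();
    }
}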

Similar Messages

  • Problem getting the update record count from executeBatch()

    hi,
    I am using "INSERT ... SELECT" queries to migrate data from one table to another, like:
    INSERT INTO TABLE1 (COL1, COL2)
    SELECT COL1, COL2, ... FROM TABLE2
    WHERE COL1 = ... AND COL2 = ...;
    Case 1:
    I add these statements with addBatch() and at the end I call executeBatch(). It returns an array of integers holding the record count for each query respectively, but that count is always -2, i.e. SUCCESS_NO_INFO.
    Case 2:
    If I run the same code with executeUpdate(), it returns the correct number of records inserted into the table.
    I can't understand why it fails in case 1.
    Can anybody tell me the reason for this behaviour?
    Edited by: user11187328 on Mar 17, 2010 3:45 AM
    Edited by: user11187328 on Mar 17, 2010 3:46 AM

    hi,
    Thanks again for the correct reply, but can you also tell me which jar I need to include?
    There are so many jar files; should I remove the old jar files, or will the JVM automatically pick up the updated jar?
    The following jar files are shown on the link:
    ojdbc5.jar (1,996,228 bytes) - Classes for use with JDK 1.5. It contains the JDBC driver classes, except classes for NLS support in Oracle Object and Collection types.
    ojdbc5_g.jar (3,081,328 bytes) - Same as ojdbc5.jar, except that classes were compiled with "javac -g" and contain tracing code.
    ojdbc6.jar (2,111,220 bytes) - Classes for use with JDK 1.6. It contains the JDBC driver classes except classes for NLS support in Oracle Object and Collection types.
    ojdbc6_g.jar (3,401,519 bytes) - Same as ojdbc6.jar except compiled with "javac -g" and contains tracing code.
    ojdbc5dms.jar (2,429,777 bytes) - Same as ojdbc5.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
    ojdbc5dms_g.jar (3,101,875 bytes) - Same as ojdbc5_g.jar, except that it contains instrumentation to support DMS.
    ojdbc6dms.jar (2,655,741 bytes) - Same as ojdbc6.jar, except that it contains instrumentation to support DMS and limited java.util.logging calls.
    ojdbc6dms_g.jar (3,423,263 bytes) - Same as ojdbc6_g.jar except that it contains instrumentation to support DMS.
    orai18n.jar (1,656,280 bytes) - NLS classes for use with JDK 1.5, and 1.6. It contains classes for NLS support in Oracle Object and Collection types. This jar file replaces the old nls_charset jar/zip files.
    demo.zip (603,363 bytes) - contains sample JDBC programs.

  • Mapping problem with compressed key update record (target format)...

    Hi Guys,
    I am getting the error below while replicating from source to target. The source table has a NOT NULL column, but on the target the replicat process reports an error about some NULL value.
    How can we overcome this issue? Any ideas?
    2011-08-04 10:35:04 INFO OGG-00995 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: REPLICAT RMASTRK starting.
    2011-08-04 10:35:05 INFO OGG-00996 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: REPLICAT RMASTRK started.
    2011-08-04 10:35:06 WARNING OGG-00869 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: OCI Error ORA-01407: cannot update ("INFRA"."CUST"."CODE") to NULL (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"DP_ID" = :a3,"EXCHNG_CODE" = :a4,"ORD_QTY" = :a5,"ORD_PRICE" = :a6,"CODE" = :a7,"MKRT_CODE" = :a8,"CHANN>.
    2011-08-04 10:35:06 WARNING OGG-01004 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Aborted grouped transaction on 'INFRA.CUST', Database error 1407 (ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL).
    2011-08-04 10:35:06 WARNING OGG-01003 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Repositioning to rba 44132192 in seqno 68708.
    2011-08-04 10:35:06 WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: SQL error 1407 mapping INFRA.CUST to INFRA.CUST OCI Error ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"DP_ID" = :a3,"EXCHNG_CODE" = :a4,"ORD_QTY" = :a5,"ORD_PRICE" = :a6,"SCRP_CODE" = :a7,"MKRT_CODE" = :a8,"CHANN>.
    2011-08-04 10:35:06 WARNING OGG-01003 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Repositioning to rba 44132192 in seqno 68708.
    2011-08-04 10:35:06 ERROR OGG-01296 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: Error mapping from INFRA.CUST to INFRA.CUST.
    2011-08-04 10:35:06 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, rmastrk.prm: PROCESS ABENDING.
    Oracle GoldenGate Delivery for Oracle process started, group RMASTRK discard file opened: 2011-08-04 10:35:05
    Current time: 2011-08-04 10:35:06
    Discarded record from action ABEND on error 1407
    OCI Error ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL
    (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"MKRT_CODE" = :a8,"CHANN>
    Aborting transaction on ./dirdat/pm beginning at seqno 68708 rba 44132192
    error at seqno 68708 rba 44132192
    Problem replicating INFRA.CUST to INFRA.CUST
    Mapping problem with compressed key update record (target format)...
    ORD_QTY = 500
    ORD_PRICE = 37430
    SCRP_CODE =
    MKRT_CODE = N
    Oracle GoldenGate Delivery for Oracle process started, group RMASTRK discard file opened: 2011-08-04 10:35:05
    Current time: 2011-08-04 10:35:06
    Discarded record from action ABEND on error 1407
    OCI Error ORA-01407: cannot update ("INFRA"."CUST"."SCRP_CODE") to NULL
    (status = 1407), SQL <UPDATE "INFRA"."CUST" SET "ORD_ID" = :a2,"MKRT_CODE" = :a8,"CHANN>
    Aborting transaction on ./dirdat/pm beginning at seqno 68708 rba 44132192
    error at seqno 68708 rba 44132192
    Problem replicating INFRA.CUST to INFRA.CUST
    Mapping problem with compressed key update record (target format)...
    ORD_QTY = 500
    ORD_PRICE = 37430
    SCRP_CODE =
    MKRT_CODE = N
    Any inputs / help would be appreciated.
    Regards,
    Manish

    The SCRP_CODE column has a NOT NULL constraint. The ORA-01407 error is telling you that you cannot update or set a value for this column to null because of the constraint. This has absolutely nothing to do with an index. You can use a marker/sentinel value in lieu of using NULL. For a numeric field, where everything is positive, a negative value (-1) can be decoded as meaning null. For a character field, a code such as NA can represent NULL.
    This also has nothing to do (directly) with GoldenGate failing because of this error. The underlying SQL statement will fail everywhere, regardless of the tool or application. It is not a case of failing only in GoldenGate.

  • Mapping problem with compressed key update record

    Hi, could you please advise?
    I'm getting the following problem:
    About a week ago the replicat abended with an "Error in mapping" error. In the discard file I found a record looking like this:
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Here filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP") and field10 = @GETENV("GGHEADER", "COMMITTIMESTAMP"); the others are table fields mapped by USEDEFAULTS.
    So I got "Mapping problem with compressed key update record" at 2012-06-01 15:44.
    I should probably mention that the extract had failed 5 minutes earlier with: VAM function VAMRead returned unexpected result: error 600 - VAM Client Report <[CFileInfo::Read] Timeout expired after 10 retries with 1000 ms delay, waiting to read transaction log or backup files. To increase the number of retries, use SETENV (GGS_CacheRetryCount = n) in Extract parameter file. To control retry delay time, use SETENV (GGS_CacheRetryDelay = n). handle: 0000000000000398 ReadFile GetLastError:997 Wait GetLastError:997>.
    I don't know whether this has the same cause as the data corruption; could you tell me if it does?
    I created a new extract starting at 2012-06-01 15:30 to check whether something was wrong with the extract at that time, but got the same error.
    If I start the extract beginning at 15:52, it starts and works.
    But I got another one today. The data didn't look that bad, but one column still came through with a null value, and I'm using it as a key column, so I got "Mapping problem with compressed key update record" again.
    I'm replicating from SQL Server 2008 to Oracle 11g.
    I'm actually using NOCOMPRESSUPDATES in the Extract.
    CDC is enabled for all replicated tables. The only thing is that it was enabled not by the ADD TRANDATA command but by SQL Server sys.sp_cdc_enable_table; does that matter?
    Could you please advise why this happens?

    Well, the problem begins somewhere in the extract or before the extract, maybe in the transaction log, I don't know.
    Here are the extract parameters:
    EXTRACT ETCHECK
    TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
    SOURCEDB TEST, USERID **, PASSWORD *****
    exttrail ./dirdat/ec
    NOCOMPRESSUPDATES
    NOCOMPRESSDELETES
    TABLE tst.table1, COLS (field1, field2, field3, field4, field5, field6, field7, field8 );
    TABLE tst.table2, COLS (field1, field2, field3, field4 );
    Data pump:
    EXTRACT DTCHECK
    SOURCEDB TEST, USERID **, PASSWORD *****
    RMTHOST ***, MGRPORT 7809
    RMTTRAIL ./dirdat/dc
    TABLE tst.table1;
    TABLE tst.table2;
    Replicat:
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    Rpt file for replicat:
    Oracle GoldenGate Delivery for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows x64 (optimized), Oracle 11g on Apr 22 2011 00:34:07
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2012-06-05 12:49:38
    Operating System Version:
    Microsoft Windows Server 2008 R2 , on x64
    Version 6.1 (Build 7601: Service Pack 1)
    Process id: 2264
    Description:
    ** Running with the following parameters **
    REPLICAT rtcheck
    USERID tst, PASSWORD ***
    DISCARDFILE ./dirrpt/rtcheck.txt, PURGE
    SOURCEDEFS ./dirdef/sourcei.def
    HANDLECOLLISIONS
    UPDATEDELETES
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    MAP dbo.TPROCPERIODCONFIRMSTAV, TARGET R_019_000001.TPROCPERIODCONFIRMSTAV, COLMAP (USEDEFAULTS , field5 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed6= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (filed1, field2, field3);
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 512M
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1G
    CACHESIZEMAX (strict force to disk): 881M
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    Database Language and Character Set:
    NLS_LANG = "AMERICAN_AMERICA.CL8MSWIN1251"
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "CL8MSWIN1251"
    For further information on character set settings, please refer to user manual.
    ** Run Time Messages **
    Opened trail file ./dirdat/dc000000 at 2012-06-05 12:49:39
    2012-06-05 12:58:14 INFO OGG-01020 Processed extract process RESTART_ABEND record at seq 0, rba 925 (aborted 0 records).
    MAP resolved (entry tst.table1):
    MAP tst.table1, t.table1, COLMAP (USEDEFAULTS , filed9 = @GETENV("GGHEADER", "COMMITTIMESTAMP"), filed10= @CASE(@GETENV("GGHEADER", "OPTYPE"), "SQL COMPUPDATE", "U", "PK UPDATE", "U",@GETENV("GGHEADER", "OPTYPE")) ), KEYCOLS (field3);
    2012-06-05 12:58:14 WARNING OGG-00869 No unique key is defined for table table1. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
    Using the following default columns with matching names:
    field1=field1, field2=field2, field3=field3, field4=field4, field5=field5, field6=field6, field7=field7, field8=field8
    Using the following key columns for target table R_019_000001.TCALCULATE: field3.
    2012-06-05 12:58:14 WARNING OGG-01431 Aborted grouped transaction on 'tst.table1', Mapping error.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    2012-06-05 12:58:14 WARNING OGG-01151 Error mapping from tst.table1 to tst.table1.
    2012-06-05 12:58:14 WARNING OGG-01003 Repositioning to rba 987 in seqno 0.
    Source Context :
    SourceModule : [er.main]
    SourceID : [er/rep.c]
    SourceFunction : [take_rep_err_action]
    SourceLine : [16064]
    ThreadBacktrace : [8] elements
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x143034) [0x00000001402192B4]]
    : [C:\App\OGG\replicat.exe(ERCALLBACK+0x11dd44) [0x00000001401F3FC4]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x000000014009F102]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B29CC]]
    : [C:\App\OGG\replicat.exe(<RCALLBACK+0x11dd44) [0x00000001400B8887]]
    : [C:\App\OGG\replicat.exe(releaseCProcessManagerInstance+0x25250) [0x000000014028F200]]
    : [C:\Windows\system32\kernel32.dll(BaseThreadInitThunk+0xd) [0x000000007720652D]]
    : [C:\Windows\SYSTEM32\ntdll.dll(RtlUserThreadStart+0x21) [0x000000007733C521]]
    2012-06-05 12:58:14 ERROR OGG-01296 Error mapping from tst.table1 to tst.table1.
    ** Run Time Statistics **
    Last record for the last committed transaction is the following:
    Trail name : ./dirdat/dc000000
    Hdr-Ind : E (x45) Partition : . (x04)
    UndoFlag : . (x00) BeforeAfter: A (x41)
    RecLength : 249 (x00f9) IO Time : 2012-06-01 15:48:56.285333
    IOType : 115 (x73) OrigNode : 255 (xff)
    TransInd : . (x03) FormatType : R (x52)
    SyskeyLen : 0 (x00) Incomplete : . (x00)
    AuditRBA : 44 AuditPos : 71176199289771
    Continued : N (x00) RecCount : 1 (x01)
    2012-06-01 15:48:56.285333 GGSKeyFieldComp Len 249 RBA 987
    Name: DBO.TCALCULATE
    Reading ./dirdat/dc000000, current RBA 987, 0 records
    Report at 2012-06-05 12:58:14 (activity since 2012-06-05 12:58:14)
    From Table tst.table1 to tst.table1:
    # inserts: 0
    # updates: 0
    # deletes: 0
    # discards: 1
    Last log location read:
    FILE: ./dirdat/dc000000
    SEQNO: 0
    RBA: 987
    TIMESTAMP: 2012-06-01 15:48:56.285333
    EOF: NO
    READERR: 0
    2012-06-05 12:58:14 ERROR OGG-01668 PROCESS ABENDING.
    Discard file:
    Oracle GoldenGate Delivery for Oracle process started, group RTCHECK discard file opened: 2012-06-05 12:49:39
    Key column filed3 (0) is missing from update on table tst.table1
    Missing 1 key columns in update for table tst.table1.
    Current time: 2012-06-05 12:58:14
    Discarded record from action ABEND on error 0
    Aborting transaction on ./dirdat/dc beginning at seqno 0 rba 987
    error at seqno 0 rba 987
    Problem replicating tst.table1 to tst.table1
    Mapping problem with compressed key update record (target format)...
    filed1 = NULL
    field2 =
    field3 =
    field4 =
    field5 =
    datefield = -04-09 00:00:00
    field6 =
    field8 =
    field9 = NULL
    field10 =
    Process Abending : 2012-06-05 12:58:14

  • Getting Duplicate data Records error while loading the Master data.

    Hi All,
    We are getting a "Duplicate data records" error while loading the Profit Centre master data. The master data contains time-dependent attributes.
    The load is a direct update, so I set the request to red and tried to reload from the PSA, but it still throws the same error.
    I checked in the PSA; it shows in red the records that have the same profit centre.
    Could anyone give us suggestions to resolve this issue, please?
    Thanks & Regards,
    Raju

    Hi Raju,
    I assume there are no routines written in the update rules and also that you are loading the data directly from R/3 (not from any ODS). If that is the case then it could be that the data maintained in R/3 has overlapping time intervals (since time dependency of attributes is involved). Check your PSA to see whether the same profit centre has time intervals which overlap. In that case you need to get this fixed in R/3. If there are no overlapping time intervals, you can simply increase the error tolerance limit in your InfoPackage and repeat the load.
    Hope this helps you.
    Thanks & Regards,
    Nithin Reddy.

  • How to get the displayed record count through SQL*Plus without the result set

    set lines 155
    set pages 100
    set autoprint on
    variable cv refcursor
    set serveroutput on size 1000000
    set timing on
    set feedback on
    set echo on
    exec proc_name (input1, input2, :cv);
    How do I get the record count without displaying the result set at the SQL*Plus prompt?
    Please help me.

    This is my earlier code:
    set lines 155
    set pages 100
    set autoprint on
    variable cv refcursor
    set serveroutput on size 1000000
    set timing on
    set feedback on
    set echo on
    exec proc_name (input1, input2, :cv);
    Then I tried to execute it like this:
    declare
    disp SYS_REFCURSOR;
    cv SYS_REFCURSOR;
    cnt number :=0;
    begin
    proc_name (input1, input2, :cv);
    FOR disp in cv --here cv is the set of record set
    LOOP
    --FETCH cv INTO disp;
    EXIT WHEN cv%NOTFOUND;
    cnt := cnt + 1;
    END LOOP;
    dbms_output.put_line(cnt);
    dbms_output.put_line(cv%rowcount);
    CLOSE cv;
    end;
    and I am getting this error:
    LOOP
    ERROR at line 8:
    ORA-06550: line 8, column 2:
    PLS-00103: Encountered the symbol "LOOP" when expecting one of the following:
    . ( % ; for
    The symbol ";" was inserted before "LOOP" to continue.
    ORA-06550: line 13, column 2:
    PLS-00103: Encountered the symbol "DBMS_OUTPUT"
    ORA-06550: line 13, column 27:
    PLS-00103: Encountered the symbol ";" when expecting one of the following:
    . ( , * % & - + / at mod rem <an identifier>
    <a double-quoted delimited-identifier> <an exponent (**)> as
    from into || bulk
    I have a set of executable procedure scripts of the form exec procedure1(input1, input2, :cv); and so on, but I want only the record count while we execute all these scripts at the SQL prompt. How can I do that?

  • How to get a total record count before grouping?

    I need to group a report on a formula that does roughly the following:
    [record count] / ( [total record count] / 20 )
    What this achieves is to label each record with a number from 1 to 20, which I want to group on. Getting this figure is the easy part; what is not working is that I cannot group on a formula that is calculated after grouping. I overcame a portion of this by using WhileReadingRecords (rather than Count, running totals, or WhilePrintingRecords) to obtain a record count.
    I can't figure out how to get a total record count before grouping. Is there a way to do this with WhileReadingRecords? Is this even possible?
    Thanks
    John

    Hi John,
    The order in which Crystal does things dictates which features you can use. Crystal has a two-pass method. In the first pass it does things like passing the query to the database, grouping, and summarizing. In the second pass it does formulas, formatting, etc.
    Unfortunately Crystal does the grouping before summarizing, so what you want to do can't be done in Crystal. The best way to get around this is to create a SQL Command or a view/stored procedure that does the summarizing for you. Then you can use it in the report.
    Hope this helps,
    Brian

  • How to get only updated records for a column using loading type INSERT

    Hi,
    Good morning all,
    I have source1 containing 3 columns (bill_cd, bill_desc, bill_date) and
    source2 has the columns bill_cd, bill_key, source_id.
    My target has the columns bill_cd, bill_date, bill_desc.
    Now the requirement is that bill_cd in the target should not be repeated when we run the mapping more than one time. It should pick up only the new records, not the previously loaded ones, using only the INSERT loading type for the target (not update/insert).
    How can we achieve this logic at the mapping level?
    Can anybody please give me a solution?
    Thanks in advance,
    Siv

    Thanks Herzog for your reply.
    Here bill_cd is not unique. Yes, I want only new records using INSERT as the loading type. Suppose that when the map runs for the first time, bill_cd is loaded with values 1 to 5.
    Now in the source I have new records 6 to 10 for bill_cd, and when I run the mapping again I need to get only the records for bill_cd 6 to 10, using INSERT as the loading type.
    Is it possible to achieve this at the mapping level?
    Regards,
    Siv.

  • How to get the total record count in ODI

    Hi
    I have an interface that loads from a file to the DB.
    The file format is like this:
    HEADER
    DETAIL
    TRAILER
    Now we write the contents of the file to the DB,
    but I have to insert the total count, i.e. the number of records written from the file to the DB, into my trailer record.
    Can you tell me how I can get the total count of records in the file and write it to the trailer?
    Also, I want the interface to roll back the data if something fails while loading the data from the file, i.e. if there are 100 records in the file and 50 got transferred and then something fails, I want to roll back those 50 records from the DB.
    Thanks :)

    Hi
    You can design a flow for a full load and an incremental load from the flat file to the table.
    Create a table in the target database (create the table with a last_execution row and place the V_FULL_LOAD value and LAST_EXECUTION_DT columns in the last_execution table).
    Add the flat file as a table in the model, create a variable V_FULL_LOAD, and make sure the default value is 01-01-1900.
    Create one more variable, V_LAST_EXECUTION_DATE (in this variable write a case statement: if the V_FULL_LOAD value is 'Y' then a full load should happen, and at the same time you should check whether the V_FULL_LOAD column is blank; if so write an insert statement, else write an update statement to update the LAST_EXECUTION_DT column; similarly for 'N').
    Please provide your personal mail ID and I will send a doc file related to your query.
    There are two tables in the work repository (SNP_STEP and STEP_LOG); using these tables we can get how many records were inserted/updated and find how many records were not transferred and got errors.
    Thanks
    Phani

  • Getting OS File Record Count

    Hi,
    I've been looking for a way to get the record count (number of lines) of a txt file on the OS from an Oracle procedure without having to loop through the file line by line with UTL_FILE.GET_LINE. If anyone has a one-shot method to get the number of lines of an OS file through PL/SQL without using loops, please let me know.
    I appreciate your inputs on this matter.
    Thiago Santana

    Hi,
    Simple example using external tables (I have an external directory called EXT_FILES where HR user can read and write; the test.txt file is also in this directory):
    File: test.txt
    1 hi
    2
    3 how
    4
    5    are
    6
    7
    8
    9
    10 you.
    External table creation:
    CREATE TABLE test_ext (TEXT VARCHAR2(4000))
    ORGANIZATION EXTERNAL (
    TYPE oracle_loader
    DEFAULT DIRECTORY EXT_FILES
    ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      NOBADFILE NODISCARDFILE NOLOGFILE
      FIELDS TERMINATED BY '0x0A'
      MISSING FIELD VALUES ARE NULL)
    LOCATION ('test.txt'))
    REJECT LIMIT unlimited;
    Test:
    Connected to Oracle Database 10g Express Edition Release 10.2.0.1.0
    Connected as hr
    SQL> select count(*) from test_ext t;
      COUNT(*)
            10
    SQL>
    Regards,
    Edited by: Walter Fernández on May 11, 2009 11:35 PM - Adding line numbers to test file...

  • Problem in getting correct data in DSO although routine is working!

    I have two DSOs, DSO03 and DSO04, and in between a transformation where the 0Order_Val field is calculated using the routine code below:
    IF ( COMM_STRUCTURE-PROCESSKEY = '001' or  
        COMM_STRUCTURE-PROCESSKEY = '011' or
        COMM_STRUCTURE-PROCESSKEY = '021' or
        COMM_STRUCTURE-PROCESSKEY = '004' or    
        COMM_STRUCTURE-PROCESSKEY = '014' or
        COMM_STRUCTURE-PROCESSKEY = '024' )
        AND COMM_STRUCTURE-BWAPPLNM EQ 'MM'
        AND COMM_STRUCTURE-ORDER_VAL <> 0.
        perFORM QUANTITY_CONVERT
           USING    COMM_STRUCTURE-ORDER_VAL
                    COMM_STRUCTURE-po_UNIT
                    COMM_STRUCTURE-base_uom
                    COMM_STRUCTURE-numerator
                    COMM_STRUCTURE-denomintr
           CHANGING RESULT.
    Problem:
    For a few doc numbers the quantity conversion fetches correct data after applying the QUANTITY_CONVERT subroutine.
    But for some, although it is not supposed to return 0 after applying the subroutine,
    it shows the wrong result "0" in DSO04,
    although there is a value in DSO03 for the field 0Order_Val.
    Example:
    Doc num       Field name    DSO03 value    DSO04 value    Remarks
    4500000089    0Order_Val    3686.21        3686.21        Correct
    4500000084    0Order_Val    500000         0              Wrong
    While debugging the ABAP routine with breakpoints, I found that the fetched result is correct (i.e. 500000 for doc_num 4500000084), but when displaying the data in DSO04 it shows "0".
    I have checked the input parameters and they are all the same.
    The only exception is a difference in currency between the two records:
    one is USD (doc num 4500000089) and the other is ESD (doc num 4500000084),
    although currency is not an input parameter.
    I have also tried the overwrite/summation setting in the transformation, but with no result.
    If anyone can suggest a solution to this peculiar problem, it will be very helpful.
    Regards,
    Ritu.

    Hi,
    it is a bug in Acrobat Reader. I had the same problem. Just add an input field called "myIndex" into the row and store the current index there when creating the line. With this solution you can work programmatically with your own created index without the Acrobat Reader bug.
    Take care,
    Thomas

  • Problem getting the correct value for Message Id in XI (inbound channel)

    Hi Experts
    I have an XI scenario, i.e. SOAP to RFC.
    I am calling an RFC and getting a response which contains a message-id field (raw data).
    But when the response arrives in the inbound channel, I get a junk value for the message id.
    In the RFC, the data element for the message id is SXMSMGUID (data type RAW, number of characters 16, output length 32).
    I am accessing some RFC functions from XI which return parameters in RAW format. [RAW: uninterpreted byte string.]
    For example: if I execute the RFC from the ABAP system (using transaction SE37), one of the results is "5ECD6F4D6C6E3242921025FE74AC5153".
    When I call the RFC from XI, the response for the same parameters is "Xs1vTWxuMkKSECX+dKxRUw==".
    Is there any way to get the RAW data in the correct format?
    When I import the RFC into XI, its data type becomes xsd:base64Binary.
    I created one customized data element with data type RAW (32 length) and even CHAR (32-50 length).
    In this case the RFC gives the correct value, but when the scenario runs in XI, the XI inbound channel receives wrong data,
    and the values and positions of other fields are also disturbed.
    Thanks in advance.

    Hi
    Check this forum post; it's the same problem as yours:
    Re: Problem in RFC Lookup UDF in getting MessageID
    It was fixed by changing the data type in the FM to something other than RAW.
    Also see:
    Data type RAW imported to ABAP from Java
    Regards
    Vishnu
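
    As an illustration, the two values quoted in the question are the same 16 bytes in different encodings: the imported xsd:base64Binary field is simply the base64 form of the RAW bytes that SE37 displays as hex. A minimal Java sketch (using java.util.Base64, available from Java 8 onwards) that converts the reported value back:

    import java.util.Base64;

    public class RawGuidDecode {
        public static void main(String[] args) {
            // Value reported by the XI inbound channel (base64-encoded RAW bytes).
            String base64 = "Xs1vTWxuMkKSECX+dKxRUw==";

            // Decode and print as hex; this yields the value SE37 shows for the RAW field:
            // 5ECD6F4D6C6E3242921025FE74AC5153
            byte[] raw = Base64.getDecoder().decode(base64);
            StringBuilder hex = new StringBuilder();
            for (byte b : raw) {
                hex.append(String.format("%02X", b));
            }
            System.out.println(hex);
        }
    }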

  • How to get the total record count for the report

    Hi,
    How can I get the count of the total records shown in a report? When we set the report attributes, we have the option "Set Pagination from X to Y of Z".
    Does anyone know how I can get the Z value from APEX variables?
    I know we can use the query and get the count, but I just want to know how we can use APEX variables effectively.
    Thanks in advance.

    You write a loop, something like this:
    Go_block('B1');
    If not form_success then
      Raise Form_Trigger_failure;
    End if;
    First_Record;
    If not form_success then
      Raise Form_Trigger_failure;
    End if;
    Loop
      If :system.record_status in('CHANGED','INSERT') then
        -- modify the record here--
      End if;
      Exit when :System.Last_Record = 'TRUE';
      Next_Record;
    End Loop;
    First_Record;
    But be very careful: if your block can fetch a large number of rows (over 100), this loop can take a long time and you should not use this method. The loop will continue fetching more rows from the database until all rows satisfying the query are retrieved.

  • Problem when another user updates a record

    Hi All,
    I am using JDeveloper 11.1.1.5.0.
    Use case: I have created an ADF table based on a VO with a column claimed_by.
    I have added a where clause in the VO:
    where claimed_by is null
    The default value of claimed_by is null. When a user claims a particular record, claimed_by contains the id of that user.
    Now the problem is that two users can access the same VO at the same time. Suppose the first user claims the record with id 1; then the claimed_by column of the table is updated with 1,
    but in the second user's window the same record is still shown.
    So if a record has already been claimed by another user and the second user attempts to claim that same record in the meantime, I want a proper error message shown to the user, like "this record is already claimed by another user".
    Is this possible with a change indicator on the claimed_by attribute, or a history column?
    Or is there any other way to do this?

    Hi Arun,
    I have already read this blog, and I know about JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x],
    but in my use case I don't want this exception to occur; that's why I have set the property "reset after update and insert",
    but it does not work for me.
    My concern is this:
    when one user claims the record and sets the claimed_by column, and afterwards a second user claims the same record, a proper error message should be displayed to that user.

  • To get (n-1) record count

    Hello,
    I have two levels of group fields in my report. I count the total number of records at both levels. But what we really need at level two is (total records - 1), i.e. (n-1) records to be displayed.
    How do I get this count?
    Thanks in advance, and I appreciate your time.
    Arun.

    You can create a summary column Countrec, with a count function. Then create a formula column CF_1:
    function CF_1Formula return Number is
    begin
      return :Countrec - 1;
    end;
    Instead of displaying Countrec, you display CF_1.
