UDT Lob buffer overflow in Initial Load

Hi All,
I'm doing an initial load for a table that has an ORDIMAGE column. Here are my PRM file details:
====================================================================
EXTRACT INIT_FFU
-- ENVIRONMENT PROFILES
setenv (ORACLE_SID = "trfdv")
setenv (NLS_LANG = "AMERICAN_AMERICA.AR8ISO8859P6")
setenv (ORACLE_HOME = "/u01/oracle/product/10.2.0/db_2")
SETENV (NLS_DATE_FORMAT = "DD/MM/YYYY HH24:MI:SS")
-- DATABASE LOGIN
USERID gg, PASSWORD *****
-- GG PARAMETER CONFIGURATION
RMTHOST <remote_host> , MGRPORT 7809 , TCPBUFSIZE 200000000, TCPFLUSHBYTES 200000000 , COMPRESS
RMTFILE /u04/GG_TRAILS/ff , MAXFILES 9999 ,  MEGABYTES 100 , PURGE , FORMAT RELEASE 11.2
DBOPTIONS LOBBUFSIZE 10485760
STATOPTIONS RESETREPORTSTATS
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 10 SECONDS, RATE
DISCARDFILE /u01/GG/dirrpt/ffu.dsc, APPEND
-- FFU TABLES
TABLE TRAFFIC.TF_FFU_RADAR_PICTURES;
====================================================================
The issue is that the extract fails right away with the following error:
UDT Lob buffer overflow, needed: 19920358, allocated: 10485760
I know that 10485760 is the maximum allowed for LOBBUFSIZE, so how can I resolve this issue?
GG version is 11.2.1.0.1
DB version is 10.2.0.4
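For reference, to see how far the largest image actually exceeds that 10 MB buffer, the BLOB stored inside the ORDIMAGE column can be measured directly. A minimal sketch (PICTURE is a hypothetical column name, not taken from the table above):

    -- Hypothetical column name PICTURE: an ORDImage keeps its data in source.localData (a BLOB)
    SELECT MAX(DBMS_LOB.GETLENGTH(t.picture.source.localdata)) AS max_image_bytes
    FROM   traffic.tf_ffu_radar_pictures t;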
Thanks In Advance

Any Luck ...

Similar Messages

  • ADL buffer overflow crash when loading external module

    Hi,
    ADL version: 1.5.3
    Flex SDK version: 3.5
    Flash Builder version: 4.5
    OS: Vista 64bit
    I have a project that uses various external Flex modules at runtime.
    With Flash Builder 4, the Debug Launcher runs fine when compiling on a Windows XP box. The app behaves correctly, no crash.
    When debugging the same project on a Vista 64bit using Flash Builder 4.5, all seems fine until I load an external module. At that point, the adl.exe process crashes with the following report:
    Problem signature:
      Problem Event Name: BEX
      Application Name: adl.exe
      Application Version: 1.5.3.9120
      Application Timestamp: 4b06f734
      Fault Module Name: Adobe AIR.dll
      Fault Module Version: 2.6.0.19120
      Fault Module Timestamp: 4d7a8030
      Exception Offset: 005ef27a
      Exception Code: c0000417
      Exception Data: 00000000
      OS Version: 6.0.6002.2.2.0.768.3
      Locale ID: 4105
      Additional Information 1: 6495
      Additional Information 2: 5d54e5c02589ce8bdd8a34774c75928b
      Additional Information 3: 819d
      Additional Information 4: edcc4aab88b7f32eab72edc2ecc4f717
    BEX indicates a buffer overflow exception.
    Does anyone have any clue why ADL would crash with such a configuration?
    Thank you,
    Martin

    Hi
    I have the same problem.
    Flex version 4.6
    AIR: 3.1
    Fb: 4.5
    code to reproduce:
    <?xml version="1.0" encoding="utf-8"?>
    <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                           xmlns:s="library://ns.adobe.com/flex/spark"
                           xmlns:mx="library://ns.adobe.com/flex/mx"
                           width="1200" height="800" minWidth="1200" minHeight="800"
                           applicationComplete="applicationConpleteHandler(event)"
                           resizing="windowedapplication1_resizingHandler(event)">
        <fx:Declarations>
            <!-- Place non-visual elements (e.g., services, value objects) here -->
        </fx:Declarations>
        <fx:Script>
            <![CDATA[
                import flash.display.Loader;
                import flash.events.NativeWindowBoundsEvent;
                import flash.filesystem.File;
                import flash.net.URLRequest;
                import mx.controls.Alert;
                import mx.events.FlexEvent;

                private var loader:Loader;

                protected function applicationConpleteHandler(event:FlexEvent):void
                {
                    // Size the loader to the stage and add it to the display list
                    loader = new Loader();
                    loader.width = this.stage.width;
                    loader.height = this.stage.height;
                    stage.addChild(loader);
                    // Load the external module SWF (the crash occurs on this load)
                    var f:File = new File("C:/Users/Мурзик/AppData/Roaming/com.oreilly.WhiteSpace/Local Store/sandler/sandler_0.2.37.swf");
                    var urlReq:URLRequest = new URLRequest("C:/Users/username/AppData/Roaming/flash_file.swf");
                    loader.load(urlReq);
                }

                protected function windowedapplication1_resizingHandler(event:NativeWindowBoundsEvent):void
                {
                    //loader.width = this.stage.width;
                    //loader.height = this.stage.height;
                }
            ]]>
        </fx:Script>
    </s:WindowedApplication>
    Is there any solution?

  • Initial load with LOBs

    Hi, I'm trying to do an initial load and I keep getting errors like these:
    ERROR OGG-01192 Oracle GoldenGate Capture for Oracle, ext1.prm: Trying to use RMTTASK on data types which may be written as LOB chunks (Table: 'TESTDB.BLOBTABLE').
    ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
    The table looks like this:
    COLUMN_NAME  DATA_TYPE            NULLABLE  DATA_DEFAULT  COLUMN_ID  COMMENTS
    UUID         VARCHAR2(32 BYTE)    No                      1
    DESCRIPTION  VARCHAR2(2000 BYTE)  Yes                     2
    CONTENT      BLOB                 Yes                     3
    I've checked that the source database does contain data in the BLOB table, and both databases have the same tables, so I have no idea what could be wrong.

    For initial loads with LOBs, use a RMTFILE and a normal replicat. There are a number of things that are not supported with RMTTASK. A RMTFILE is basically the same format as a RMTTRAIL file, but is specifically for initial loads or other captured data that is not a continuous stream. Also make sure you have a newer build of GG (either v11 or the latest 10.4 from the support site).
    The 'extract' would look something like this:
    ggsci> add extract e1aa, sourceIsTable
    ggsci> edit param e1aa
    extract e1aa
    userid ggs, password ggs
    -- either local or remote
    -- extFile dirdat/aa, maxFiles 999999, megabytes 100
    rmtFile dirdat/aa, maxFiles 999999, megabytes 100
    Table myschema1.*;
    Table myschema2.*;
    Then on the target, use a normal 'replicat' to read the "files".
    Note that if the source and target are both oracle, this is not the most efficient way to instantiate the target. Using export/import or backup/restore (or any other mechanism) would usually be preferable.
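    If the GoldenGate file-based route is used anyway, the target side is just a normal replicat reading those files. A minimal sketch, assuming the files were written to dirdat/aa on the target, the source and target definitions are identical, and the schema names match:
        ggsci> add replicat r1aa, extTrail dirdat/aa
        ggsci> edit param r1aa
        replicat r1aa
        userid ggs, password ggs
        assumeTargetDefs
        -- map every loaded table straight across
        map myschema1.*, target myschema1.*;
        map myschema2.*, target myschema2.*;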

  • Initial load failing between identical tables. DEFGEN skewed and fixable?

    Error seen:
    2013-01-28 15:23:46 WARNING OGG-00869 [SQL error 0 (0x0)][HP][ODBC/MX Driver] DATETIME FIELD OVERFLOW. Incorrect Format or Data. Row: 1 Column: 11.
    Then I compared the discard record against a SELECT * on the key column.
    Mapping problem with insert record (target format)...
    **** Comparing Discard contents to Select * display
    ABCHID = 3431100001357760616974974003012 = 3431100001357760616974974003012
    *!!! ABCHSTEPCD = 909129785 <> 9 ???*
    ABCHCREATEDDATE = 2013-01-09 13:43:36 = 2013-01-09 13:43:36
    ABCHMODIFIEDDATE = 2013-01-09 13:43:36 = 2013-01-09 13:43:36
    ABCHNRTPUSHED = 0 = 0
    ABCHPRISMRESULTISEVALUATED = 0 = 0
    SABCHPSEUDOTERM = 005340 = 005340
    ABCHTERMID = TERM05 = TERM05
    ABCHTXNSEQNUM = 300911112224 = 300911112224
    ABCHTIMERQSTRECVFROMACQR = 1357799914310 = 1357799914310
    *!!! ABCTHDATE = 1357-61-24 00:43:34 <> 2013-01-09 13:43:34*
    ABCHABCDATETIME = 2013-01-09 13:43:34.310000 = 2013-01-09 13:43:34.310000
    ABCHACCOUNTABCBER =123ABC = 123ABC
    ABCHMESSAGETYPECODE = 1210 = 1210
    ABCHPROCCDETRANTYPE = 00 = 00
    ABCHPROCCDEFROMACCT = 00 = 00
    ABCHPROCCDETOACCT = 00 = 00
    ABCHRESPONSECODE = 00 = 00
    …. <snipped>
    DEFGEN output comes out the same when run against either table.
    I have also copied over and tried both outputs from DEFGEN.
    +- Defgen version 2.0, Encoding ISO-8859-1
    * Definitions created/modified 2013-01-28 15:00
    * Field descriptions for each column entry:
    * 1 Name
    * 2 Data Type
    * 3 External Length
    * 4 Fetch Offset
    * 5 Scale
    * 6 Level
    * 7 Null
    * 8 Bump if Odd
    * 9 Internal Length
    * 10 Binary Length
    * 11 Table Length
    * 12 Most Significant DT
    * 13 Least Significant DT
    * 14 High Precision
    * 15 Low Precision
    * 16 Elementary Item
    * 17 Occurs
    * 18 Key Column
    * 19 Sub Data Type
    Database type: SQLMX
    Character set ID: ISO-8859-1
    National character set ID: UTF-16
    Locale: en_EN_US
    Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
    Definition for table RT.ABC
    Record length: 1311
    Syskey: 0
    Columns: 106
    ABCHID 64 34 0 0 0 0 0 34 34 34 0 0 32 32 1 0 1 3
    ABCHSTEPCD 132 4 39 0 0 0 0 4 4 4 0 0 0 0 1 0 0 0
    ABCHCREATEDDATE 192 19 46 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHMODIFIEDDATE 192 19 68 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHNRTPUSHED 130 2 90 0 0 0 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPRISMRESULTISEVALUATED 130 2 95 0 0 0 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPSEUDOTERM 0 8 100 0 0 0 0 8 8 8 0 0 0 0 1 0 0 0
    ABCTERMID 0 16 111 0 0 0 0 16 16 16 0 0 0 0 1 0 0 0
    ABCHTXNSEQNUM 0 12 130 0 0 0 0 12 12 12 0 0 0 0 1 0 0 0
    ABCHTIMERQSTRECVFROMACQR 64 24 145 0 0 0 0 24 24 24 0 0 22 22 1 0 0 3
    ABCTHDATE 192 19 174 0 0 0 0 19 19 19 0 5 0 0 1 0 0 0
    ABCHABCDATETIME 192 26 196 0 0 1 0 26 26 26 0 6 0 0 1 0 0 0
    ABCHACCOUNTABCER 0 19 225 0 0 1 0 19 19 19 0 0 0 0 1 0 0 0
    ABCHMESSAGETYPECODE 0 4 247 0 0 1 0 4 4 4 0 0 0 0 1 0 0 0
    ABCHPROCCDETRANTYPE 0 2 254 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPROCCDEFROMACCT 0 2 259 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHPROCCDETOACCT 0 2 264 0 0 1 0 2 2 2 0 0 0 0 1 0 0 0
    ABCHRESPONSECODE 0 5 269 0 0 1 0 5 5 5 0 0 0 0 1 0 0 0
    … <snipped>
    The physical table shows a PACKED REC 1078
    And table invoke is:
    -- Definition of table ABC3.RT.ABC
    -- Definition current Mon Jan 28 18:20:02 2013
    ABCHID NUMERIC(32, 0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHSTEPCD INT NO DEFAULT HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHCREATEDDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHMODIFIEDDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHNRTPUSHED SMALLINT DEFAULT 0 HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHPRISMRESULTISEVALUATED SMALLINT DEFAULT 0 HEADING '' NOT NULL NOT
    DROPPABLE
    , ABCHPSEUDOTERM CHAR(8) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTERMID CHAR(16) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTXNSEQNUM CHAR(12) CHARACTER SET ISO88591 COLLATE
    DEFAULT NO DEFAULT HEADING '' NOT NULL NOT DROPPABLE
    , ABCHTIMERQSTRECVFROMACQR NUMERIC(22, 0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCTHDATE TIMESTAMP(0) NO DEFAULT HEADING '' NOT
    NULL NOT DROPPABLE
    , ABCHABCDATETIME TIMESTAMP(6) DEFAULT NULL HEADING ''
    , ABCHACCOUNTNABCBER CHAR(19) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHMESSAGETYPECODE CHAR(4) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDETRANTYPE CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDEFROMACCT CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHPROCCDETOACCT CHAR(2) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    , ABCHRESPONSECODE CHAR(5) CHARACTER SET ISO88591 COLLATE
    DEFAULT DEFAULT NULL HEADING ''
    …. Snipped
    I suspect that the fields having sub data type 3 just before the garbled columns are a clue, but I'm not sure what to replace or adjust.
    Any and all help is mightily appreciated.

    Worthwhile suggestion; I'm just having difficulty applying it.
    I will tinker with it more, but I'm still open to more suggestions.
    =-=-=-=-
    Oracle GoldenGate Delivery for SQL/MX
    Version 11.2.1.0.1 14305084
    NonStop H06 on Jul 11 2012 14:11:30
    Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.
    Starting at 2013-01-31 15:19:35
    Operating System Version:
    NONSTOP_KERNEL
    Version 12, Release J06
    Node: abc3
    Machine: NSE-AB
    Process id: 67895711
    Description:
    ** Running with the following parameters **
    2013-01-31 15:19:40 INFO OGG-03035 Operating system character set identified as ISO-8859-1. Locale: en_US_POSIX, LC_ALL:.
    Comment
    Comment
    REPLICAT lodrepx
    ASSUMETARGETDEFS
    Source Context :
    SourceModule : [er.init]
    SourceID : [home/ecloud/sqlmx_mlr14305084/src/app/er/init.cpp]
    SourceFunction : [get_infile_params]
    SourceLine : [2418]
    2013-01-31 15:19:40 ERROR OGG-00184 ASSUMETARGETDEFS is not supported for SQL/MX ODBC replicat.
    2013-01-31 15:19:45 ERROR OGG-01668 PROCESS ABENDING.
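    Since the report shows that ASSUMETARGETDEFS is not supported for the SQL/MX ODBC replicat, the usual alternative is to point the replicat at a DEFGEN-generated definitions file with SOURCEDEFS instead. A minimal sketch (dirdef/rtabc.def is a hypothetical name for the DEFGEN output copied to the target):
        replicat lodrepx
        -- connection settings unchanged from the existing lodrepx parameter file
        sourcedefs dirdef/rtabc.def
        map RT.ABC, target RT.ABC;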

  • Short dump in SAP R/3: SQL statement buffer overflow?

    Hello,
    I hope someone can help us with the following problem:
    A short dump in SAP R/3 (DBIF_RSQL_INVALID_RSQL, CX_SY_OPEN_SQL_DB)
    occurred during a delta load, which had worked fine for several months.
    The custom code crashes at a FETCH NEXT CURSOR statement.
    I assume it might be a SQL statement buffer overflow in SAP R/3?
    The problem can be reproduced by RSA3, and is therefore not time-dependent.
    The problem did not occur before or on the quality assurance system.
    Cursor code:
    * Read all entries since last transfer (delta mechanism)
            OPEN CURSOR WITH HOLD s_cursor FOR
              SELECT * FROM ekko
                WHERE ebeln IN t_selopt_ekko.
    t_selopt_ekko has up to 60,000 entries, which worked fine in the past.
    It is very likely that the amount of data at the time of the first crash did not exceed this.
    SQL-Trace of RSA3 call:
    It seems that 25,150 data sets can be processed via FETCH before the short dump occurs.
    After that, the object SNAP is written:
    "...DBIF_RSQL_INVALID_RSQL...dynpen00 + 0xab0 at dymain.c:1645 dw.sapP82_D82
    Thdyn...Report für den Extraktoraufruf...I_T_FIELDS...Table IT_43[16x60]TH058FUNCTION=
    RSA3_GET_DATA_SIMPLEDATA=S_S_IF_SIMPLE-T_FIELDSTH100...shmRefCount  = 1...
    ...> 1st level extension part <...isUsed       = 1...isCtfyAble   = 1...> Shareable Table Header Data
    <...tabi         = Not allo......S_CURSORL...SAPLRSA3...LRSA3U06...SAPLRSA3...
    During dump creation the following occurs:
    "...SAPLSPIAGENTCW...CX_DYNAMIC_CHECK=...CRSFH...BALMSGHNDL...
    DBIF_RSQL_INVALID_RSQL...DBIF_RSQL_INVALID_RSQL...DB_ERR_RSQL_00013...
    INCL_ABAP_ERROR...DBIF_INCL_INTERNAL_ERROR...INCL_INTERNAL_ERROR...
    GENERAL_EXC_WITHOUT_RAISING...INCL_SEND_TO_ABAP...INCL_SEARCH_HINTS...
    INCL_SEND_TO_SAP...GENERAL_EXC_WITHOUT_RAISING...GENERAL_ENVIRONMENT...
    GENERAL_TRANSACTION...GENERAL_INFO...GENERAL_INFO_INTERNAL...
    DBIF_INCL_INTERNAL_CALL_CODE..."
    Basis says that the Oracle database works fine. The problem seems to be an SAP R/3 buffer.
    Has anyone had a similar problem, or does anyone know where such a buffer might be or how it can be enlarged?
    Best regards
    Thomas
    P.S.
    Found a thread that contains part of the dump message "dynpen00 + 0xab0 at dymain.c:1645":
    Thread: dump giving by std prg contains -> seems not to be helpful
    Found a similar thread:
    Thread: Short dump in RSA3 --Z Data Source -> table space or somting else?
    Edited by: Thomas Köpp on Apr 1, 2009 11:39 AM

    Hi Thomas,
    It's due to a different field length.
    Check which internal table you have specified after the FETCH NEXT CURSOR statement.
    That internal table should be defined with reference to EKKO (as sketched below), because your code is:
    OPEN CURSOR WITH HOLD s_cursor FOR
    SELECT * FROM ekko
    WHERE ebeln IN t_selopt_ekko.
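    For example, the receiving internal table could be typed on EKKO like this (a minimal sketch; the package size is an assumption):
        DATA lt_ekko TYPE STANDARD TABLE OF ekko. " fields match the cursor's SELECT * FROM ekko

        FETCH NEXT CURSOR s_cursor
              INTO TABLE lt_ekko
              PACKAGE SIZE 1000.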
    Hope this helps.
    Regards,

  • Character set Conversion Buffer Overflow Error

    Hi,
    I have an issue while loading data from a flat file to a staging table: a character set conversion buffer overflow. Suppose there are 10,000 records in a flat file; after running the control file, only 100+ records are loaded into the staging table, and the remaining records are errored out. I think there is no issue with the control file, because when I load data from a different flat file containing the same number of records as the previous one, it loads all the records. What could be the reason for and the solution to this issue?
    Can anyone please suggest how to resolve it?

    DBMS_OUTPUT is a poor choice for debugging. It has very limited use. And as you've discovered, merely debugging code can now result in new exceptions in the code.
    The proper approach would be to create your own debug procedure (or package). Have your code call this instead of DBMS_OUTPUT.
    In your debug procedure, you can decide what you want to do with that debug data for that specific program in the current environment and circumstances.
    The program that runs could be a DBMS_JOB in which case DBMS_OUTPUT is useless. The program can be called several layers deep from other PL/SQL code.. and you want to know just who is calling your code. Etc.
    Having your own debug procedure allows you to:
    - create an autonomous transaction and log the debug data to a log table
    - write it to a DBMS_PIPE for interactive debugging
    - write it to DBMS_OUTPUT
    - record the PL/SQL call stack to determine who is calling who
    - record the current session's environment (e.g. session_context)
    - record the current session's statistics, opens cursors, current SQL, etc. (courtesy of the V$ views)
    etc. etc.
    In other words, your debug procedure gives you the flexibility to decide on HOW to handle the debugging.
    And when your code goes into production, your debug procedure ships with it, containing a simple NULL command, which means that at any time the DBA can (when the need arises) add his/her debug methods into it in order to trace a production problem.
    Using DBMS_OUTPUT is a very poor, and often just wrong, choice.
    It is fine for writing a quick test. But when you are developing production code and using DBMS_OUTPUT, you must ask yourself whether you have made the right choice.
    And this is not just about wrapping DBMS_OUTPUT. But also wrapping other system calls like RAISE_APPLICATION_ERROR and so on.
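    For illustration, a minimal sketch of such a debug procedure (the procedure and table names here are hypothetical), using an autonomous transaction so that logging never disturbs the caller's transaction:
        create or replace procedure my_debug (p_text in varchar2) as
          pragma autonomous_transaction;
        begin
          -- log the message plus the PL/SQL call stack, committing only this autonomous transaction
          insert into my_debug_log (log_time, call_stack, msg)
          values (systimestamp, dbms_utility.format_call_stack, substr(p_text, 1, 4000));
          commit;
        end my_debug;
        /
    In production the body can be reduced to a simple NULL, exactly as described above.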

  • SQLLDR exists with conversion buffer overflow

    Hi,
    I have a flat file with over 350,000 records in CSV format. The loader program exits with a conversion buffer overflow error in the log file after loading 260,000 records.
    What do I need to change in my CTL file to load the full set of records?
    I have used the following in the CTL file:
    OPTIONS (SKIP=1)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE *
    APPEND
    INTO TABLE SYMCDH_QTC_CONTACTS_ALL_TEMP
    FIELDS TERMINATED BY "|"
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (

    Hi,
    the error occurs because some of the data you are trying to load exceeds the limit for a VARCHAR2 column in your database. You know which record it is from the error, so you can extend the size of the column in the table, adjust the given data in the file, or load only part of the data into your table.
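    If the column is already wide enough, another commonly suggested adjustment is to give the long field an explicit length in the control file, since SQL*Loader otherwise assumes a 255-byte buffer for character fields. A hedged sketch of the column list only (the column names are hypothetical):
        (
          CONTACT_ID,
          CONTACT_NOTES  CHAR(4000)   -- without CHAR(n), SQL*Loader defaults this field to CHAR(255)
        )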
    Herald ten Dam
    http://htendam.wordpress.com

  • "iTunes has stopped working" (Buffer Overflow)

    My computer ran the update to 10.2.1.1 and some Windows updates last night. I tried to start iTunes this morning and got an error saying "iTunes has stopped working".
    I have tried a repair, uninstalling the Windows updates, uninstalling iTunes and installing a legacy version, and letting Apple Software Update re-install the 10.2.1.1 update from the legacy version. I keep getting the same buffer overflow exception when I try to start it.
    Here is what windows says:
    Problem signature:
    Problem Event Name: BEX
    Application Name: iTunes.exe
    Application Version: 10.2.1.1
    Application Timestamp: 4d756476
    Fault Module Name: StackHash_0a9e
    Fault Module Version: 0.0.0.0
    Fault Module Timestamp: 00000000
    Exception Offset: 69441800
    Exception Code: c0000005
    Exception Data: 00000008
    OS Version: 6.1.7601.2.1.0.256.48
    Locale ID: 1033
    Additional Information 1: 0a9e
    Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
    Additional Information 3: 0a9e
    Additional Information 4: 0a9e372d3b4ad19135b953a78882e789
    Here is what I get from the debugger:
    69441800()
    wdmaud.drv!67b9049f()
    [Frames below may be incorrect and/or missing, no symbols loaded for wdmaud.drv]
    CoreFoundation.dll!6bc0a893()
    CoreFoundation.dll!6bbd5b39()
    CoreFoundation.dll!6bbd5e36()
    CoreFoundation.dll!6bbd6444()
    iTunes.dll!5d4f12af()
    iTunes.dll!5d4f11a1()
    iTunes.dll!5cc50a08()
    iTunes.dll!5c7d51f3()
    iTunes.dll!5c7d4f5d()
    iTunes.dll!5c7d4dac()
    iTunes.exe!000e15f7()
    iTunes.exe!006e0075()
    ntdll.dll!77cafbda()
    ntdll.dll!77cafada()
    ntdll.dll!77cc9d4c()
    kernel32.dll!76f7886b()
    iTunes.exe!000e1984()
    kernel32.dll!76f733ca()
    ntdll.dll!77cc9ed2()
    ntdll.dll!77cc9ea5()
    Thanks

    I have the exact same error as Darsch with Windows 7, 64-bit with the same symptoms.
    Problem Event Name:    BEX
      Application Name:    iTunes.exe
      Application Version:    10.5.1.42
      Application Timestamp:    4ebf7d7c
      Fault Module Name:    MSVCR80.dll
      Fault Module Version:    8.0.50727.6195
      Fault Module Timestamp:    4dcddbf3
      Exception Offset:    00026b72
      Exception Code:    c000000d
      Exception Data:    00000000
      OS Version:    6.1.7601.2.1.0.256.48
      Locale ID:    1033
      Additional Information 1:    aec1
      Additional Information 2:    aec178c4debdc54fae8bafc6bd84621d
      Additional Information 3:    7696
      Additional Information 4:    7696eb266721b0f3efdd5c932aadd6a6
    Tried removing and reinstalling all Apple software as suggested here: http://support.apple.com/kb/HT1923 and it didn't work. Tried rolling back to my last working iTunes version, no luck. Tried loading old iTunes libraries and got the same error. Any help here would be appreciated. Thanks.

  • RVS4000 IPS identifies flickr images, etc., as Microsoft Color Management Module Buffer Overflow exploit

    If I enable the IPS function in my RVS4000, some images from various popular websites like Flickr and blogspot will not load.  They are detected by IPS as "EXPLOIT Microsoft Color Management Module Buffer Overflow"
    You can test it yourself with this image hosted at blogspot:
    http://4.bp.blogspot.com/_a7jkcMVp5Vg/TF3gjYJrHBI/AAAAAAAAMqM/ScJAA8y9nZk/s400/sorry.jpg
    With IPS enabled, that image will not load.  With IPS disabled, it will.
    I am using firmware 1.3.2.0 and IPS signature version 1.42.
    I believe IPS is incorrectly identifying these images as containing the color management buffer overflow exploit.
    Any chance this could be corrected in the next IPS signature release?
    As an aside, I would prefer to open a case with support about this, but I really can't figure out how to do so. I purchased the RVS4000 when it was still made by Linksys. I would assume I should still be able to get support on it now that it's owned by Cisco, but trying to open a case on the web for this seems impossible. Am I missing something?

    I've just removed the proxy in my browser, so that it connects directly.
    Et voila: EXPLOIT Microsoft Color Management Module Buffer Overflow.
    But this raises the fear that IPS works as expected only when no (external) proxy is used.
    That would be a serious problem, at least because it isn't mentioned in the online help/manual and because I'd leave my real IP in many places, which I wouldn't like.
    I'd be happy to read a response from Cisco on the buffer overflow (is it a false positive?) and on whether IPS should work when an external proxy is used (via unencrypted connections, so the [W]RVS has a chance to read the communication).

  • Initial Load Extract (Date Format)

    Hi ,
    I'm doing an initial load extract (file to replicat) for a module of 30 GB, and I'm getting the following error on the extracted tables:
    ERROR   OGG-00665  OCI Error error executing fetch with error code 1801  (status = 1801-ORA-01801: date format is too long for internal buffer), SQL<SELECT ........
    My concerns are:
    1. How can I overcome this error without updating the source data?
    2. How do I deal with this on the target side (if the data is replicated), so it won't affect the business needs?
    Thanks so much

    Thanks 960104 for your interest.
    >> Source GG: 11.1.1.1.2, DB is 10.2.0.4
    ++++++++++++++++++++++++++++++++++
    Target GG: Version 12.1.2.1.0, DB is 12.1.0.2.0
    The report file is only for the extract, as I'm pushing the trails to the remote side only, without running the replicat at the moment.
    =================================================================
    2015-01-25 10:23:53  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000001.
    2015-01-25 10:23:56  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000002.
    2015-01-25 10:24:00  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000003.
    551606 records processed as of 2015-01-25 10:24:00 (rate 49204,delta 49204)
    2015-01-25 10:24:05  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000004.
    2015-01-25 10:24:08  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000005.
    1062596 records processed as of 2015-01-25 10:24:10 (rate 50097,delta 51098)
    2015-01-25 10:24:12  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000006.
    2015-01-25 10:24:16  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000007.
    2015-01-25 10:24:20  INFO    OGG-01026  Rolling over remote file /u04/GG_TRAILS/ff000008.
    1471164 records processed as of 2015-01-25 10:24:21 (rate 46541,delta 39289)
    Source Context :
      SourceModule            : [ggdb.ora.sess]
      SourceID                : [/scratch/pradshar/view_storage/pradshar_bugdbrh40_12927937/oggcore/OpenSys/src/gglib/ggdbora/ocisess.c]
      SourceFunction          : [OCISESS_try]
      SourceLine              : [501]
      ThreadBacktrace         : [10] elements
    : [/u01/GG/extract(CMessageContext::AddThreadContext()+0x26) [0x664446]]
    : [/u01/GG/extract(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...)+0x7b2) [0x65aee2]]
    : [/u01/GG/extract(_MSG_ERR_ORACLE_OCI_ERROR_WITH_DESC_SQL(CSourceContext*, int, char const*, char const*, char const*, CMessageFactory::MessageDisposition)+0xb2) [0x613232]]
    : [/u01/GG/extract(OCISESS_try(int, OCISESS_context_def*, char const*, ...)+0x48b) [0x5a3c2b]]
    : [/u01/GG/extract(DBOCI_get_query_row(file_def*, int, int*)+0x95e) [0x923558]]
    : [/u01/GG/extract(gl_get_query_row(file_def*)+0x10) [0x933e2c]]
    : [/u01/GG/extract [0x87d18d]]
    : [/u01/GG/extract(main+0x11e1) [0x527aa1]]
    : [/lib64/libc.so.6(__libc_start_main+0xf4) [0x392f81d994]]
    : [/u01/GG/extract(__gxx_personality_v0+0x1ea) [0x4f32ca]]
    2015-01-25 10:24:21  ERROR   OGG-00665  OCI Error error executing fetch with error code 1801  (status = 1801-ORA-01801: date format is too long for internal buffer), SQL<SELECT x."ID",x."STATUS",x."STATUS_DATE",x."UPDATE_DATE",x."CREATED_BY",x."CREATION_DATE",x."UPDATED_BY",x."DIR_ID",x."LOC_ID",x."TICKET_DATE",x."TICKET_TIME",x."ROAD_SPEED",x."VEHICLE_SPEED",x."RADAR>.
    2015-01-25 10:24:21  ERROR   OGG-01668  PROCESS ABENDING.
    =================================================================
    End of Report file
    Thanks

  • MODPLSQL generates Buffer Overflow errors trying to login

    I am not entirely sure if this is the right place, but here it goes anyway:
    We are using Oracle Workflow Manager Standalone (2.6.4) as part of our Warehouse Builder setup on a 10.2.0.3.0 Enterprise database on Linux.
    The setup has just recently stopped working, whereas before it worked for a long time.
    The problem is that it is not possible to log in to Oracle Workflow Manager with any user.
    I have traced this problem to the mod_plsql.so library of the Oracle HTTP Server that is part of the OWF setup.
    What happens is that this module logs in to the database when a user tries to log in with his browser, and sends an ALTER SESSION statement.
    (This is also described in the docs.)
    This statement is malformed, however: it contains too many characters.
    Instead of:
    ALTER SESSION SET NLS_LANGUAGE='DUTCH' NLS_TERRITORY='THE NETHERLANDS' NLS_CURRENCY='E'
    the last bit, NLS_CURRENCY, is being filled with random characters.
    Since the total is more than the allowed limit, the database returns (or mod_plsql decides on) an ORA-01017.
    I used the proxy method described in "January 24, 2006: On a breakable Oracle" to find out what the mod_plsql.so package sends to the database.
    Just read DADS/mod_plsql for SQL*Plus.
    I had to do this because these requests are handled as a SYS user and as such are not logged.
    The mod_plsql library is supposed to use the DADS.CONF directives over any environment values.
    However, in the case of the PlsqlNLSLanguage directive this does not work.
    The environment variable NLS_LANGUAGE, which is set to DUTCH, is given precedence.
    It uses that to construct the ALTER SESSION statement.
    If I change the environment variable to AMERICAN, mod_plsql.so uses this to pick the currency and it gets the $ sign for NLS_CURRENCY.
    Then the ALTER SESSION statement that is being sent is correct, there is no buffer overflow anymore, and the database subsequently lets us in. However, changing NLS_LANGUAGE at the environment-variable level is not desirable for us, since we get other translation problems.
    Finally, the questions:
    Why does the mod_plsql.so package also send NLS_CURRENCY? This is mentioned in none of the (Oracle) documentation, but we can clearly see it happening.
    Where does the mod_plsql.so package get this NLS_CURRENCY from? We don't set it anywhere in the environment or the .conf files, yet it is retrieved somewhere. In our case it is retrieving some garbage data and thus causing the login to fail. Even looking in the .so library, I see no mechanism for NLS_CURRENCY.
    Why does the mod_plsql.so package favor the environment variable over the DADS.CONF PlsqlNLSLanguage directive? All the manuals say otherwise, yet in our case it is not being used. And when I load the library in an editor, I see remarks that indeed point to my statement.
    The most important question here is where I need to look to get the NLS_CURRENCY; it is somehow corrupt and I want to correct this, of course.
    Another important one is how we can force the mod_plsql.so package to use the PlsqlNLSLanguage directive, since we do not want to change the environment variable.
    I hope someone can help us out here.
    rgrds Mike

    Well, I must say I am sorry not to have received any answer whatsoever.
    This absence of Oracle people here is worrying me, and it is the second time in a row lately.
    It seems Oracle is abandoning its own products.
    Anyway, just to answer my own thread so that somebody else gets some benefit from it:
    After investigation I find that it works like this when things go right:
    Modplsql creates a connection with the database and sends numerous key value pairs to the server.
    Such as:
    AUTH_TERMINAL
    AUTH_PROGRAM_NM
    AUTH_MACHINE
    AUTH_PID
    AUTH_SID
    AUTH_SESSKEY
    AUTH_PASSWORD
    AUTH_ACL
    AUTH_ALTER_SESSION    :
    NLS_LANGUAGE
    NLS_TERRITORY
    NLS_CURRENCY
    ... and more NLS_ stuff
    It then sends a pl/sql ALTER SESSION statement this time only with
    NLS_LANGUAGE and
    NLS_TERRITORY
    It then sends several pl/sql code bits probably to test if the database can access owa_match packages.
    In this part it also sends pl/sql to get the database NLS_LANGUAGE, NLS_TERRITORY, and NLS_CHARACTERSET.
    It also sends pl/sql to test owa_util.get_version for the proper version.
    The last part is all of the web stuff: all of the CGI variables, including the POSTed data if any. Of course, when doing basic authentication there is no POST data.
    The authentication info is passed on in the first step with AUTH_PASSWORD.
    The environment value NLS_LANGUAGE is used and parsed in the first bit. The corresponding bits pop up in the AUTH_ALTER_SESSION key-value pair. mod_plsql finds the other info (I don't know where, really), such as NLS_CURRENCY, and puts it there.
    The dads.conf PlsqlNLSLanguage setting is used and parsed in the second step. The second step is formed like: alter session set nls_language='DUTCH' nls_territory='THE NETHERLANDS'.
    So my assumption that mod_plsql was not using the PlsqlNLSLanguage directive is wrong here, but due to the error I encountered, my debug info never got past step 1.
    If the environment value is not set, then a default value of AMERICAN_AMERICA is used.
    What went wrong in my case?
    The environment variable was set to DUTCH. mod_plsql uses this to look up NLS_CURRENCY, as explained, to output the info in step 1.
    However, NLS_CURRENCY returned garbage instead of just the euro sign. This is the real problem, by the way, and it is not solved yet in our case. If someone knows where mod_plsql gets this info, I would like to know!
    The other steps were never finished, and therefore it looked to the database as if the AUTH_ALTER_SESSION key-value pair was too long.
    It could not authenticate the request, with the effect that nobody could log in. Since these requests are handled as SYS users, no logging takes place. Only a trace with the error:
    *** SERVICE NAME:(SYS$USERS) 2013-07-10 11:01:29.414
    *** SESSION ID:(458.3138) 2013-07-10 11:01:29.414
    Buffer overflow for attribute AUTH_ALTER_SESSION - max length[850] actual length[1131]
    indicates that there is something wrong here.
    Setting the dads.conf file to override this environment parameter doesn't solve this, of course, since this info is used somewhere else.
    Fixing it, for now at least, means clearing the environment variable, then starting the HTTP server,
    and using DUTCH in the dads.conf file.
    After starting the HTTP server, we reset the environment variable.
    I am still looking for an answer on where mod_plsql gets the NLS_CURRENCY info, since that's where the corruption is!
    Hope somebody can use this info.
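    For completeness, the DAD directive in question lives in dads.conf and would look roughly like this (a hedged sketch; the location path and character set are assumptions, not values from this setup):
        <Location /pls/wf>
          # other DAD directives (PlsqlDatabaseConnectString, PlsqlAuthenticationMode, ...) unchanged
          PlsqlNLSLanguage "DUTCH_THE NETHERLANDS.WE8MSWIN1252"
        </Location>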

  • No initial load of Customers, Material and delta load of Sales Orders.

    Hi Experts,
    I am facing a very troublesome issue: I am not able to set up the Middleware portion for initial and delta loads. I read a lot of documents and corrected a lot of things; finally, the connectivity between R/3 and CRM is done. The initial load of all objects is successful (as per the Best Practices guide), and the customizing load is successful.
    But now I have these open issues, for which I am unable to find any answers (I am really exhausted!):
    - Customer_main load: it was successful, but no BPs from R/3 are available.
    - Material, it failed in SMW01, SMQ2, the errors are:
    Mat. for Initial Download: Function table not supported
    EAN xxxxxxxxxxxxxxxxxx does not correspond to the GTIN format and cannot be transferred
    EAN yyyyyyyyyyyyyyyyyy does not correspond to the GTIN format and cannot be transferred
    Plant xx is not assigned to a business partner
    - Sales order: it shows a green BDoc, but the error segment says "No upload to R/3" and the order does not flow to R/3.
    We had our system set up for data transfer and Middleware 3 years back, but a few things changed and the connectivity stopped. I have redone all of that now, but I am not yet successful. Any inputs will be greatly appreciated.
    Thanks,
    -Pat

    Hi Ashvin,
    The error messages in SMW01 for the material initial load are:
         Mat. for Initial Download: Function table not supported
         EAN 123456789000562 does not correspond to the GTIN format and cannot be transferred
         EAN 900033056531434 does not correspond to the GTIN format and cannot be transferred
         Plant 21 is not assigned to a business partner
    I have done the DNL_PLANT load successfully. Why then the plant error?
    Some of the messages for BP:
    Messages for business partner 1331:
    No classification is assigned to business partner 1331
    For another,
         Partner 00001872(469206A60E5F61C6E10000009F70045E): the following errors occurred
         City Atlanta does not exist in country US
         Time zone EST_NA does not exist
         You are not allowed to enter a tax jurisdiction code for country US
         Validation error occurred: Module CRM_BUPA_MAIN_VAL, BDoc type BUPA_MAIN.
    Now, the time zone EST is assigned by default in R/3. Where do I change that? I do not want to change time zones, as this may have other impacts; maybe I can change this in CRM, but surely not in R/3. The city check has been deactivated in both R/3 and CRM, yet the error remains.
    Until these two issues are solved, I cannot move on to the sales order loads.
    Any ideas will be greatly appreciated.
    Thanks,
    -Pat

  • Initial load of sales orders from R3 to CRM without statuses

    1) Some sales orders were uploaded into CRM without statuses in the headers or line items. 
    2) Some sales orders were uploaded without status, ship-to, sold-to, payer, and so on. If I delete them and use R3AR2 and R3AR4 to upload each one individually, there is no problem.
    Any ideas or suggestions?
    Thanks.

    Hi,
    The request load of adapter objects uses different extractor modules for extracting the data from the external system into CRM, while your initial load of sales documents uses a different extraction logic based on the filter conditions specified in transaction R3AC1.
    There may be a problem in the extraction of data from the source system (I don't know if you are using R/3). Can you please de-register the R/3 (I suppose) outbound queue using transaction SMQS, and then debug the extraction (R/3 outbound) before the data is sent to CRM, using FM CRS_SEND_TO_SERVER?
    If this goes well, you may try debugging the mapper in the CRM inbound and the validation module in CRM as a last resort. Also, please check transaction SMW01 to see if the BDocs are fully processed.
    Hope this helps... Reward if helpful.
    Regards,
    Sudipta.

  • After Upgrade BI Initial load is taking much time

    Dear Friends,
    We had BW 3.5 on Windows 2003 (32-bit) and SQL Server 2000.
    I upgraded it to BI 7.01 (EHP1 SR1) with Windows 2003 (64-bit) and SQL Server 2005, and completed all follow-up activities.
    Now, when we do an initial load, it takes a long time. Please let me know your inputs as soon as possible.
    Regards,
    Sunil Maurya.

    Hi,
    I created the thread under the NetWeaver forum.
    But I'll still check and try to create it in the correct forum.
    Regards,
    Sunil Maurya

  • XMLTRANSFORM Too large stylesheet - code buffer overflow issue

    Hi All,
    My question is related to MSWordML generation from a PL/SQL stored procedure.
    1. I have a table containing XSLT stylesheets for different documents.
    2. A PL/SQL stored procedure generates dynamic content depending on some parameters, and at the end I'm using:
    SELECT XMLTRANSFORM(XMLTYPE.createxml(db_data_clob), XMLTYPE.createxml(x.xslt_clob)).GetClobVal()
    INTO   res
    FROM   msword_ml_data x
    WHERE  x.report_id = rep_id_variable;
    where : x.xslt_clob -> column, containing XSLT CLOB
    db_data_clob -> dynamic content CLOB
    res -> CLOB result
    All this was working fine on Oracle 11gR1, but I had to reinstall the database and figured, why not install Oracle 11gR2 ...
    Guess what: the stored procedure now raises an exception when using XMLTRANSFORM:
    Exception : : ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00004: internal error "Too large stylesheet - code buffer overflow"
    Google says nothing about it. I don't recall setting any special DB property on Oracle 11gR1.
    Has anyone encountered this?
    I haven't changed the procedure or the table.
    I'm using exactly the same XSLTs from Java code and they work just fine, so they are not the reason. My guess is that something related to XML processing changed in Oracle 11gR2.
    If anyone could help, thanks in advance.

    For those who are interested:
    I logged a service request, and it turned out that this is a bug in Oracle 11gR2.
    "The limitation on the style sheet is not exactly a size limit but a limitation on the number of style sheet instructions and depends on the way the style sheet has been written. This is a C based parser limitation"
    Anyway, the workaround is to create a Java stored procedure and do the transformation from there.
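    For anyone taking that route, the PL/SQL side of the workaround is just a call specification over a Java stored procedure; a minimal sketch (XslTransformer is a hypothetical user-supplied Java class wrapping javax.xml.transform):
        -- Hypothetical call spec: the Java class does the XSLT work and replaces XMLTRANSFORM in the query
        CREATE OR REPLACE FUNCTION xsl_transform_java (p_xml CLOB, p_xslt CLOB)
          RETURN CLOB
        AS LANGUAGE JAVA
          NAME 'XslTransformer.transform(java.sql.Clob, java.sql.Clob) return java.sql.Clob';
        /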
