Truncate performance

Dear SQL Experts,
We are trying to delete 35 million rows from a table with 60 million rows. The table is non-partitioned, non-clustered, and has no LOBs.
Here's what I'm doing:
1. Create a new table, using insert append to copy all the rows I want to retain from the original table
2. Truncate the original table
3. Import the rows from the new table back into the original table.
My question is: the original table has indexes on it. Does dropping the indexes help the truncate run faster? I know it does for a delete, but I'm not sure whether indexes matter for a truncate. Kindly share if you think another method is more appropriate.
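For reference, a minimal sketch of the three steps (table name and retention predicate illustrative):
-- 1. copy the rows to keep, using a direct-path insert
create table big_t_keep as select * from big_t where 1 = 0;
insert /*+ append */ into big_t_keep
select * from big_t where keep_flag = 'Y';
commit;
-- 2. truncate the original table (its indexes are reset along with it)
truncate table big_t;
-- 3. copy the retained rows back, again direct-path
insert /*+ append */ into big_t
select * from big_t_keep;
commit;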
Thanks

Hi,
if the table is partitioned, then it might be faster to bring the rows in using ALTER TABLE ... EXCHANGE PARTITION.
It would work even if there's only one partition (so one could partition the table just to make such operations faster,
and it won't affect anything else):
create table t1 (id number, y number, z number)
partition by range(id)(
  partition p values less than(maxvalue)
);
insert into t1
(id, y, z)
select level id, dbms_random.value y, dbms_random.value z
from dual
connect by level <= 1e5;
create index i$t1 on t1(id);
create table t2
as
select *
from t1
where y<=0.5;
truncate table t1;
alter table t1 exchange partition p with table t2;
I wonder if it's possible to do the same thing with a non-partitioned table.
Best regards,
Nikolay

Similar Messages

  • Truncate performance on 10g vs. 8/9i

    Has anyone experienced better truncate performance on 10g vs. earlier versions? We've had crippling episodes on 8i and are hoping for a noticeable improvement.

    One thing a truncate does is require dbwr to write any cached buffers of the table to disk prior to the object being truncated. At first consideration this may not appear to make sense, but when you consider the requirement to support time-based forward recovery to the moment just prior to the truncate command being issued, it does indeed make perfect sense. Dirty blocks for the target table and indexes have to be flushed to disk prior to the object header(s) being marked as empty.
    Then, under dictionary management, uet$ and fet$ have to be updated for the extents. The single ST lock on the database is used to single-thread access to these two tables, so multiple concurrent truncates, create tables, drop tables, and the resulting extent allocations that have to happen for each of these tasks are potentially bad news, as each session attempts to grab the single ST lock.
    Eliminating contention for the ST lock is why Oracle introduced temporary table locks (used with sort segments),
    and it is part of the reason behind locally managed tablespaces.
    Under dictionary management you can help lessen contention for the ST lock by first defining your temporary tablespace to be of mode temporary (create tablespace temp temporary), or by using the newer form, create temporary tablespace temp tempfile 'xx', and by sizing extents to minimize the number of extents allocated per time period. Converting tablespaces to locally managed is a big help here also. Both forms are sketched below.
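    For reference, a minimal sketch of the two forms (file names and sizes illustrative):
    -- older form: a permanent tablespace dedicated to sorts, of mode TEMPORARY
    create tablespace temp datafile '/u01/oradata/temp01.dbf' size 500m temporary;
    -- newer form: a true temporary tablespace built on a tempfile
    create temporary tablespace temp tempfile '/u01/oradata/temp01.dbf' size 500m
      extent management local uniform size 1m;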
    HTH -- Mark D Powell --

  • Large Uniform Extent Size = Slow TRUNCATE?

    Here's the scenario...
    We have a a tablespace with the following storage parameter:
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32M
    Users were complaining about slow TRUNCATE performance. I saw the same when I created a table with 30,000 rows - same as the user was complaining about - in the same tablespace.
    I proceeded to move the objects from the schema the user was referencing to a tablespace with:
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    ... and the TRUNCATE executed in the expected time (less than a second) for the same amount of rows in the same table structure.
    Why does a large UNIFORM extent size (such as 32M in this case) cause slow TRUNCATE performance? I wasn't able to find an exact cause in the forums or on Metalink thus far.
    Version: Oracle DB 10.2.0.3
    System Info:
    Linux ilqaos01c 2.6.9-55.0.12.ELsmp #1 SMP Wed Oct 17 08:15:59 EDT 2007 x86_64
    Thanks.

    Robert Sislow wrote:
    The Metalink article was helpful, however, the database we're on is version 10.2.0.3, and the article is referencing 9.2.0.4.
    Additionally, the last few responses in this thread are referring to concurrent TRUNCATE operations. The TRUNCATE that we're running is a single-thread TRUNCATE on a very small table - about 8000 rows.
    After executing a 10046 level 12 trace and using the Trace Analyzer tool, we've found that the "local write wait" event accounts for ~90% of the statement's activity on each run. Once again, all we can find that's causing this is the fact that the extent size in the tablespace holding the slow table is set to a UNIFORM size of 32M.
    You're using ASSM (automatic segment space management), which means you have a number of bitmap space management blocks scattered through the object.
    If you're running with 32MB uniform extents, the first extent will be 4096 blocks, and there will be one level 2 bitmap, 64 level 1 bitmaps, and the segment header block at the start of the extent. With autoallocate, the first extent will start with one level 2 bitmap, one (or possibly 2) level 1 bitmap(s) and the segment header block.
    When you truncate an object, all the space management blocks in the first extent (and any extents you keep) have to be reset to show 100% free space - this means they may all have to be read into the buffer cache before being updated and written back with local writes (i.e. writes by the session's process, not by dbwr).
    So you have to wait for 66 reads and writes in one case and 3 (or 4) reads and writes in the other case. This helps to explain part of the difference. However, a local write wait should NOT take the best part of a second - so there must be a configuration problem somewhere in your setup. (e.g. issues with async I/O, or RAID configuration).
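    A repro sketch of the comparison described above (file names, sizes and row source illustrative):
    create tablespace ts_uni datafile '/u01/oradata/ts_uni01.dbf' size 200m
      extent management local uniform size 32m
      segment space management auto;
    create tablespace ts_auto datafile '/u01/oradata/ts_auto01.dbf' size 200m
      extent management local autoallocate
      segment space management auto;
    create table t_uni  tablespace ts_uni  as select * from all_objects where rownum <= 30000;
    create table t_auto tablespace ts_auto as select * from all_objects where rownum <= 30000;
    set timing on
    truncate table t_uni;   -- expect many "local write wait"s (65+ bitmap blocks to reset)
    truncate table t_auto;  -- only a handful of space management blocks to reset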
    Regards
    Jonathan Lewis

  • Truncate Table before Insert--Performance

    HI All,
    This post is in focus of special requirement where a table is truncated before inserting records in the table.
    Now, when a table is truncated, the High Water Mark (HWM) is reset to the lowest allocation for the table in the tablespace. After this, can an insert with APPEND boost the performance of the insert query?
    In a simple insert, the Oracle engine consults the freelist to look for free space.
    But in an insert with APPEND, the engine starts above the HWM. And the argument is: once a truncate has been executed on the table, would the freelist even be used by a simple insert?
    I just need to know whether there are any benefits to using an APPEND insert on a truncated table, or whether a simple insert would perform the same as an insert with APPEND.
    Regards
    Nits

    Hi,
    if you don't need the data, truncate the table. There is no negative impact whether you use a conventional-path or a direct-path insert.
    If you use APPEND, less redo is written for the table if the table is in NOLOGGING mode, but redo is still written for all indexes. I would recommend creating a full backup after that (if needed), because your table will not be recoverable otherwise (no redo information).
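    A minimal sketch of the pattern (table names illustrative):
    truncate table target_t;
    alter table target_t nologging;   -- optional: minimal redo for the table data
    insert /*+ append */ into target_t
    select * from source_t;
    commit;                           -- required before target_t can be queried in this session
    alter table target_t logging;     -- restore logging, then take the backup if needed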
    Dim

  • URGENT:  Cannot perform truncate in SQLPLUS* Activity in process flow

    I try to truncate (as a test for functionality) using an SQLPLUS* Activity inside of my process flow. It hangs and then returns errors when cancelled. Below are the script and error messages. Any help would be greatly appreciated.
    TRUNCATE TABLE UTICA_STAGE.TEST_SQLPLUS_ACT;
    EXIT;
    SP2-0306: Invalid option.
    Usage: CONN(ECT) (logon) (AS SYSDBA)
    where <logon> ::= <username>(/<password>)(@<connect_identifier>) | /
    Enter password:
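    For reference, the logon form that usage message expects, as a single invocation (credentials, connect identifier and script name illustrative):
    sqlplus utica_stage/secret@orcl @truncate_test.sql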

    user10408896 wrote:
    Fixed the problem myself, I specified a custom connection string. If anyone knows how to use a variable connection string to solve this problem, I'll give you points.
    Big deal - people don't respond for points, as points can't be redeemed. And moreover, stop using terms like "URGENT"; we don't care whether something is urgent. If you haven't already, please read the forum etiquette.

  • Sporadically getting error "string or binary data would be truncated" in SQL server 2008 while inserting in a Table Type object

    I am facing a strange SQL exception:-
    The code flow is like this:
    .Net 4.0 --> Entity Framework --> SQL 2008 ( StoredProc --> Function {Exception})
    In the SQL Table-Valued Function, I am selecting a column (nvarchar(50)) from an existing table and (after some filtration using inner joins and where clauses) inserting the values in a Table Type Object having a column (nvarchar(50))
    This flow was working fine in SQL 2008, but now all of a sudden the insert into @TableType is throwing a "string or binary data would be truncated" exception.
    Insert Into @ObjTableType
    Select * From dbo.Table
    The max length of the data in the source column is 24, but even then the insert into the nvarchar column is failing.
    Moreover, the same issue came up a few weeks back and I was unable to find the root cause; back then it started working properly after a few hours
    (the issue was reported at 10 AM EST and was automatically resolved post 8 PM EST). No refresh activity was performed on the database.
    This time, however, the issue is still occurring (even after 2 days), but it does not occur in every scenario. The data set for which the error is thrown is valid, and every value in the function is fetched from existing tables.
    Due to its sporadic nature, I am unable to recreate it now :( , but I am still unable to determine why it started coming up or how I can prevent such things from happening again.
    It is difficult to even explain the weirdness of this bug, but any help or guidance in finding the root cause will be very helpful.
    I also tried using nvarchar(max) in the table type object, but it didn't work.
    Here is code similar to the function I am using:
    BEGIN TRAN
    DECLARE @PID int = 483
    DECLARE @retExcludables TABLE (
        PID     int NOT NULL,
        ENumber nvarchar(50) NOT NULL,
        CNumber nvarchar(50) NOT NULL,
        AId     uniqueidentifier NOT NULL
    );
    declare @PSCount int;
    select @PSCount = count('x')
    from tblProjSur ps
    where ps.PID = @PID;
    if (@PSCount = 0)
    begin
        return;
    end;
    declare @ExcludableTempValue table (
        PID      int,
        ENumber  nvarchar(max),
        CNumber  nvarchar(max),
        AId      uniqueidentifier,
        SIds     int,
        SCSymb   nvarchar(10),
        SurCSymb nvarchar(10)
    );
    with SurCSymbs as (
        select ps.PID,
               ps.SIds,
               csl.CSymb
        from tblProjSur ps
        right outer join tblProjSurCSymb pscs on pscs.tblProjSurId = ps.tblProjSurId
        inner join CSymbLookup csl on csl.CSymbId = pscs.CSymbId
        where ps.PID = @PID
    ),
    AssignedValues as (
        select psr.PID,
               psr.ENumber,
               psr.CNumber,
               psmd.MetaDataValue as ClaimSymbol,
               psau.UserId as AId,
               psus.SIds
        from PSRow psr
        inner join PSMetadata psmd on psmd.PSRowId = psr.SampleRowId
        inner join MetaDataLookup mdl on mdl.MetaDataId = psmd.MetaDataId
        inner join PSAUser psau on psau.PSRowId = psr.SampleRowId
        inner join PSUserSur psus on psus.SampleAssignedUserId = psau.ProjectSampleUserId
        where psr.PID = @PID
          and mdl.MetaDataCommonName = 'CorrectValue'
          and psus.SIds in (select distinct SIds from SurCSymbs)
    ),
    FullDetails as (
        select asurv.PID,
               Convert(NVarchar(50), asurv.ENumber) as ENumber,
               Convert(NVarchar(50), asurv.CNumber) as CNumber,
               asurv.AId,
               asurv.SIds,
               asurv.CSymb as SCSymb,
               scs.CSymb as SurCSymb
        from AssignedValues asurv
        left outer join SurCSymbs scs
            on  scs.PID = asurv.PID
            and scs.SIds = asurv.SIds
            and scs.CSymb = asurv.CSymb
    )
    --Error is thrown at this statement
    insert into @ExcludableTempValue
    select *
    from FullDetails;
    with SurHavingSym as (
        select distinct est.PID,
                        est.ENumber,
                        est.CNumber,
                        est.AId
        from @ExcludableTempValue est
        where est.SurCSymb is not null
    )
    delete @ExcludableTempValue
    from @ExcludableTempValue est
    inner join SurHavingSym shs
        on  shs.PID = est.PID
        and shs.ENumber = est.ENumber
        and shs.CNumber = est.CNumber
        and shs.AId = est.AId;
    insert @retExcludables(PID, ENumber, CNumber, AId)
    select distinct est.PID,
                    Convert(nvarchar(50), est.ENumber) ENumber,
                    Convert(nvarchar(50), est.CNumber) CNumber,
                    est.AId
    from @ExcludableTempValue est
    RETURN
    ROLLBACK TRAN
    I have tried converting the columns and have also validated the input data set for white spaces and special characters.
    For the same input data it was working fine until yesterday, but suddenly it started throwing the exception.
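    For reference, a minimal diagnostic sketch for this class of error (table and column names illustrative): check whether any source value actually exceeds the nvarchar(50) target.
    SELECT MAX(LEN(src.ENumber))        AS max_len_chars,
           MAX(DATALENGTH(src.ENumber)) AS max_len_bytes
    FROM dbo.PSRow src;   -- repeat for each column feeding the insert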

    Remember, the CTE isn't executing the SQL in exactly the order you read it as a human (don't get too picky about that statement; it's at least partly true), nor are the line numbers or error messages easy to read: a mismatch
    in any of the joins along the way leading up to your insert could be the cause too.  I would suggest posting the table definition/DDL for:
    - PSMetadata, in particular PSRowID, but just post it all
    - tblProjectSur, in particular columns CSymbID and TblProjSurSurID
    - cSymbLookup, in particular column CSymbID
    - PSRow, in particular columns SampleRowID, PID,
    - PSAuser and PSUserSur, in particular all the USERID and RowID columns
    - SurCSymbs, in particular column SIDs
    Also, run a diagnostic query along these lines, repeating it for each of your tables and each of the columns used in joins leading up to your insert:
    Select count(asurv.sid) as count_all
    , count(case when asurv.sid between 0 and 9999999999 then 1 else null end) as ctIsaNumber
    from SurvCsymb
    The sporadic nature would imply that the optimizer usually chooses one path to the data, but sometimes others, and the fact that it occurs during the insert could be irrelevant, any of the preceding joins could be the cause, not the data targeted to be inserted.

  • Last digit truncates while downloading to Excel from ALV Grid

    Hi All,
    I have been using REUSE_ALV_LIST_DISPLAY and REUSE_ALV_GRID_DISPLAY in my report program.
    While I use REUSE_ALV_GRID_DISPLAY and download the data to a local file (Excel), the last digit of the vendor code is truncated. But with REUSE_ALV_LIST_DISPLAY the data is downloaded correctly, as displayed in the ALV.
    I have copied the code below for reference.
    REPORT  zfirp001                                .
    TYPE-POOLS: slis.
    TABLES: bsak.
    SELECT-OPTIONS:  s_bukrs FOR bsak-bukrs,
                     s_lifnr FOR bsak-lifnr,
                     s_blart FOR bsak-blart,
                     s_augdt FOR bsak-augdt,
                     s_zterm FOR bsak-zterm.
    PARAMETERS:      s_list AS CHECKBOX,
                     s_vari LIKE disvariant-variant.
    DATA: g_ext_num(24) TYPE c.
    DATA: BEGIN OF gt_bsak OCCURS 0,
            bukrs LIKE bsak-bukrs,
            lifnr LIKE bsak-lifnr,
            augdt LIKE bsak-augdt,
            gjahr LIKE bsak-gjahr,
            belnr LIKE bsak-belnr,
            blart LIKE bsak-blart,
            zterm LIKE bsak-zterm,
          END OF gt_bsak.
    DATA: BEGIN OF gt_bseg OCCURS 0,
            bukrs LIKE bseg-bukrs,
            belnr LIKE bseg-belnr,
            gjahr LIKE bseg-gjahr,
            wrbtr LIKE bseg-wrbtr,
            projk LIKE bseg-projk,
            shkzg LIKE bseg-shkzg,
            hkont LIKE bseg-hkont,
          END OF gt_bseg.
    * ALV
    DATA: gt_fieldtab TYPE slis_t_fieldcat_alv,
          g_save(1)   TYPE c,
          g_variant   LIKE disvariant.
    DATA: BEGIN OF gt_result OCCURS 0,
            bukrs       LIKE bsak-bukrs,
            lifnr       LIKE bsak-lifnr,
            name1       LIKE lfa1-name1,
            augdt       LIKE bsak-augdt,
            gjahr       LIKE bsak-gjahr,
            belnr       LIKE bsak-belnr,
            blart       LIKE bsak-blart,
            zterm       LIKE bsak-zterm,
            wrbtr       LIKE bseg-wrbtr,
            waers       LIKE bkpf-waers,
            ext_num(24) TYPE c,
            txt20       LIKE skat-txt20,
            usr00       LIKE prps-usr00,
            usr01       LIKE prps-usr01,
            usr02       LIKE prps-usr02,
            usr03       LIKE prps-usr03,
          END OF gt_result.
    CONSTANTS: c_credit(1)       TYPE c VALUE 'H',
               c_x(1)            TYPE c VALUE 'X',
               c_en(2)           TYPE c VALUE 'EN',
               c_mrc(4)          TYPE c VALUE 'CA01'.
    *====================================================
    INITIALIZATION.
      PERFORM initialize_variant.
    AT SELECTION-SCREEN.
      PERFORM pai_of_selection_screen.
    *====================================================
    START-OF-SELECTION.
      REFRESH gt_result.
    * find clearing documents
      SELECT        bukrs
                    lifnr
                    augdt
                    gjahr
                    belnr
                    blart
                    zterm
             INTO   TABLE gt_bsak
             FROM   bsak
             WHERE  bukrs  IN s_bukrs
             AND    lifnr  IN s_lifnr
             AND    augdt  IN s_augdt
             AND    blart  IN s_blart
             AND    zterm  IN s_zterm.
      CHECK NOT gt_bsak[] IS INITIAL.
      LOOP AT gt_bsak.
    * read WBS items
        SELECT        bukrs
                      belnr
                      gjahr
                      dmbtr
                      projk
                      shkzg
                      hkont
               INTO   TABLE  gt_bseg
               FROM   bseg
               WHERE  bukrs  = gt_bsak-bukrs
               AND    belnr  = gt_bsak-belnr
               AND    gjahr  = gt_bsak-gjahr
               AND    projk  > space.
        gt_result-bukrs = gt_bsak-bukrs.
        gt_result-lifnr = gt_bsak-lifnr.
        gt_result-augdt = gt_bsak-augdt.
        gt_result-belnr = gt_bsak-belnr.
        gt_result-gjahr = gt_bsak-gjahr.
        gt_result-blart = gt_bsak-blart.
        gt_result-zterm = gt_bsak-zterm.
    * document currency
        SELECT SINGLE waers
               INTO   gt_result-waers
               FROM   bkpf
               WHERE  bukrs  = gt_bsak-bukrs
               AND    belnr  = gt_bsak-belnr
               AND    gjahr  = gt_bsak-gjahr.
    * vendor name
        SELECT SINGLE name1
               INTO   gt_result-name1
               FROM   lfa1
               WHERE  lifnr  = gt_result-lifnr.
    * for each accounting document
        LOOP AT gt_bseg.
    * convert to external WBS
          CALL FUNCTION 'PSPNUM_INTERN_TO_EXTERN_CONV'
            EXPORTING
              edit_imp  = c_x
              int_num   = gt_bseg-projk
            IMPORTING
              ext_num   = gt_result-ext_num
            EXCEPTIONS
              not_found = 1
              OTHERS    = 2.
    * debit or credit
          IF gt_bseg-shkzg = c_credit.
            gt_result-wrbtr = gt_bseg-wrbtr.
          ELSE.
            gt_result-wrbtr = gt_bseg-wrbtr * -1.
          ENDIF.
    * GL short Text
          SELECT SINGLE txt20
                 INTO   gt_result-txt20
                 FROM   skat
                 WHERE  spras  = c_en
                 AND    ktopl  = 'CA01'
                 AND    saknr  = gt_bseg-hkont.
    * user fields
          SELECT SINGLE usr00
                        usr01
                        usr02
                        usr03
                 INTO  (gt_result-usr00,
                        gt_result-usr01,
                        gt_result-usr02,
                        gt_result-usr03)
                 FROM   prps
                 WHERE  pspnr  = gt_bseg-projk.
    * append to result table
          APPEND gt_result.
        ENDLOOP.
      ENDLOOP.
    *====================================================
    END-OF-SELECTION.
      PERFORM initialize_fieldcat USING gt_fieldtab[].
      g_variant-report = sy-repid.
      g_save           = 'A'.
      IF s_list = 'X'.
        CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
          EXPORTING
            it_fieldcat = gt_fieldtab
            i_default   = 'A'
            i_save      = g_save
            is_variant  = g_variant
          TABLES
            t_outtab    = gt_result.
      ELSE.
        CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
          EXPORTING
            it_fieldcat = gt_fieldtab
            i_default   = 'A'
            i_save      = g_save
            is_variant  = g_variant
          TABLES
            t_outtab    = gt_result.
      ENDIF.
    *&      Form  initialize_variant
    FORM initialize_variant.
      g_save = 'A'.
      CLEAR g_variant.
      g_variant-report = sy-repid.
      CALL FUNCTION 'REUSE_ALV_VARIANT_DEFAULT_GET'
        EXPORTING
          i_save     = g_save
        CHANGING
          cs_variant = g_variant
        EXCEPTIONS
          not_found  = 2.
      IF sy-subrc = 0.
        s_vari = g_variant-variant.
      ENDIF.
    ENDFORM.                               " INITIALIZE_VARIANT
    *&      Form  pai_of_selection_screen
    FORM pai_of_selection_screen.
      IF NOT s_vari IS INITIAL.
        MOVE s_vari TO g_variant-variant.
        CALL FUNCTION 'REUSE_ALV_VARIANT_EXISTENCE'
          EXPORTING
            i_save     = g_save
          CHANGING
            cs_variant = g_variant.
      ELSE.
        PERFORM initialize_variant.
      ENDIF.
    ENDFORM.                    " PAI_OF_SELECTION_SCREEN
    *&      Form  initialize_fieldcat
    FORM initialize_fieldcat USING p_fieldtab TYPE slis_t_fieldcat_alv.
      DATA: l_fieldcat TYPE slis_fieldcat_alv.
      CLEAR l_fieldcat.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'BUKRS'.
      l_fieldcat-seltext_L  = 'Company'.
      l_fieldcat-outputlen  = '8'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'LIFNR'.
      l_fieldcat-seltext_L  = 'Vendor'.
      l_fieldcat-outputlen  = '10'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'NAME1'.
      l_fieldcat-seltext_L  = 'Name'.
      l_fieldcat-outputlen  = '35'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'AUGDT'.
      l_fieldcat-seltext_L  = 'Cleared'.
      l_fieldcat-outputlen  = '10'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'GJAHR'.
      l_fieldcat-seltext_L  = 'Year'.
      l_fieldcat-outputlen  = '5'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'BELNR'.
      l_fieldcat-seltext_L  = 'Document'.
      l_fieldcat-outputlen  = '10'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'BLART'.
      l_fieldcat-seltext_L  = 'Type'.
      l_fieldcat-outputlen  = '4'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'ZTERM'.
      l_fieldcat-seltext_L  = 'Pay Terms'.
      l_fieldcat-outputlen  = '4'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'WRBTR'.
      l_fieldcat-seltext_L  = 'Amount'.
      l_fieldcat-outputlen  = '13'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'WAERS'.
      l_fieldcat-seltext_L  = 'CURR'.
      l_fieldcat-outputlen  = '5'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'EXT_NUM'.
      l_fieldcat-seltext_L  = 'WBS'.
      l_fieldcat-outputlen  = '24'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'TXT20'.
      l_fieldcat-seltext_L  = 'Short Text'.
      l_fieldcat-outputlen  = '20'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'USR00'.
      l_fieldcat-seltext_L  = 'H/O File Ref'.
      l_fieldcat-outputlen  = '20'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'USR01'.
      l_fieldcat-seltext_L  = 'Local File Ref'.
      l_fieldcat-outputlen  = '20'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'USR02'.
      l_fieldcat-seltext_L  = 'INFORM Agree ID'.
      l_fieldcat-outputlen  = '10'.
      APPEND l_fieldcat TO p_fieldtab.
      l_fieldcat-tabname    = 'GT_RESULT'.
      l_fieldcat-fieldname  = 'USR03'.
      l_fieldcat-seltext_L  = 'INFM Prim Ag ID'.
      l_fieldcat-outputlen  = '10'.
      APPEND l_fieldcat TO p_fieldtab.
    ENDFORM.                    " INITIALIZE_FIELDCAT
    Could the experts help me overcome this problem?
    Thanks in Advance.
    Regards,
    Anbalagan.V

    Hi Anbalagan,
    I've tested your program - but it works fine (Rel. 4.6C, SAPKB46C30).
    The download is OK and the direct transfer (Excel inplace) is OK.
    But I have a question about the selection of waers in your program -
    why do you select waers from bkpf and not from bsak?
    I think the select from bkpf isn't necessary!
    regards Andreas

  • Performance issue on the sys.dba_audit_session

    I have the following query, which is taking a long time and has a performance issue:
    SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
    failed_count
    FROM sys.dba_audit_session
    WHERE returncode != 0
    AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
    call     count    cpu  elapsed     disk    query  current  rows
    -------  -----  -----  -------  -------  -------  -------  ----
    Parse        1   0.01     0.04        0        0        0     0
    Execute      1   0.00     0.00        0        0        0     0
    Fetch        2  68.42   216.08  3943789  3960058        0     1
    -------  -----  -----  -------  -------  -------  -------  ----
    total        4  68.43   216.13  3943789  3960058        0     1
    The view dba_audit_session is a select from the view dba_audit_trail. If you look at the definition of dba_audit_trail, it does a CAST on the ntimestamp# column, which disables index access because there is no function-based index on ntimestamp#. I am not even sure a function-based index would work to match what the view does:
    cast ( /* TIMESTAMP */
    (from_tz(ntimestamp#,'00:00') at local) as date),
    To get index access, the metric query would have to avoid the use of the view. I have changed the query like this:
    SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
    HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
    FROM sys.aud$ a
    WHERE returncode != 0
    and action# between 100 and 102
    AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
    Is this a correct way to do it?
    Could you comment on this?

    The query is run by Grid Control (or DB Console) to count the metric related to audit sessions, which is ON by default in 11g. To decrease the impact of this query you should purge the aud$ table regularly.
    The best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit information, you can simply truncate aud$.
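    A minimal sketch of such a purge, assuming 11g's DBMS_AUDIT_MGMT package (the 30-day retention is illustrative):
    BEGIN
      -- one-time setup for the standard audit trail
      DBMS_AUDIT_MGMT.INIT_CLEANUP(
        audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        default_cleanup_interval => 24);
      -- mark everything older than 30 days as archivable
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);
      -- purge rows up to that timestamp
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /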

  • Slow performance Designer 10g (on 10gR2 database)

    We are busy testing Designer 10g version 10.1.2.5 (windows XP) on 10gR2 64 bit database (Sun solaris) . The repository was migrated from Designer 6.0.
    But the performance/response is rather slow. For example: at first, opening a server model diagram took 2 minutes or more. By searching the forum we found the tip "alter system set OPTIMIZER_SECURE_VIEW_MERGING = false;". That made a big difference.
    But we still have response problems: expanding the treeview for the list of tables, views or snapshots takes much longer than in Designer 6.0.
    Also, opening for example the Design Editor takes longer than one would normally expect, although some delay can be expected because we have a lot of applications (100) in the repository.
    Is it because it is now written in Java, or are more database optimizations possible?
    Paul.

    Have you computed statistics using the Repository Administration Utility (RAU)? The default percentage of 20% is usually good enough, but you could go higher.
    Do a View Objects in RAU and check for missing, disabled or invalid objects. If you find any, there are ways to correct the situation, mostly under the Recreate button.
    Make sure that no-one else is using the repository, then press the Recreate button in RAU and use the selection labeled: Truncate Temporary Tables. Sometimes these tables get too full and can impact performance.
    Under the Options menu in RAU, there is an item labeled: Enable Performance Enhancements. To be honest with you, I've never noticed this item before, and I don't know for sure what it does. Then again, I've never had any serious performance problems in Designer. It might be worth your while to back up your repository, then turn this on.

  • Problems with Full loads/Decreased query performance in Prod

    We have a table which serves as the base for a complex view. The table has roughly 10 million records, and it's a daily full load. (I know that delta loads are much better for handling large sets of data, but this information is very dynamic, and given the business time constraints and project deliverables, the best decision was to do a full load.)
    This is the process we follow:
    > Drop Indexes (All columns individual indexes which are used inside the complex view as joins)
    > Truncate table
    > Load data
    > Recreate indexes.
    All the above steps are performed from SAP Dataservices through scripts and the sql() function to execute the commands; no manual intervention whatsoever.
    After the job completes successfully, the view doesn't refresh at all (it sits there forever). The same job, run across the same volume of production data in the Test environment, performs much faster.
    The only way I can get the view to refresh is to manually log into SQL Developer, drop all the indexes on the parent table, and re-create them in the same order as the Dataservices script. It then performs very well until the next load (the next morning).
    Any suggestions would be very helpful.
    My main question is: why does it run faster when I drop and recreate the indexes manually, but never complete when the indexes are created by the sql() calls from the Dataservices tool?
    Tried:
    Explain Plan (in Dev, Test, Prod): the query cost varied across environments, but results came back with the same return times (in Production, after manual index creation).
    Tuning Advisor (only in Test): the DBA evaluated it to be good.
    Thanks
    Nash
    DB Version Oracle 11.0.7
    Dataservices 3.2

    BluShadow and Harman
    Thank You!
    I'm using a regular view, not a materialized view. And yes, the query plan is completely different between Test and Production: in Test the query was running entirely on hash joins, whereas in Production it uses nested loop joins in the execution plan.
    I will try gathering statistics after the load and, as per BluShadow, will look at writing a function that makes a call to the database.
    Thank you all for taking the time. I will try to test this out starting today and will extend the tests over a couple of days.
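    For reference, a minimal sketch of the post-load statistics gathering (schema and table names illustrative):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'ETL_OWNER',    -- illustrative schema
        tabname          => 'BASE_TABLE',   -- illustrative table
        cascade          => TRUE,           -- also gather on the rebuilt indexes
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /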
    Regards
    Nash

  • Flat file truncation issue

    I am attempting to perform a fairly standard operation, extract a table to a flat file.
    I have set all the schemas, models and interfaces, and the file is produced how I want it, apart from one thing. In the source, one field is 100 characters long, and in the output, it needs to be 25.
    I have set the destination model to have a column physical and logical length of 25.
    Looking at the documentation presented at http://docs.oracle.com/cd/E25054_01/integrate.1111/e12644/files.htm - this suggests that setting the file driver up to truncate fields should solve the issue.
    However, building a new file driver using the string 'jdbc:snps:dbfile?TRUNC_FIXED_STRINGS=TRUE&TRUNC_DEL_STRINGS=TRUE' does not appear to truncate the output.
    I noticed a discrepancy in the documentation - the page above notes 'Truncates strings to the field size for fixed files'. The help tooltip in ODI notes 'Truncates the strings from the fixed files to the field size'. Which might explain the observed lack of truncation.
    My question is - what is the way to enforce field sizes in a flat file output?
    I could truncate the fields separately in each of the mapping statements using substr, but that seems counter-intuitive and loses the benefits of the tool.
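    For reference, that per-column workaround would look like this in each mapping (source alias and column name illustrative):
    SUBSTR(SRC.LONG_FIELD, 1, 25)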
    Using ODI Version:
    Standalone Edition Version 11.1.1 - Build ODI_11.1.1.5.0_GENERIC_110422.1001

    bump
    If this is an elementary issue, please let me know what I've missed in the manual.

  • Truncation of leading Zeros when Down Loading into Excel - OLE Objects

    Hi,
    Can any one help me on this.
    I am using OLE Objects to download data into an Excel sheet. Data with leading zeros is getting truncated in Excel.
    Ex: the report output shows the plant number as 0002, but when I download to Excel the plant value becomes 2.
    I would like to have it as 0002 in Excel.
    I have declared Werks as CHAR of length 4. I am using OLE Objects for downloading into the Excel sheet.
    I am using "OLE2_OBJECT". I cannot use any other FMs to download to Excel, as I am modifying this program, not creating it.
    Thanks In Advance.
    K.Nirmala
    Message was edited by: Nirmala Reddy

    Hi Nirmala,
    While downloading to an Excel sheet, you need to change the number format of the cell from General to Text; then leading zeros won't get deleted. For that you need to set the property of the cell. Please check this sample code:
    INCLUDE OLE2INCL.
    tables : zobrent.
    data : it_kna1 type table of zobrent with header line.
    * handles for OLE objects
    DATA: H_EXCEL TYPE OLE2_OBJECT,        " Excel object
          H_MAPL TYPE OLE2_OBJECT,         " list of workbooks
          H_MAP TYPE OLE2_OBJECT,          " workbook
          H_ZL TYPE OLE2_OBJECT,           " cell
          H_F TYPE OLE2_OBJECT.            " font
    DATA  H TYPE I.
    DATA: cell1 TYPE ole2_object.
    *&   Event START-OF-SELECTION
    START-OF-SELECTION.
      select * from zobrent into table it_kna1
               where zopanid = '10001'
                and zo_brent = '050'.
    * start Excel
      CREATE OBJECT H_EXCEL 'EXCEL.APPLICATION'.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_EXCEL  'Visible' = 1.
    * get list of workbooks, initially empty
      CALL METHOD OF H_EXCEL 'Workbooks' = H_MAPL.
      PERFORM ERR_HDL.
    * add a new workbook
      CALL METHOD OF H_MAPL 'Add' = H_MAP.
      PERFORM ERR_HDL.
    * output column headings to active Excel sheet
      PERFORM FILL_CELL USING 1 1 1 'EDate'.
      PERFORM FILL_CELL USING 1 2 1 'Brent'.
      PERFORM FILL_CELL USING 1 3 1 'Zopanid'.
      PERFORM FILL_CELL USING 1 4 1 'Contract Type'.
      PERFORM FILL_CELL USING 1 5 1 'Price Type'.
      PERFORM FILL_CELL USING 1 6 1 'Installation Type'.
      PERFORM FILL_CELL USING 1 7 1 'Volume'.
      PERFORM FILL_CELL USING 1 8 1 'AQ'.
      PERFORM FILL_CELL USING 1 9 1 '00000123'.
      LOOP AT IT_KNA1.
    * copy values to active EXCEL sheet
        H = SY-TABIX + 1.
        PERFORM FILL_CELL USING H 1 0 IT_KNA1-zo_effdat.
        PERFORM FILL_CELL USING H 2 0 IT_KNA1-zo_brent.
        PERFORM FILL_CELL USING H 3 0 IT_KNA1-zopanid.
      ENDLOOP.
      CALL METHOD OF h_excel 'Cells' = cell1
        EXPORTING
          #1 = 1
          #2 = 1.
      FREE OBJECT H_EXCEL.
      PERFORM ERR_HDL.
      if sy-subrc eq 0.
       write : / 'year'(001).
      endif.
    *&      Form  FILL_CELL
    * sets cell at coordinates i,j to value val, boldtype bold
    FORM FILL_CELL USING I J BOLD VAL.
      CALL METHOD OF H_EXCEL 'Cells' = H_ZL EXPORTING #1 = I #2 = J.
      PERFORM ERR_HDL.
      GET PROPERTY OF H_ZL 'Font' = H_F.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_F 'Bold' = BOLD .
      PERFORM ERR_HDL.
    ***Changing the format of the cell from General to Text
    SET PROPERTY OF H_ZL 'NumberFormat' = '@'.
      PERFORM ERR_HDL.
      SET PROPERTY OF H_ZL 'Value' = VAL .
      PERFORM ERR_HDL.
    ENDFORM.
    *&      Form  ERR_HDL
    FORM ERR_HDL.
    IF SY-SUBRC <> 0.
      WRITE: / 'Fehler bei OLE-Automation:'(010), SY-SUBRC.
      STOP.
    ENDIF.
    ENDFORM.                    " ERR_HDL
    Just paste this code into a sample program and see.
    Please reward if found helpful.

  • Replace Metadata in Bridge CS4 appears to truncate 5.9GB .PSB File; Does CS6 do this too?

    I am running Photoshop CS4 on a MacBook Pro with Mac OS X 10.6.8, and ran into a problem with Bridge. I would like to know if this problem occurs with Photoshop CS6. Here's the sequence of events:
    1.  I have a 5.9GB .psb file created with PS; it loads into PS cleanly - no errors.
    2.  In Bridge, I create a metadata template, and apply it to my .psb file using "Replace Metadata".
    3.  The "Replace Metadata" operation completes quite quickly, no error messages
    4.  Immediately after the "replace Metadata" operation Bridge reports that my 5.9GB file is now a 1.6GB file - 4.3GB smaller!!
    5.  Bridge displays the updated metadata, no visible indication anything is wrong.
    6.  Now the file will not open in Photoshop - the read starts, but errors out with "unexpected end of file".
    7.  I restore the 5.9GB file and can open it in Photoshop without problems.
    Update:
    This problem is not specific to the "Replace Metadata" function in the Bridge menu.  I get a similar result by updating the metadata for this file in the Metadata Panel; the resulting file is truncated, but appears to be a different size this time, and it will not open in Photoshop - same error message as before.
    This is a 34-image 36856px X 7464px panorama

    Thanks for your suggestion, I appreciate your effort to help. The link you provided seems to focus on transfer rates and performance; it does take a while to write or read this file, but I expect that. I'm not using RAID of any flavour. I did not expect file corruption, which is what I believe is happening (and what PS is reporting).
    I have a number of large panoramas; this is the largest.  I believe the largest of the others is a bit over 3GB; I have no difficulty updating the metadata in the other panoramas through the metadata panel or with "Replace Metadata".  All of these files live on the same drive (therefore connected to the same firewire port).  They are backed up to a USB drive with Time Machine, and I've recovered the 5.9GB file from the Time Machine drive a number of times.  I can always read the recovered file with PS, and the size after recovery is always exactly the same.  When I realized that the size of my 5.9GB file changed during/because of the metadata update, I reverted to an earlier stage in the post-processing, got a 3.8GB .PSB and put a copy on DVD; the DVD copy is good, and I can easily and quickly reconstruct the 5.9GB file by applying the same actions used the first time (the 5.9GB file has three layers that are not in the 3.8GB .PSB).  I have not tried to change metadata in the 3.8GB PSB, and can't try that for the next few weeks as I'll be travelling.  The only thing I'm certain of is that updating the metadata in the 5.9GB .PSB reduces the size to 1.6GB (put text in Headline field through Metadata panel) or 1.4GB ("Replace Metadata" in Bridge Tools menu), depending on how the update is done.
    So, the 5.9GB file has survived several complete transfers (PS=>Disk, Disk=>Time Machine, multiple Time Machine=>Disk, PS=>Disk to 3.8GB after deleting layers, and Disk(3.8GB)=>DVD) with no signs of corruption.
    It's very hard to accept the idea that there's some kind of hardware problem that afflicts just one file.  The 5.9GB file is output from PS CS4, and I can read it back into PS with no problem. 
    I'll watch this forum for a few days to see if there are more answers until I have to travel.  If not, I'll figure it out another way and report a bug in CS6 if that appears to be appropriate.

  • Performance monitor reports and graphs don't show all the counters that were captured

    I've run into an odd behavior of Performance Monitor that I haven't been able to clear up yet: after I run a data collector set, the resulting report only shows a partial list of the counters that I captured, and so does the graph of that report (though different items are missing from each).
    For instance, I chose the following counters for the data collector set:
    \PhysicalDisk(*)\% Idle Time
    \PhysicalDisk(*)\Avg. Disk Queue Length
    \PhysicalDisk(*)\Avg. Disk sec/Read
    \PhysicalDisk(*)\Avg. Disk sec/Write
    When the data collection ends, the report that gets displayed shows all those counters for _Total instance and C:, but then E: only shows % Idle Time and Avg. Disk Queue Length. That's it, the report ends there, even though I also have F: and V: drives on
    this server.
    If I choose to view the graph of this data collection, it only shows those four counters for the _Total instance.
    It took me a while to realize that all the data is being captured, but it's just the display that is truncated. If I view the folder then open the report.html, it actually shows all the data. Same with the graphs. If I select to add counters and add
    all the missing counters, they are displayed.
    I have searched high and low but haven't been able to find more than one post at ServerFault where someone had a similar issue, but no real solution or cause was provided, so I was hoping that someone here might have an idea.
    I also found a variety of articles about missing counters due to registry settings or because the counters needed to be reset, but that's a different issue that doesn't apply in my case. By the way, this is on a Windows Server 2012 R2 VM (file server).

    Hi SomeAdmin,
    Missing counters are often caused by a corrupted related registry value. Please first refer to the related KB below, apply the fix, and then monitor the issue again.
    Event ID 3012 — Performance Counter Loading
    http://technet.microsoft.com/en-us/library/cc775053(v=ws.10).aspx
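    If the KB's fix amounts to rebuilding the counter settings, the usual command (an assumption on my part, not quoted from the KB) is run from an elevated prompt:
    cd %SystemRoot%\System32
    lodctr /R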
    More information:
    Overview of Performance Monitoring
    http://technet.microsoft.com/en-us/library/cc958257.aspx
    PerfGuide: Analyzing Poor Disk Response Times
    http://social.technet.microsoft.com/wiki/contents/articles/1516.perfguide-analyzing-poor-disk-response-times.aspx
    Analyzing Storage Performance using the Windows Performance Analysis ToolKit (WPT)
    http://blogs.technet.com/b/robertsmith/archive/2012/02/07/analyzing-storage-performance-using-the-windows-performance-toolkit.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • DBLINK truncation with SAP HANA db

    Hi - I have Oracle 11g installed on my Windows laptop, with a dblink connected to SAP's HANA database via ODBC using the HANA ODBC driver. My NVARCHAR data in HANA is being truncated in half. I am working through sqlplus; I get the same result in the SQL Developer client tool. The VARCHAR data is OK. I created three Oracle instances, with the only difference being the NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET values. I have three SIDs: orcl, orclu, and orclutf8, all with the same result. My gateway settings for each are all the same. I started testing with SID orcl, and once I found the truncation I decided to create orclu and orclutf8. On our Unix boxes we have orcl and orclu settings, and those behave the same way (we use unixODBC as the driver manager).
    I have provided the orclutf8 gateway .ora file and the orclutf8 system info below.
    Symptoms/Info:
    The character set of HANA db is AL32UTF8.
    The HANA db table contains NVARCHAR and have Unicode values (eg: em dash, even Chinese char). NVARCHAR columns gets cut in half as shown in sqlplus (same in sql developer).
    For the half that does show up, the actual Unicode character appears in sqlplus as either an unprintable character, an upside-down question mark, or a \u escape. This is OK because there are no abends, so the data gets processed, and my customers can deal with the non-converted data - it is OK with them.
    Since all SIDs are behaving the same way, I have provided the information for orclutf8: initdwutf.ora, the system info, and the trace file. Of all the things that SHOULD work, it is the one whose character set exactly matches HANA's.
    I have two tables in HANA with the same number of columns and rows. The only difference is NVARCHAR versus VARCHAR. There are three columns of lengths 3, 20, and 150.
    I took an Oracle trace when selecting from each table and compared them both. I pasted a picture at the bottom. The left side is the VARCHAR and the right side the NVARCHAR. You can see the HANA ODBC driver report a truncation issue on line 209, but I do not see this error in sqlplus. I have an SAP incident open on this.
    Is there something on the Oracle side that can be tried? For example, in the trace comparison picture, the VARCHAR trace shows that it doubled the data size for each column from 3, 20, and 150 to 6, 40, and 300. In the NVARCHAR trace it did not.
    SID: orcl
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
        WE8MSWIN1252
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
        AL16UTF16
    SID: orclu
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
        AL32UTF8
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
        AL16UTF16
    SID: orclutf8
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
        AL32UTF8
        SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
        UTF8
    initdw7utf.ora:
    # This is a sample agent init file that contains the HS parameters that are
    # needed for the Database Gateway for ODBC
    # HS init parameters
    #HS_FDS_CONNECT_INFO = <odbc data_source_name>
    HS_FDS_CONNECT_INFO = HANADW7
    HS_FDS_TRACE_LEVEL=DEBUG
    #HS_LANGUAGE=AL32UTF8
    HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8
    HS_FDS_REMOTE_DB_CHARSET=AL32UTF8
    # Environment variables required for the non-Oracle system
    #set <envvar>=<value>
    SELECT * FROM sys.props$:
    DICT.BASE       2
    DEFAULT_TEMP_TABLESPACE           TEMP
    DEFAULT_PERMANENT_TABLESPACE            USERS
    DEFAULT_EDITION       ORA$BASE
    Flashback Timestamp TimeZone            GMT
    TDE_MASTER_KEY_ID
    DST_UPGRADE_STATE            NONE
    DST_PRIMARY_TT_VERSION    11
    DST_SECONDARY_TT_VERSION          0
    DEFAULT_TBS_TYPE   SMALLFILE
    NLS_LANGUAGE          AMERICAN
    NLS_TERRITORY          AMERICA
    NLS_CURRENCY          $
    NLS_ISO_CURRENCY   AMERICA
    NLS_NUMERIC_CHARACTERS  .,
    NLS_CHARACTERSET  AL32UTF8
    NLS_CALENDAR          GREGORIAN
    NLS_DATE_FORMAT    DD-MON-RR
    NLS_DATE_LANGUAGE            AMERICAN
    NLS_SORT       BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT            HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY            $
    NLS_COMP      BINARY
    NLS_LENGTH_SEMANTICS       BYTE
    NLS_NCHAR_CONV_EXCP       FALSE
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_RDBMS_VERSION            11.2.0.1.0
    GLOBAL_DB_NAME     ORCLUTF8
    EXPORT_VIEWS_VERSION      8
    WORKLOAD_CAPTURE_MODE           
    WORKLOAD_REPLAY_MODE  
    NO_USERID_VERIFIER_SALT   57505D68AFECC3BCECE484A1C42CC8CE
    DBTIMEZONE   00:00

    1) When I tried HS_KEEP_REMOTE_COLUMN_SIZE=LOCAL, the nvarchar select statement was still truncated when displayed in sqlplus.
    For the varchar select statement, it just errored out in sqlplus:
    ERROR:
    ORA-28562: Heterogeneous Services data truncation error
    ORA-02063: preceding line from DEVUTF8
    no rows selected
    I have commented out HS_KEEP_REMOTE_COLUMN_SIZE=LOCAL for now.
    2) For the nvarchar select statement, I do not get an error message via sqlplus; I get the records displayed truncated to half the size they should be. A native ODBC error does show up in the Oracle trace file - I think that comes from the HANA ODBC driver. It is line 209 of the picture in my original post.
    3) DESCRIBE command output below:
    SQL> desc ESBA_DB.ZTESTSAP@DEVUTF8 - THIS IS THE NVARCHAR TABLE. The sizes match what is in HANA db.
    Name                                      Null?    Type
    MANDT                                     NOT NULL NVARCHAR2(3)
    NAME                                      NOT NULL NVARCHAR2(20)
    NAME_150                                  NOT NULL NVARCHAR2(150)
    SQL> desc PTAN.ZTESTSAP_VC@DEVUTF8 - THIS IS THE VARCHAR TABLE.The sizes do not match what is in HANA db.
    Name                                      Null?    Type
    MANDT                                              VARCHAR2(1)
    NAME                                               VARCHAR2(6)
    NAME150                                            VARCHAR2(50)
    4) Below is the gateway trace. I have included everything from the first occurrence of hgodscr all the way to the end of it. You can see the HANA ODBC driver truncation.
    Entered hgodscr, cursor id 1 at 2014/10/02-11:15:41
    Allocate hoada @ 03705518
    Entered hgopcda at 2014/10/02-11:15:41
    Column:1(M): dtype:-9 (WVARCHAR), prc/scl:3/0, nullbl:1, octet:3, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    Entered hgopcda at 2014/10/02-11:15:41
    Column:2(N): dtype:-9 (WVARCHAR), prc/scl:20/0, nullbl:1, octet:20, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    Entered hgopcda at 2014/10/02-11:15:41
    Column:3(N): dtype:-9 (WVARCHAR), prc/scl:150/0, nullbl:1, octet:150, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    hgodscr, line 910: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          3          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y         20         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y        150        150 128/150 1000   0  40 NAME_150
    Exiting hgodscr, rc=0 at 2014/10/02-11:15:41
    Entered hgoftch, cursor id 1 at 2014/10/02-11:15:41
    hgoftch, line 130: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          3          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y         20         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y        150        150 128/150 1000   0  40 NAME_150
    Performing delayed open.
    SQLBindCol: column 1, cdatatype: -8, bflsz: 6
    SQLBindCol: column 2, cdatatype: -8, bflsz: 22
    SQLBindCol: column 3, cdatatype: -8, bflsz: 152
    Entered hgopoer at 2014/10/02-11:15:41
    hgopoer, line 233: got native error 0 and sqlstate 01004; message follows...
    [SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}
    Exiting hgopoer, rc=0 at 2014/10/02-11:15:41
    hgoftch, line 740: calling SQLFetch got sqlstate 01004
    SQLFetch: row: 1, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 1, column 1, bflsz: 6,  bflar: 6, (bfl: 3, mbl: 3)
    SQLFetch: row: 1, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 1, column 2, bflsz: 22,  bflar: 6, (bfl: 20, mbl: 20)
    SQLFetch: row: 1, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 1, column 3, bflsz: 152,  bflar: 0, (bfl: 150, mbl: 150)
    SQLFetch: row: 2, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 2, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 2, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 2, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 2, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 2, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    [... the same SQLFetch pattern repeats for rows 3 through 39 and is elided here: column 1 (MANDT) always returns bflar 6 into its 6-byte buffer, column 2 (NAME) returns between 2 and 40 bytes into its 22-byte buffer (mbl: 20), and column 3 (NAME_150) always returns bflar 0 into its 152-byte buffer (mbl: 150) ...]
    SQLFetch: row: 40, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 40, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 40, column 2, bflsz: 22, bflar: 38
    SQLFetch: row: 40, column 2, bflsz: 22,  bflar: 38, (bfl: 0, mbl: 20)
    SQLFetch: row: 40, column 3, bflsz: 152, bflar: 298
    SQLFetch: row: 40, column 3, bflsz: 152,  bflar: 298, (bfl: 0, mbl: 150)
    40 rows fetched
    Exiting hgoftch, rc=0 at 2014/10/02-11:15:42 with error ptr FILE:hgoftch.c LINE:740 ID:Fetch resultset data
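For orientation, the batch above (40 rows in a single hgoftch call) is what a plain ODBC fetch loop produces on the driver side. In these lines bflsz appears to be the size of the buffer bound for the column and bflar the ODBC length indicator, i.e. the full length of the value available - which can exceed the buffer, as in row 40 where NAME_150 reports bflar 298 against a 152-byte buffer (the driver then truncates and signals SQLSTATE 01004). The following is a minimal sketch under those assumptions, not the gateway's actual code; fetch_all and hstmt are hypothetical names:

#include <sql.h>
#include <sqlext.h>

/* Illustrative sketch of the fetch loop behind the SQLFetch trace lines;
   not the gateway's actual source. hstmt is a hypothetical, already
   executed ODBC statement handle. */
void fetch_all(SQLHSTMT hstmt)
{
    SQLCHAR mandt[6], name[22], name_150[152];   /* array sizes = bflsz in the trace */
    SQLLEN  len1, len2, len3;                    /* length indicators = bflar        */

    SQLBindCol(hstmt, 1, SQL_C_CHAR, mandt,    sizeof(mandt),    &len1);
    SQLBindCol(hstmt, 2, SQL_C_CHAR, name,     sizeof(name),     &len2);
    SQLBindCol(hstmt, 3, SQL_C_CHAR, name_150, sizeof(name_150), &len3);

    /* SQL_SUCCEEDED also covers SQL_SUCCESS_WITH_INFO, which the driver
       returns when a value is truncated to fit its buffer (SQLSTATE 01004).
       SQLFetch returns SQL_NO_DATA at end of set; the gateway logs that
       as "0 rows fetched" and rc=1403. */
    while (SQL_SUCCEEDED(SQLFetch(hstmt))) {
        /* process mandt / name / name_150 here */
    }
}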
    Entered hgoftch, cursor id 1 at 2014/10/02-11:15:42
    hgoftch, line 130: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:40, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          4          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y          6         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y          0        150 128/150 1000   0  40 NAME_150
    0 rows fetched
    Exiting hgoftch, rc=1403 at 2014/10/02-11:15:42
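The hoada block above is the gateway's column descriptor area: DTY 12 matches ODBC's SQL_VARCHAR, MAXBUFLEN matches the mbl values seen during the fetches (3, 20 and 150), and the MOD flag 0x40 (TREAT_AS_NCHAR) shows the gateway is handling these columns as national-character data, as is typical when the remote data is multi-byte. Note also that rc=1403 is not an error - it is ORA-01403, "no data found", the normal end-of-fetch indication. Below is a minimal sketch, assuming a generic ODBC 3.x driver, of the catalog call that yields this kind of metadata; describe_name_column and hstmt are hypothetical names:

#include <sql.h>
#include <sqlext.h>

/* Illustrative only: read the metadata the gateway prints in its hoada
   block. hstmt is a hypothetical prepared/executed statement handle. */
void describe_name_column(SQLHSTMT hstmt)
{
    SQLCHAR     colname[129];
    SQLSMALLINT namelen, dtype, digits, nullable;
    SQLULEN     colsize;

    /* For column 2 (NAME) one would expect dtype == SQL_VARCHAR (12) and
       colsize == 20, matching DTY 12 / MAXBUFLEN 20 in the hoada dump. */
    SQLDescribeCol(hstmt, 2, colname, sizeof(colname), &namelen,
                   &dtype, &colsize, &digits, &nullable);
}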
    Entered hgoclse, cursor id 1 at 2014/10/02-11:15:46
    Exiting hgoclse, rc=0 at 2014/10/02-11:15:46
    Entered hgodafr, cursor id 1 at 2014/10/02-11:15:46
    Free hoada @ 03705518
    Exiting hgodafr, rc=0 at 2014/10/02-11:15:46
    Entered hgocomm at 2014/10/02-11:15:46
    keepinfo:0, tflag:1
       00: 4F52434C 55544638 2E376265 35343664  [ORCLUTF8.7be546d]
       10: 392E312E 32362E36 3630               [9.1.26.660]
                     tbid (len 23) is ...
       00: 4F52434C 55544638 5B312E32 362E3636  [ORCLUTF8[1.26.66]
       10: 305D5B31 2E345D                      [0][1.4]]
    cmt(0):
    Entered hgocpctx at 2014/10/02-11:15:46
    Exiting hgocpctx, rc=0 at 2014/10/02-11:15:46
    Exiting hgocomm, rc=0 at 2014/10/02-11:15:46
    Entered hgolgof at 2014/10/02-11:15:46
    tflag:1
    Exiting hgolgof, rc=0 at 2014/10/02-11:15:46
    Entered hgoexit at 2014/10/02-11:15:46
    Exiting hgoexit, rc=0
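The rest of the trace is a clean teardown: hgoclse and hgodafr close and free the cursor, hgocomm commits (the hex dump appears to be the global transaction id, ORCLUTF8...), hgolgof logs off the remote data source, and hgoexit ends the gateway session - all with rc=0, so the connection itself finished without error. On the ODBC side this corresponds roughly to the sequence below, sketched with hypothetical handle names:

#include <sql.h>
#include <sqlext.h>

/* Hypothetical teardown mirroring hgoclse/hgodafr/hgocomm/hgolgof above;
   the handles are assumed to have been allocated at connect time. */
void teardown(SQLHSTMT hstmt, SQLHDBC hdbc, SQLHENV henv)
{
    SQLCloseCursor(hstmt);                          /* hgoclse: close the cursor  */
    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);          /* hgodafr: free the handle   */
    SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);   /* hgocomm: commit            */
    SQLDisconnect(hdbc);                            /* hgolgof: log off           */
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
    SQLFreeHandle(SQL_HANDLE_ENV, henv);            /* hgoexit: agent exits       */
}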
