Multiple entries in a Z table with same key fields

Hi
I have a Z table that was originally defined with 3 key fields and now holds around 100,000 (1 lakh) records. Later on, two of the non-key fields were converted into key fields.
This table is populated with records when a Z transaction is saved.
Sometimes, however, the system writes the same record more than once: twice, sometimes three times, and so on. I have found that all fields (both key fields and non-key fields) of these records are identical. In other words, the record is inserted into the database table n times, perhaps because of some faulty logic in the program.
If I try to enter the same record using SM30, I get an error message stating that the record already exists.
What can be the reason?

Hi,
It seems there is some kind of data inconsistency. Try reading all the records and deleting the duplicate entries through a program; once that is done, there should be no more duplicate entries. Also, before updating the table, filter the data to avoid inserting duplicates.
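A minimal sketch of such a cleanup program (ZTABLE stands for the actual table name; test it on a copy first, since it clears and reloads the whole table):

REPORT zclean_duplicates.

DATA lt_ztable TYPE STANDARD TABLE OF ztable.

" Read everything, then drop the redundant copies in memory.
" The duplicate rows are fully identical here, so sorting by the
" default key and comparing all fields is sufficient.
SELECT * FROM ztable INTO TABLE lt_ztable.
SORT lt_ztable.
DELETE ADJACENT DUPLICATES FROM lt_ztable COMPARING ALL FIELDS.

" Clear the table and reload the de-duplicated set.
DELETE FROM ztable.
INSERT ztable FROM TABLE lt_ztable.
COMMIT WORK.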
Regards,
Nagaraj

Similar Messages

  • Reading data from BSEG table with Non-key fields in where clause

    Hi All,
    I have to read data from the BSEG table based on the WBS element field (PROJK). Since I am not passing key fields in the WHERE clause, the system cannot run the SELECT statement efficiently. And since BSEG is a cluster table, I cannot even create a secondary index on the PROJK field.
    Could you please tell me how to improve its performance.
    Regards
    Jaker.

    " FOR ALL ENTRIES with an empty driver table would select all of BSEG
    IF it_final[] IS NOT INITIAL.
      SELECT bukrs
             belnr
             gjahr
             buzei        " item number included so each result row is unique
             shkzg
             dmbtr
             hkont
             lifnr
             matnr
             werks
             menge
             meins
             ebeln
        FROM bseg
        INTO TABLE it_bseg
        PACKAGE SIZE 10
        FOR ALL ENTRIES IN it_final
        WHERE bukrs EQ it_final-bukrs
          AND belnr EQ it_final-belnr
          AND gjahr EQ it_final-gjahr
          AND buzei EQ it_final-buzei
          AND hkont EQ it_final-hkont
          AND werks IN s_werks.
        " process the current package of rows here
      ENDSELECT.
    ENDIF.
    Fetch from BSEG in packages like this and collect all the other information into a final internal table; this reduces the load of each database hit. Also try to put the data into a hashed internal table (it_bseg); that will definitely improve the later read performance.
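    A sketch of the hashed target table the reply mentions (the field list is illustrative; the full BSEG key, including BUZEI, must be part of the selection so that the unique key is never violated):

    TYPES: BEGIN OF ty_bseg,
             bukrs TYPE bseg-bukrs,
             belnr TYPE bseg-belnr,
             gjahr TYPE bseg-gjahr,
             buzei TYPE bseg-buzei,
             dmbtr TYPE bseg-dmbtr,
             hkont TYPE bseg-hkont,
           END OF ty_bseg.

    " A hashed key gives constant-time READ TABLE access afterwards.
    DATA it_bseg TYPE HASHED TABLE OF ty_bseg
         WITH UNIQUE KEY bukrs belnr gjahr buzei.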
    <REMOVED BY MODERATOR>
    Dara.
    Edited by: Alvaro Tejada Galindo on Apr 21, 2008 12:47 PM

  • How to use Read table with out key fields

    Hi Experts,
    I need to combine the data of 2 internal tables into a single table.
    There are 3 common fields between the 2 tables, but I don't have key fields. How can I use READ TABLE in this case?
    Thanks in Advance.
    Edited by: satish4abap on Mar 10, 2010 9:39 AM

    Hi Satish,
    Key fields here are nothing but the common fields with which you can pick the data from the second internal table.
    If you paste your internal table fields, we will be able to assist you better.
    However, in a general scenario you can use it as below. Here, data from 3 internal tables is combined into a single internal table; note that READ TABLE ... BINARY SEARCH requires the lookup tables to be sorted first.
    SORT T_V_TCAV201 BY ATTRV20.  " prerequisite for BINARY SEARCH
    SORT T_PNTPR     BY GUID_PR.  " prerequisite for BINARY SEARCH
    LOOP AT T_PRGEN INTO WA_PRGEN.
      WA_FINAL-GUID_PR = WA_PRGEN-GUID_PR.
      WA_FINAL-ATTR20A = WA_PRGEN-ATTR20A.
      WA_FINAL-ATTR05A = WA_PRGEN-ATTR05A.
      WA_FINAL-ATTR05B = WA_PRGEN-ATTR05B.
      WA_FINAL-ATTR05C = WA_PRGEN-ATTR05C. " + DG1K902190
      WA_FINAL-ATTR10A = WA_PRGEN-ATTR10A.
      READ TABLE T_V_TCAV201 INTO WA_V_TCAV201
           WITH KEY ATTRV20 = WA_PRGEN-ATTR20A BINARY SEARCH.
      IF SY-SUBRC = 0.
        WA_FINAL-TEXT1 = WA_V_TCAV201-TEXT1.   "SUBID-TEXT1
      ENDIF.
      READ TABLE T_PNTPR INTO WA_PNTPR
           WITH KEY GUID_PR = WA_PRGEN-GUID_PR BINARY SEARCH.
      IF SY-SUBRC = 0.
        WA_FINAL-PRVSY = WA_PNTPR-PRVSY.   "PROD NO
        WA_FINAL-GRVSY = WA_PNTPR-GRVSY.   "LOGICAL SYS GROUP
      ENDIF.
      APPEND WA_FINAL TO T_FINAL.
      CLEAR WA_FINAL.  " avoid carrying stale lookup values into the next row
    ENDLOOP.

  • Data from two tables with no key fields

    Hi Friends,
    I want to retrieve data from the MARA and MSEG tables (MATNR from MARA and MENGE from MSEG), but the problem is that I see no key fields shared by
    these two tables. How can I retrieve the data from these two tables?
    regards,
    bikash

    Hi,
    You need to have a common field between the 2 tables if you want to put a join between them.
    Otherwise:
    Find the check tables of the tables' primary keys.
    Try to find common fields in one of those check tables.
    Example:
    You want to join tables A and B.
    Both tables contain no common fields.
    A has primary key fields X and Y.
    Go to the check table of X or Y.
    Let the check table of X be C.
    Look for a field in C that also exists in table B.
    Join tables A and C.
    Based on this, pick the data from B.
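    In this particular case, by the way, MSEG does carry the material number MATNR as well, so a direct inner join is possible. A minimal sketch (the structure ty_result and the table it_result are illustrative names):

    TYPES: BEGIN OF ty_result,
             matnr TYPE mara-matnr,
             menge TYPE mseg-menge,
           END OF ty_result.

    DATA it_result TYPE STANDARD TABLE OF ty_result.

    " MATNR is the common field between MARA and MSEG.
    SELECT a~matnr b~menge
      FROM mara AS a
      INNER JOIN mseg AS b ON b~matnr = a~matnr
      INTO TABLE it_result.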
    Regards,
    Prashant

  • SSIS 2012 is intermittently failing with below "Invalid date format" while importing data from a source table into a Destination table with same exact schema.

    We migrated our packages from SSIS 2008 to 2012. The package works fine in all environments except one.
    There, SSIS 2012 intermittently fails with the error below while importing data from a source table into a destination table with the exact same schema.
    Error: 2014-01-28 15:52:05.19
       Code: 0x80004005
       Source: xxxxxxxx SSIS.Pipeline
       Description: Unspecified error
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC0202009
       Source: Process xxxxxx Load TableName [48]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: "Invalid date format".
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC020901C
       Source: Process xxxxxxxx Load TableName [48]
       Description: There was an error with Load TableName.Inputs[OLE DB Destination Input].Columns[Updated] on Load TableName.Inputs[OLE DB Destination Input]. The column status returned was: "Conversion failed because the data value overflowed
    the specified type.".
    End Error
    But when we reorder the "Updated" column in the destination table, the package imports the data successfully.
    This looks like a bug to me. Any suggestions?

    Hi Mohideen,
    Based on my research, the issue might be related to one of the following factors:
    Memory pressure. Check whether there is memory pressure when the issue occurs. In addition, if the package runs in the 32-bit runtime on that specific server, use the 64-bit runtime instead.
    A known issue with SQL Server Native Client. As a workaround, use the .NET data provider instead of SNAC.
    Hope this helps.
    Regards,
    Mike Yin
    TechNet Community Support

  • How can i export the data to excel which has 2 tables with same number of columns & column names?

    Hi everyone, again landed up with a problem.
    After trying a lot to do it myself, finally decided to post here..
    I have created a form in Forms Builder 6i in which, on clicking a button, the data gets exported to an Excel sheet.
    It works fine with a single table. The problem is that I am unable to do the same with 2 tables,
    because both tables have the same number of columns and the same column names.
    Below are the 2 tables; both have identical columns:
    Table-1 (MONTHLY_PART_1) and Table-2 (MONTHLY_PART_2), each with the columns:
    SL_NO, COMP, DUE_DATE, U-1, U-2, U-4, U-20, U-25
    Since both the tables have same column names, I'm getting the following error :
    Error 402 at line 103, column 4
      alias required in SELECT list of cursor to avoid duplicate column names.
    So how can I export the data to Excel when the 2 tables have the same number of columns and the same column names?
    Should I paste the code? Should I post this query in the 'SQL and PL/SQL' forum?
    Help me with this, please.
    Thank You.

    You'll have to *alias* your columns, not prefix them with the table names:
    $[CHE_TEST@asterix1_impl] r
      1  declare
      2    cursor cData is
      3      with data as (
      4        select 1 id, 'test1' val1, 'a' val2 from dual
      5        union all
      6        select 1 id, '1test' val1, 'b' val2 from dual
      7        union all
      8        select 2 id, 'test2' val1, 'a' val2 from dual
      9        union all
    10        select 2 id, '2test' val1, 'b' val2 from dual
    11      )
    12      select a.id, b.id, a.val1, b.val1, a.val2, b.val2
    13      from data a, data b
    14      where a.id = b.id
    15      and a.val2 = 'a'
    16      and b.val2 = 'b';
    17  begin
    18    for rData in cData loop
    19      null;
    20    end loop;
    21* end;
      for rData in cData loop
    ERROR at line 18:
    ORA-06550: line 18, column 3:
    PLS-00402: alias required in SELECT list of cursor to avoid duplicate column names
    ORA-06550: line 18, column 3:
    PL/SQL: Statement ignored
    $[CHE_TEST@asterix1_impl] r
      1  declare
      2    cursor cData is
      3      with data as (
      4        select 1 id, 'test1' val1, 'a' val2 from dual
      5        union all
      6        select 1 id, '1test' val1, 'b' val2 from dual
      7        union all
      8        select 2 id, 'test2' val1, 'a' val2 from dual
      9        union all
    10        select 2 id, '2test' val1, 'b' val2 from dual
    11      )
    12      select a.id a_id, b.id b_id, a.val1 a_val1, b.val1 b_val1, a.val2 a_val2, b.val2 b_val2
    13      from data a, data b
    14      where a.id = b.id
    15      and a.val2 = 'a'
    16      and b.val2 = 'b';
    17  begin
    18    for rData in cData loop
    19      null;
    20    end loop;
    21* end;
    PL/SQL procedure successfully completed.
    cheers

  • Query to insert a row in a table with same values except 1 field.

    Suppose I have a table with 100 columns(fields).
    I want to insert a row with 99 fields being the same as in a previous row and one field being different.
    The table doesn't have any constraints.
    Kindly suggest a query to solve the above purpose..

    And for even lazier people, a DESC of the table is shorter to write. :-)
    Then copy & paste it into any editor and use column mode to add a comma after every column name in one shot. Or have the system do it for you. Be lazy:
    SELECT Column_Name || ',' FROM User_Tab_Columns WHERE Table_Name = '';
    Indeed, it can be converted into a script called desc(.sql):
    SELECT Column_Name || ',' FROM User_Tab_Columns WHERE Table_Name = '&1' ORDER BY Column_Id;
    Then run desc or @desc.
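    With the generated column list, the copy itself is a single INSERT ... SELECT that repeats every column except the one being changed. A sketch, assuming a table T with columns C1, C2, C3 where C2 gets the new value (all names are illustrative):

    INSERT INTO t (c1, c2, c3)
    SELECT c1,
           'new value',          -- the one column that differs
           c3
    FROM   t
    WHERE  c1 = :source_row_id;  -- identifies the row to copy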

  • SQL 2008 how to compare & identify data among 2 tables with same structure ?

    Hello, is there a T-SQL way of comparing two tables with the same structure row by row, for example comparing a DateTimeStamp field in order to identify whether the data was changed (if Table1.DateField > Table2.DateField, then UPDATE its entire row to match the corresponding Table1 row)?
    I'm thinking I would need a cursor to loop through, or are there better ways in T-SQL?
    Thanks in advance.

    Hi,
    I am assuming you have a PRIMARY KEY or UNIQUE field in both tables, which you should!
    You can do this using an UPDATE with an INNER JOIN (see "SQL Server – Update Table with INNER JOIN").
    Working example:
    USE [TestDatabase]
    GO
    /* Create Temp Tables */
    CREATE TABLE [dbo].[Table1]
    (
         [ID] INT
        ,[Name] VARCHAR(10)
        ,[ModifyDate] DATETIME
    )
    CREATE TABLE [dbo].[Table2]
    (
         [ID] INT
        ,[Name] VARCHAR(10)
        ,[ModifyDate] DATETIME
    )
    GO
    INSERT INTO [dbo].[Table1] ([ID], [Name], [ModifyDate]) VALUES (1, 'Vishal', GETDATE())
    INSERT INTO [dbo].[Table2] ([ID], [Name], [ModifyDate]) VALUES (1, 'Gajjar', GETDATE() + 1)
    GO
    SELECT [ID], [Name], [ModifyDate] FROM [dbo].[Table1]
    SELECT [ID], [Name], [ModifyDate] FROM [dbo].[Table2]
    GO
    UPDATE T1
    SET T1.[Name] = T2.[Name]
    FROM [dbo].[Table1] T1
    INNER JOIN [dbo].[Table2] T2 ON T1.[ID] = T2.[ID] /* Join on Key field */
    WHERE T2.[ModifyDate] > T1.[ModifyDate] /* Update criteria */
    GO
    /* CleanUp - Drop Tables
    DROP TABLE [dbo].[Table1], [dbo].[Table2]
    */
    This updates Table1 with the latest values from Table2; you need to adapt the UPDATE statement to your scenario.
    - Vishal
    SqlAndMe.com

  • Get the Common from Two Internal Tables with same structure

    Hi ,
    I need to get the common data from two internal tables with the same structure, using a looping method.
    For e.g.
    I have two internal table say ITAB1 and ITAB2.
    ITAB1 has values A,B,C,D,E,F
    ITAB2 has values A,H,B,Y,O
    Output at runtime should be : A,B

    Hi Mohit,
    1. If you want to compare all fields for matching purposes,
       then you can do it like this.
    2.
    REPORT abc.
    DATA : a LIKE t001 OCCURS 0 WITH HEADER LINE.
    DATA : b LIKE t001 OCCURS 0 WITH HEADER LINE.
    " Compare every row of A with every row of B (all fields).
    LOOP AT a.
      LOOP AT b.
        IF a = b.          " header lines are compared field by field
          WRITE :/ 'SAME'.
        ENDIF.
      ENDLOOP.
    ENDLOOP.
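    The nested loop above compares every pair of rows, which gets slow for large tables. A cheaper sketch for the single-field example from the question (the row type ty_row and the field VAL are illustrative):

    TYPES: BEGIN OF ty_row,
             val TYPE char10,
           END OF ty_row.
    DATA: itab1       TYPE STANDARD TABLE OF ty_row,
          itab2       TYPE STANDARD TABLE OF ty_row,
          itab_common TYPE STANDARD TABLE OF ty_row,
          wa          TYPE ty_row.

    " Sort ITAB2 once, then look up each ITAB1 row in O(log n).
    SORT itab2 BY val.
    LOOP AT itab1 INTO wa.
      READ TABLE itab2 WITH KEY val = wa-val
           TRANSPORTING NO FIELDS BINARY SEARCH.
      IF sy-subrc = 0.
        APPEND wa TO itab_common.  " value occurs in both tables
      ENDIF.
    ENDLOOP.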
    regards,
    amit m.

  • 2 tables with same primary key

    Hi,
    Having 2 tables with the same primary key is possible, but when is it wise to go for such a design?
    Is that a bad design?
    eg:
    Personal_details_employee
    Official_details_employee
    these 2 tables have same Primary key emp_id.
    Please also reply to me at [email protected].
    Thanks,
    vikram

    hi Vikram,
    In this case you can keep the data in one table itself; why do you want to have two tables?
    - Suresh.A

  • R4 EA: Error loading a table with an XMLTYPE field

    When I try to view the data in a table with an XMLTYPE field, I get the following error in R4 EA Version 4.0.0.12 Build MAIN-12-27. This works in 2.2 and 3.0.
    oracle.sqldeveloper.migration.application
         Error: Resource not found: ${SCRATCH_COMMAND_ICON}.
    Double clicking on the error opens the EXTENSION.XML file and shows this line:
    <trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
      <!-- Add registry here if required -->
      <triggers xmlns:c="http://xmlns.oracle.com/ide/customization">
        <actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
          <action id="MigrationProject.ApplicationScan">
            <properties>
              <property name="Name">${APPSCAN_TITLE}</property>
              <property name="MnemonicKey">${APPSCAN_TITLE2}</property>
              <property name="SmallIcon">res:${SCRATCH_COMMAND_ICON}</property>
            </properties>
          </action>
        </actions>
    The table has the following definition:
    ID              NUMBER(38,0)   NOT NULL
    WS_DATA         XMLTYPE        NULLABLE
    WS_SNAPSHOT_ID  NUMBER(38,0)   NOT NULL
    Any help would be greatly appreciated. We have to upgrade to 4.0 because our security team will no longer allow Java 6 on any server or workstation.
    Thanks,
    Steve

    Hi Steve
    Still no response?
    I am having the same issue when querying a table with an XMLTYPE column.
    How is your XMLTYPE stored in the DB, as a CLOB or as binary XML?
    Regards,
    Shaun

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
    I have a table with many LONG fields (28). So far, everything works fine.
    However, if I add another LONG field (making 29 LONG fields), I can no longer
    insert a data set.
    Is there a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02"
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID")
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    >
    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
    It solves my problem. Thanks a lot. -- I can hardly believe that this is all that is needed to work around the bug. That may be the reason why I never gave it a try.
    >
    > Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
    > Honestly, you can save data up to 2 GB in a BLOB/CLOB.
    > Currently, your data design allows 56 GB per row.
    > Moreover 26 of those columns seems to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
    > still there is no unique constraint preventing that you get 10000 of reports with the same name...
    You are right, this table looks a bit strange. The story behind it is: each Crystal Report in the application has a few text blocks which are the same for all of the recipients (e.g. the persons a letter is created for). In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. So I put the texts of the text blocks into this "strange" db table (one row for each report, one field for each text block; the name of the report is given by "ztb_nameofreport"), and the application offers a menu by which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
    (These texts would bloat the SQL select command of the Crystal Report considerably if they were integrated into that select command. So it is done differently: the texts are read before the Crystal Report is loaded, then the texts are passed to the Crystal Report via its parameters, and finally the Crystal Report is loaded.)
    >
    > - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients, and MaxDB ODBC driver 7.5.00.26 does not work for them. I got the hint to use ODBC driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and ODBC driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future; then an upgrade may be reasonable.
    >
    > - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
    In the first MaxDB version I used, schemas were not available, and I have not changed this since. Is there an easy way to move an existing table into a new schema?
    Michael

  • How to fill internal table with selection screen field.

    Hi all,
    I am new to SAP. Please tell me how to fill an internal table from selection screen fields.

    Hi,
    Please see the example below:
    I have used both SELECT-OPTIONS and a PARAMETERS field on the selection screen.
    It shows how each of them is transferred into an internal table.
    * type declaration
    TYPES: BEGIN OF t_matnr,
            matnr TYPE matnr,
           END OF t_matnr,
           BEGIN OF t_vbeln,
             vbeln TYPE vbeln,
           END OF t_vbeln.
    * internal table declaration
    DATA : it_mara  TYPE STANDARD TABLE OF t_matnr,
           it_vbeln TYPE STANDARD TABLE OF t_vbeln.
    * workarea declaration
    DATA : wa_mara  TYPE t_matnr,
           wa_vbeln TYPE t_vbeln.
    * selection-screen field
    SELECTION-SCREEN: BEGIN OF BLOCK b1.
    PARAMETERS : p_matnr TYPE matnr.
    SELECT-OPTIONS : s_vbeln FOR wa_vbeln-vbeln.
    SELECTION-SCREEN: END OF BLOCK b1.
    START-OF-SELECTION.
    * I am adding parameter value to my internal table
      wa_mara-matnr = p_matnr.
      APPEND wa_mara TO it_mara.
    * I am adding the select-options values to an internal table
    * (only the LOW value of each selection row is used here)
      LOOP AT s_vbeln.
        wa_vbeln-vbeln =  s_vbeln-low.
        APPEND  wa_vbeln TO  it_vbeln.
      ENDLOOP.
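    Note that if the goal is to read database rows, the select-option can also be used directly in the WHERE clause instead of copying out the LOW values. A small sketch, assuming the sales document header table VBAK as an illustrative source:

    DATA it_vbak TYPE STANDARD TABLE OF vbak.
    * The range table s_vbeln is applied as the selection directly.
    SELECT * FROM vbak INTO TABLE it_vbak
      WHERE vbeln IN s_vbeln.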
    Regards,
    Ankur Parab

  • Unique data record means you can't update a record from ECC with same key.

    "Unique data records" means you can't update a record from ECC with the same key fields, right?
    Details: for example, I have two requests, Req1 and Req2, in a DSO with the Unique Data Records setting checked. On day one, Req1 has a field QUANTITY with value 10 in the active data table. On day two, Req1 cannot be overwritten from ECC by Req2 with the same key fields but a different value, 20, in the field QUANTITY, because of the Unique Data Records setting. So the delta load from ECC to the DSO finally fails because of this setting. Is that right?
    I think we can only use the unique record setting when going from a DSO to a cube, right?
    Please give me a simple scenario in which we can use this setting.
    I already search the threads and will assign points only to valuable information.
    Thanks in advance.
    York

    Hi Les,
    Unique Data Records:
    With the Unique Data Records indicator, you determine whether only unique data records are to be updated to the ODS object. This means that you cannot load a data record into the ODS object the key combination for which already exists in the system – otherwise a termination occurs. Only use this setting when you are sure that only unique data records are to be loaded into the ODS object (for example, single documents). A typical application of this is in the loading of mass data. It improves the load performance.
    Hope it Helps
    Srini

  • Maintenance View for custom table with foreign key relationship

    Hi Folks,
    I have created a custom table with foreign key relationships to other check tables. I want to create a maintenance view / table maintenance generator. What do I need to take care of for the foreign-key-related fields while creating the maintenance view / table maintenance generator?
    Regards,
      santosh

    Hi,
    You do not have to do anything explicitly for the foreign key relationships in the table maintenance generator.
    Create the table maintenance generator via SE11 and it will take care of all the foreign key checks by itself.
    Regards,
    Ankur Parab
