Issue with updating partitioned table

Hi,
Has anyone seen this bug with updating partitioned tables?
It's very esoteric: it occurs when we update a partitioned table using a join to a temp table (not a non-temp table), the join produces multiple matching rows, the column being updated is the partition column and isn't the first column in the primary key, and the table contains a bit field. Change any one of these features and the bug disappears.
We've tested this on 15.5 and 15.7 SP122 and the error occurs in both of them.
Here's the test case. It performs the same operation on a partitioned table and a non-partitioned table, but the partitioned table fails with the error "Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'".
I'd be interested if anyone has seen this and has a version of Sybase without the issue.
Unfortunately, when it happens on a replicated table it takes down Rep Server.
CREATE TABLE #table1
    (   PK          char(8) null,
        FileDate        date,
        changed         bit
    )
CREATE TABLE partitioned  (
  PK         char(8) NOT NULL,
  ValidFrom     date DEFAULT current_date() NOT NULL,
  ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
  )
LOCK DATAROWS
  PARTITION BY RANGE (ValidTo)
  ( p2014 VALUES <= ('20141231') ON [default],
  p2015 VALUES <= ('20151231') ON [default],
  pMAX VALUES <= (MAX) ON [default]
  )
CREATE UNIQUE CLUSTERED INDEX pk
  ON partitioned(PK, ValidFrom, ValidTo)
  LOCAL INDEX
CREATE TABLE unpartitioned  (
  PK         char(8) NOT NULL,
  ValidFrom     date DEFAULT current_date() NOT NULL,
  ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
  )
LOCK DATAROWS
CREATE UNIQUE CLUSTERED INDEX pk
  ON unpartitioned(PK, ValidFrom, ValidTo)
insert partitioned
select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
insert unpartitioned
select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
insert #table1
select "ET00jPzh", "Jan 15 2015", 1
union all
select "ET00jPzh", "Jan 15 2015", 1
go
update partitioned
set    ValidTo = dateadd(dd,-1,FileDate)
from   #table1 t
inner  join partitioned p on (p.PK = t.PK)
where  p.ValidTo = '99991231'
and    t.changed = 1
go
update unpartitioned
set    ValidTo = dateadd(dd,-1,FileDate)
from   #table1 t
inner  join unpartitioned u on (u.PK = t.PK)
where  u.ValidTo = '99991231'
and    t.changed = 1
go
drop table #table1
go
drop table partitioned
drop table unpartitioned
go
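
One workaround we may try (an untested assumption on my part, not a confirmed fix; #changed is just an illustrative name) is to deduplicate the temp-table rows before the join, so each partitioned row is matched only once:

-- build a deduplicated work table of changed keys, then drive the update from it
select distinct t.PK, t.FileDate
into   #changed
from   #table1 t
where  t.changed = 1
go
update partitioned
set    ValidTo = dateadd(dd,-1,c.FileDate)
from   #changed c
inner  join partitioned p on (p.PK = c.PK)
where  p.ValidTo = '99991231'
go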

wrt replication - it is a bit unclear, as not enough information has been given to say what happened. I am also not sure that your DBAs are accurately telling you what happened - they may have made the problem worse by not knowing what to do themselves; e.g. 'losing' the log points to the fact that someone didn't know what they should do. You can *always* disable the replication secondary truncation point and resync a standby system, so claims about 'losing' the log are a bit strange to be making.
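For reference, the usual sequence looks roughly like this (a sketch only - your_db is a placeholder, and the exact resync steps depend on your Rep Server topology):

-- run in the primary database whose log is pinned by the secondary truncation point
use your_db
go
dbcc settrunc(ltm, ignore)   -- release the secondary truncation point so the log can be truncated
go
-- ...dump/truncate the log and rematerialize the standby, then re-enable the marker:
dbcc settrunc(ltm, valid)
go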
wrt ASE versions, I suspect that if there are any differences, they may have to do with endian-ness and not the version of ASE itself. There may be other factors, but I would suggest the best thing would be to open a separate message/case on it.
Adaptive Server Enterprise/15.7/EBF 23010 SMP SP130 /P/X64/Windows Server/ase157sp13x/3819/64-bit/OPT/Fri Aug 22 22:28:21 2014:
-- testing with tinyint
1> use demo_db
1>
2> CREATE TABLE #table1
3>     (   PK          char(8) null,
4>         FileDate        date,
5> --        changed         bit
6>  changed tinyint
7>     )
8>
9> CREATE TABLE partitioned  (
10>   PK         char(8) NOT NULL,
11>   ValidFrom     date DEFAULT current_date() NOT NULL,
12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
13>   )
14>
15> LOCK DATAROWS
16>   PARTITION BY RANGE (ValidTo)
17>   ( p2014 VALUES <= ('20141231') ON [default],
18>   p2015 VALUES <= ('20151231') ON [default],
19>   pMAX VALUES <= (MAX) ON [default]
20>         )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23>   ON partitioned(PK, ValidFrom, ValidTo)
24>   LOCAL INDEX
25>
26> CREATE TABLE unpartitioned  (
27>   PK         char(8) NOT NULL,
28>   ValidFrom     date DEFAULT current_date() NOT NULL,
29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
30>   )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34>   ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set    ValidTo = dateadd(dd,-1,FileDate)
4> from   #table1 t
5> inner  join partitioned p on (p.PK = t.PK)
6> where  p.ValidTo = '99991231'
7> and    t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set    ValidTo = dateadd(dd,-1,FileDate)
4> from   #table1 t
5> inner  join unpartitioned u on (u.PK = t.PK)
6> where  u.ValidTo = '99991231'
7> and    t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned
-- duplicating with 'int'
1> use demo_db
1>
2> CREATE TABLE #table1
3>     (   PK          char(8) null,
4>         FileDate        date,
5> --        changed         bit
6>  changed int
7>     )
8>
9> CREATE TABLE partitioned  (
10>   PK         char(8) NOT NULL,
11>   ValidFrom     date DEFAULT current_date() NOT NULL,
12>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL
13>   )
14>
15> LOCK DATAROWS
16>   PARTITION BY RANGE (ValidTo)
17>   ( p2014 VALUES <= ('20141231') ON [default],
18>   p2015 VALUES <= ('20151231') ON [default],
19>   pMAX VALUES <= (MAX) ON [default]
20>         )
21>
22> CREATE UNIQUE CLUSTERED INDEX pk
23>   ON partitioned(PK, ValidFrom, ValidTo)
24>   LOCAL INDEX
25>
26> CREATE TABLE unpartitioned  (
27>   PK         char(8) NOT NULL,
28>   ValidFrom     date DEFAULT current_date() NOT NULL,
29>   ValidTo       date DEFAULT '31-Dec-9999' NOT NULL,
30>   )
31> LOCK DATAROWS
32>
33> CREATE UNIQUE CLUSTERED INDEX pk
34>   ON unpartitioned(PK, ValidFrom, ValidTo)
35>
36> insert partitioned
37> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
38>
39> insert unpartitioned
40> select "ET00jPzh", "Jan  7 2015", "Dec 31 9999"
41>
42> insert #table1
43> select "ET00jPzh", "Jan 15 2015", 1
44> union all
45> select "ET00jPzh", "Jan 15 2015", 1
(1 row affected)
(1 row affected)
(2 rows affected)
1>
2> update partitioned
3> set    ValidTo = dateadd(dd,-1,FileDate)
4> from   #table1 t
5> inner  join partitioned p on (p.PK = t.PK)
6> where  p.ValidTo = '99991231'
7> and    t.changed = 1
Msg 2601, Level 14, State 6:
Server 'PHILLY_ASE', Line 2:
Attempt to insert duplicate key row in object 'partitioned' with unique index 'pk'
Command has been aborted.
(0 rows affected)
1>
2> update unpartitioned
3> set    ValidTo = dateadd(dd,-1,FileDate)
4> from   #table1 t
5> inner  join unpartitioned u on (u.PK = t.PK)
6> where  u.ValidTo = '99991231'
7> and    t.changed = 1
(1 row affected)
1>
2> drop table #table1
1>
2> drop table partitioned
3> drop table unpartitioned

Similar Messages

  • Issue with Update of Table VARINUM

    Hi,
    I am getting wait issues with updates to table VARINUM. Has anybody faced such an issue?
    I have a lot of jobs running in the background, submitted through a report. What could be the issue?
    Regards,
    Abhishek jolly

    This is quite old, but not answered properly yet, so here you go:
    SAP generates a new job and temporary variant on report RSDBSPJS for each HTTP call, which creates database locks on table VARINUM.
    This causes any heavyweight BSP application to hang and give timeout errors.
    The problem is fixed by applying OSS note 1791958, which is not included in any service pack.

  • Export Issues with Compressed Partition Tables?

    We recently partitioned and compressed some large tables. It appears, though I'm not sure yet, that this is causing the export to run extremely slowly. The database is at 10.2.0.2 and we are using the exp utility, not Data Pump. Does anyone know of any known issues with using exp to export compressed, partitioned tables?

    Can you give more details of the table structure (with dbms_metadata if possible) and how you are taking the export, please?
    Did you try to take a SQL trace of the export process to see what is going on behind the scenes? This is an introduction, if you need one:
    http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
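    For the table-structure part of that request, a minimal sketch using DBMS_METADATA (YOUR_SCHEMA and YOUR_TABLE are placeholders for the real owner and table names):
    -- SQL*Plus settings so the full DDL (including partitioning/compression clauses) is not truncated
    SET LONG 1000000
    SET PAGESIZE 0
    SELECT dbms_metadata.get_ddl('TABLE', 'YOUR_TABLE', 'YOUR_SCHEMA') FROM dual;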

  • Issue with updation of table BBP_PDBEI

    Hello,
    I am facing a strange issue. I created an extended classic PO. It is properly replicated in the backend. But the table BBP_PDBEI is not getting filled. The only details I found in BBP_PD were BE_OBJECT_TYPE as BUS2201 and BE_OBJECT_ID as 1. BBP_GET_STATUS_2 seems to work fine.
    If I execute function module BBP_PD_PO_TRANSFER_EXEC using the GUID of PO which is already transferred to backend, then the table BBP_PDBEI is updated properly.
    Therefore, every time I need to update the PO using FM BBP_PD_PO_TRANSFER_EXEC. Ideally, it should happen automatically.
    Any suggestions?
    Thanks,
    Arun Singh

    Hi,
    It appears to be more of a BBP_GET_STATUS program issue.
    You can do 2 things:
    1) Execute Report BBP_GET_STATUS_2 manually & then check BBP_PD for the particular PO.
    2) You may also look for spools (Log) for the GET STATUS job in tcode SM37. Look for any errors listed.
    Cheers,
    Akash

  • Issue with data dictionary -Table maintanance generator

    Hi all,
    I have an issue with the data dictionary table maintenance generator. I have entered some records in a custom table (ZBCSECROLETOGRP) and changed the delivery class from C to A. When I create the table maintenance generator, I encounter the following errors:
    1)Field ZBCSECROLETOGRP-PORTALGROUP shortened (new visible length: 000032)
    2)0012 could not be generated
    3)In TCTRL_ZBCSECROLETOGRP field LENGTH has the invalid value 01
    My main goal is to create the table maintenance generator and transport it to the downstream systems.
    Please help.
    ThnX in advance,
    Vishal..

    HI,
    Regenerate the table maintenance by selecting the checkbox of "Modified field structure" => new entry & then save.
    Also ensure that the new changes do not affect old data because of data type changes. If they do, delete the old records, regenerate the table maintenance, and re-enter the records you had deleted.
    Thanks,
    Best regards,
    Prashant

  • Any other realtors having issues with updating their Suprakey on their Android?

    Any other realtors having issues with updating their Suprakey on their Android?

    The following is from clicking on that error number in the article cited at the end of my post:
    Error 3194: Resolve error 3194 by updating to the latest version of iTunes. "This device is not eligible for the requested build" in the updater logs confirms this is the root of the issue. For more Error 3194 steps see: This device is not eligible for the requested build above.
    iOS: Resolving update and restore alert messages

  • Having issue with update Adobe 11.0.06.  Error 1603.

    Having issue with update Adobe 11.0.06.  Error 1603.

    You should have gotten this information with the error: "Shut down Microsoft Office and all web browsers. Then, in Acrobat or Reader, choose Help > Check for Updates. See also Error 1603 | Install | CS3, CS4 products." Also, be sure you log in as the administrator and disable anti-virus.

  • Having issues with update to iOS 8

    Having issues with update to iOS 8, wheel keeps spinning with 9 hrs remaining to complete update. What are my options?
    <Re-Titled By Host>

    I am a Windows/Mac user, and I was a Linux user for several years. So far I have been really happy with Apple, but this problem is frustrating.
    I don't understand:
    1) Why can't I go back to iOS 7, since that was apparently the most stable version for my iPad?
    2) Why is Apple not doing anything about it?
    3) If the problem is that my iPad can't fully support the features of iOS 8, why on earth have they made it available for the iPad 2?
    Anyway, I will just wait a bit longer and probably start looking into a Samsung tablet or something.
    Cheers,

  • CVC creation - Strange issue with Master data table of 9AMATNR

    Hi Experts,
    We have encountered a strange issue with Master data table (/BI0/9APMATNR) of info object 9AMATNR.
    We have a BADI implemented for checking the valid characteristic before creation of the CVC using transaction /SAPAPO/MC62. This BADI runs a select on the master data table of material, /BI0/9APMATNR, and returns no value. But the material actually exists in the table (checked through SE16).
    Now we go inside the info object 9AMATNR and go to the Master data Tab. There we go inside the master table
    /BI0/9APMATNR and activate that. After activating the table it is read by the select statement inside BADI (Strange) and allows the CVC to be created.
    Ideally it should not allow us to activate the SAP standard table /BI0/9APMATNR. I observed that in the technical settings of this table, single record buffering is switched on. (But as far as I know, the buffer gets refreshed every 2 to 4 minutes, not every 2 days or so.)
    Your expert comment is valuable to us. Thanks.
    Best Regards,
    Chandan Dubey

    Hi Chandan,
    Try to use a WAIT statement of 5 seconds before your select statement.
    I'm not sure whether this will work. Anyway check it and let me know the result.
    Regards,
    Siva.

  • I want to install OS X Mavericks on my MacBook Pro, which does not have a GUID partition table

    I want to install OS X Mavericks on my MacBook Pro, which has both Mountain Lion and Windows 8.1. It's not in GUID partition table format, so I couldn't install Mavericks. Is there any way to change it to GUID partition table format and install Mavericks without losing Windows from my hard disk?

    Open the App Store and upgrade iPhoto to the Mavericks version.
    iWork and iLife for Mac come free with every new Mac purchase. Existing users running Mavericks can update their apps for free from the Mac App Store℠. iWork and iLife for iOS are available for free from the App Store℠ for any new device running iOS 7, and are also available as free updates for existing users. GarageBand for Mac and iOS are free for all OS X Mavericks and iOS 7 users. Additional GarageBand instruments and sounds are available for a one-time in-app purchase of $4.99 for each platform.
    The iWork apps are free with a new iOS device since 1 SEP 2013. They are free with a new Mac since 1 OCT 2013. They are also free with the upgrade to OS X Mavericks 10.9 if you had the previous version installed when you upgraded.

  • Performance issue with Update

    hi all,
    I am facing an issue with the update statement below; it is taking a huge amount of time. The xx__TEMP table has an index on the project_id column, and all the underlying tables have indexes.
    Please look into the plan and let me know how I can reduce the cost of the update statement below.
    Thanks in advance.
    UPDATE dg2.ODS_PROJ_INFO_LOOKUP_TEMP o
    SET Months_In_Stage_Cnt =
      (SELECT NVL(ROUND(MONTHS_BETWEEN(SYSDATE, x.project_event_actual_date), 2), 0) Months_In_Stage_Cnt
       FROM od.project_event x
       WHERE x.project_id = o.project_id
       AND event_category_code = 'G'
       AND project_event_seq_nbr =
         (SELECT MAX(project_event_seq_nbr)
          FROM od.project_event y
          WHERE y.project_id = x.project_id
          AND y.event_category_code = 'G'
          AND y.project_event_actual_date IS NOT NULL
          AND stage_nbr <> 0
          AND y.project_event_seq_nbr <
            (SELECT project_event_seq_nbr
             FROM od.project_event z
             WHERE stage_nbr =
               (SELECT current_stage_nbr
                FROM od.project
                WHERE project_id = x.project_Id)
             AND z.project_id = x.project_Id
             AND z.event_category_code = 'G'
             AND skip_stage_ind = 'N'
             AND project_event_actual_date IS NULL)))
    Plan
    UPDATE STATEMENT CHOOSE Cost: 1,195,213 Bytes: 71,710,620 Cardinality: 41,213
    14 UPDATE DG2.ODS_PROJ_INFO_LOOKUP_TEMP
    1 TABLE ACCESS FULL TABLE DG2.ODS_PROJ_INFO_LOOKUP_TEMP Cost: 36 Bytes: 71,710,620 Cardinality: 41,213
    13 FILTER
    3 TABLE ACCESS BY INDEX ROWID TABLE OD.PROJECT_EVENT Cost: 9 Bytes: 104 Cardinality: 8
    2 INDEX RANGE SCAN INDEX (UNIQUE) od.XPKPROJECT_EVENT Cost: 3 Cardinality: 8
    12 SORT AGGREGATE Bytes: 16 Cardinality: 1
    11 FILTER
    5 TABLE ACCESS BY INDEX ROWID TABLE od.PROJECT_EVENT Cost: 9 Bytes: 16 Cardinality: 1
    4 INDEX RANGE SCAN INDEX (UNIQUE) od.XPKPROJECT_EVENT Cost: 3 Cardinality: 8
    10 FILTER
    7 TABLE ACCESS BY INDEX ROWID TABLE od.PROJECT_EVENT Cost: 9 Bytes: 108 Cardinality: 6
    6 INDEX RANGE SCAN INDEX (UNIQUE) od.XPKPROJECT_EVENT Cost: 3 Cardinality: 8
    9 TABLE ACCESS BY INDEX ROWID TABLE od.PROJECT Cost: 2 Bytes: 9 Cardinality: 1
    8 INDEX UNIQUE SCAN INDEX (UNIQUE) od.XPKPROJECT Cost: 1 Cardinality: 1
    Thanks
    Deb

    882134 wrote:
    Can anybody give me some light on why the cost up to the Select statement is OK, but the Update statement alone takes a huge 11m cost?
    thanks
    Deb
    Okay, so completely ignore the content of the 2 forum posts.
    Why is the cost an issue for you? Without your tables, data and environment, and without a readable execution plan, it's difficult to help you.
    Maybe you could read the link I gave you and post some of the information it talks about up here.
    p.s. read the link.
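    If the correlated scalar subquery itself turns out to be the bottleneck, one common rewrite is a single MERGE against a pre-aggregated source instead of a row-by-row lookup. A rough, hedged sketch of the shape only - it deliberately does not carry over the nested stage filtering from the original statement, so those predicates would need to be added back before use:
    MERGE INTO dg2.ODS_PROJ_INFO_LOOKUP_TEMP o
    USING (
      -- simplified source: latest actual 'G' event per project (placeholder logic)
      SELECT x.project_id,
             NVL(ROUND(MONTHS_BETWEEN(SYSDATE, MAX(x.project_event_actual_date)), 2), 0) AS months_in_stage
      FROM   od.project_event x
      WHERE  x.event_category_code = 'G'
      AND    x.project_event_actual_date IS NOT NULL
      GROUP BY x.project_id
    ) s
    ON (o.project_id = s.project_id)
    WHEN MATCHED THEN
      UPDATE SET o.Months_In_Stage_Cnt = s.months_in_stage;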

  • Issue with Period Control Table after copying Essbase adapter

    Hi Experts,
    I'm working on version 11.1.1.3 and have copied the adapters (Essbase, Pull + ERPi) in the workbench so I can add an additional target for the FDM application. However, I have an issue with the import process; it returns an error with the Time & Periods (I guess it's something to do with the Periods category).
    I have reimported the Period Control Table and updated the new application's Target Period & Year (whilst changing the system code in the Application settings to the new adapter), and still receive the same error message.
    Any direction or thoughts would be welcome.
    Thanks
    Mark

    The time periods do not copy. You need to maintain them through the UI or upload them from excel. There is a KM article on this if you need additional detail.

  • Correcting a bad block on ext4 with a GPT partition table

    Hello,
    I ran a SMART offline test today which came back as a bad block:
    # 1  Extended offline    Completed: read failure       30%      8363         3212759896
    This is my first run-in with a bad block, and since these drives are big and relatively new, I want to be proactive and fix any problems as they arise. Here is my setup:
    * I have 2x 2TB HDDs of same make and model, with the device link being /dev/sdc and /dev/sdd. /dev/sdc is the one with the error.
    * These two disks are linked via a Linux RAID 1 array under /dev/md0 which is then mounted on /storage.
    * Both drives have only 1 partition under a GUID Partition Table (GPT)
    I've looked around to try to find info on fixing bad blocks, and I came across this: smartmontools.sourceforge.net/badblockhowto.html
    However, it seems to be outdated and geared toward tools like fdisk (which I cannot use for GPT) and filesystems ext2/3 (although, due to backwards compatibility, I'm sure it works with ext4 as well), and a lot of the commands give errors like "Couldn't find valid filesystem superblock."
    Can someone point me in the right direction as to how I can fix this issue?
    EDIT:
    My noob is showing. I got the commands above to work, and when I check to see which file is using the bad block it shows this (after all the calculations involved, the block was 401594731):
    debugfs:  icheck 401594731
    Block   Inode number
    401594000       <block not found>
    So I'm assuming that there isn't a file assigned to it (empty space?). But then, when I use dd to read from it, it seems to read just fine:
    sudo dd if=/dev/md0 of=my.block skip=401594731 bs=4096 count=1
    1+0 records in
    1+0 records out
    4096 bytes (4.1 kB) copied, 0.0222587 s, 184 kB/s
    I think it's able to read it since the other disk in the RAID1 array doesn't have the bad block. But I just want to make absolutely sure that there is no file assigned to that block before I nuke it. Given the above information, would it be safe to remove this block from service?
    Last edited by XtrmGmr99 (2012-01-26 01:17:51)

    Yes I think the block is not in use. You can do
    debugfs: testb 401594731
    which will state it clearly ("not in use" vs "marked in use")

  • Issues with update rule

    Hi All,
    I am having a problem with an update rule.
    My object
    A
    B
    C
    D(Seller phase) (complex table)
    Q1(Question)
    A1(Answer) (complex table)
    Q2(Question)
    A2(Answer) (complex table)
    Q3(Question)
    A3(Answer) (complex table)
    E(Collection object)
    D(seller Phase)
    Q(Question)
    A(Answer)
    My transaction during Add
    A
    B
    C
    D(Seller phase)(complex table)
    Q1(Question)
    A1(Answer)(complex table)
    Q2(Question)
    A2(Answer)(complex table)
    Q3(Question)
    A3(Answer)(complex table)
    E(Collection object)
    D(seller Phase)
    Q(Question)
    A(Answer)
    Here is my requirement:
    During Activity create/edit, upon choosing a seller phase I bring in questions based on the update rules of Q1, Q2 and Q3 from the complex table.
    Possible answers are also fetched from the complex table based on the phase selected for each question. I have update rules for the answers that check the object collection (E) and determine whether this phase exists or not; if it exists, the previously selected answers are pulled from the collection for the corresponding question, and if not it has to be empty.
    All rules are working without any problem.
    Issue:
    During create or edit, if someone chooses phase P1 and answers the questions, then changes the phase without saving, the answers that were selected earlier still exist.
    This is happening on the iPad but not in the ATE. I have checked the rule log; all the rules are returning values as expected.
    So I tried the options below to test:
    1) the special value option at field level
    2) I created a dummy field with an update rule with value 'test'. On creation I modified the dummy field value to X, then changed the phase; the value is not getting updated as per the update rule.
    Is this the real behavior of update rules? Is there any workaround for this problem?
    Regards,
    Gupta

    Gupta,
    So based on your comments above I have the workarounds I gave you.
    1) A button that refreshes "resets" to make the screen repaint (in essence a button that just re-navigates to the same screen will make all the system reset) if you just want one screen.
    2) Or the multiple-screen approach: let phase 1 be on one screen and the other questions and answers on others.
    Not sure if you can just do one screen with multiple tiles - this may be the better approach, where the phase selection is on one tile (top tile) and the questions and answers are on a different tile (bottom tile).
    The trick is to make the screen repaint. As long as you present the customers a flawless flow you will be okay. What you don't want is buggy rules; it may be better to sell a more controlled flow that works than to debug what went wrong during UAT (User Acceptance Testing).
    Regards,
    Mark

  • One laptop issue and one partition table request.

    Title says it all.
    I'd like one partition table layout. I want to install Windows Vista (Home Basic - HP's OEM) alongside Arch Linux (dual boot). My hard disk is only 160 GB.
    I surely want a partition for my data files.
    Then, I want to find a way to quiet down my laptop fan. It is very loud all the time.
    ... and to get a lower CPU temperature.
    That's it, could you help me please?

    This is an issue with TAB.
    Try to include this code and check:
    DATA:BEGIN OF tab_temp OCCURS 0,
    CDOCT LIKE ZDNTINF-CDOCT,
    DDTEXT LIKE DD07T-DDTEXT,
    END OF tab_temp.
    SELECT a~CDOCT b~DDTEXT FROM ZDNTDEPDOC as a
    INNER JOIN DD07T as b
    on a~CDOCT = b~DOMVALUE_L
    INTO TABLE TAB_TEMP
    WHERE b~DOMNAME = 'ZCDCT'.
      loop at TAB_TEMP.
        tab = TAB_TEMP-CDOCT. append tab.
        tab = TAB_TEMP-DDTEXT. append tab.
      endloop.
