TDMS - time based reduction - receiver system deletion

Experts,
I'm doing a time-based reduction. I'm on the step "Start Deletion of Data in Receiver System". It's been running for over 18 hours.
But I don't see any jobs running in SM66 on the central/control, sender, or receiver systems.
When I click on the "Task" button, I see it has completed 8,444 of 12,246 sub-activities. There are 3,802 not yet started.
We're on all the latest levels of DMIS and ECC.
Any ideas?
Thanks
NICK

Ashley and Niraj,
Hey, I'm all for tips/tricks so don't worry about messing up my thread.
I completely shut down the central/control system via stopsap and restarted. It was still in "running" status, but no jobs were running on the sender/receiver or central/control systems.
So I tried the troubleshooting, but it was unclear to me what to do.
I ended up highlighting the phase I referenced earlier, then doing "execute" again. The status changed from the "truck" to a green flag and I started to see jobs run again on the receiver system. They have stopped again, but I see another job scheduled to run in a few minutes... It's just weird; I didn't run into this on my last time-based copy.
I'll post a few things I've learned to increase performance:
rdisp/max_wp_runtime = 0
At LEAST 25 dialog and 25 background work processes
rec/client = OFF
rdisp/btctime = 60
Run statistics regularly
Take the receiver database out of archivelog mode
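As a rough sketch, the parameter tips above would land in the instance profile something like this (the parameter names are the standard kernel ones; the work-process counts are this thread's suggested minimums, not official values, so tune them for your hardware):

```
# Instance profile fragment (values from the tips above; assumptions, not official)
rdisp/max_wp_runtime = 0    # no dialog work-process runtime limit during the copy
rdisp/btctime = 60          # batch scheduler check interval in seconds
rec/client = OFF            # switch off table change logging
rdisp/wp_no_dia = 25        # at least 25 dialog work processes
rdisp/wp_no_btc = 25        # at least 25 background work processes
```

Check the effective values afterwards with RZ11.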
Read/implement these notes, then update the corresponding activity parameters:
o TD05X_FILL_VBUK_1 Note 1058864
o TD05X_FILL_VBUK_2 Note 1054584
o TD05X_FILL_BKPF Note 1044518
o TD05X_FILL_EBAN Note 1054583
o TD05X_FILL_EQUI Note 1037712
Create these Oracle indexes on the receiver system:
Table: QMIH
  fields: MANDT, BEQUI
Table: PRPR
  fields: MANDT, EQUNR
Table: VBFA
  fields: MANDT, VBELN, VBELV, POSNV
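A hedged sketch of the indexes above as Oracle DDL. The index names here are made up for illustration; in practice you would create them via SE11/SE14 so the ABAP dictionary stays consistent with the database:

```sql
-- Hypothetical index names; create via SE11 in a real system
CREATE INDEX "QMIH~Z1" ON QMIH (MANDT, BEQUI);
CREATE INDEX "PRPR~Z1" ON PRPR (MANDT, EQUNR);
CREATE INDEX "VBFA~Z1" ON VBFA (MANDT, VBELN, VBELV, POSNV);
```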
Set parameter 'P_CLU' to 'Y' in the following
activities before you start the activities for filling internal header tables:
TD05X_FILL_BKPF
TD05X_FILL_CE
TD05X_FILL_EKKO
TD05X_FILL_VBUK
TD05X_FILL_VBUK_1
TD05X_FILL_VBUK_2
TD05X_FILL_VSRESB
TD05X_FILL_WBRK_1
Run transaction CNVMBTACTPAR and specify the project number to do this.
IMPORTANT TCODEs
CNV_MBT_TDMS_MY  Main TDMS starting point     
CNVMBTMON  Process Monitor (must know your project number)
DTLMON  MWB transfer monitor
CNVMBTACTPAR  activity parameters
CNVMBTACTDEF  MBT PCL activity maint
CNVMBTTWB  TDMS workbench to develop scrambling rules
CNV_TDMS_HCM_SCRAM  run in SENDER system for scrambling functionality
Reports
CNV_MBT_PACKAGE_REORG  to reorganize (i.e. delete) TDMS projects
CNV_MBT_DTL_FUGR_DELETE  deletes function groups associated with old projects
Tables
CNVMBTUSEDMTIDS   lists obsolete MTIDs
IMPORTANT NOTES
Note 894307 - TDMS: Tips, tricks, general problems, error tracing
Note 1405597 - All relevant notes for TDMS Service Pack 12 and above
Note 1402704 - TDMS Composite Note : Support Package Independent
Note 890797 - SAP TDMS - required and recommended system settings
Note 894904 - TDMS: Problems during deletion of data in receiver system
Note 916763 - TDMS performance "composite SAP note"
Note 1003051 - TDMS 3.0 corrections - Composite SAP Note
Note 1159279 - Objects that are transferred with TDMS
Note 939823 - TDMS: General questionnaire for problem specification
Note 897100 - TDMS: Generating profiles for TDMS user roles
Note 1068059 - To obtain the optimal deletion method for tables (receiver)
Note 970531 - Installation and delta upgrade of DMIS 2006_1
Note 970532 - Installation of DMIS Content 2006_1
Note 1231203 - TDMS release strategy (Add-on: DMIS, DMIS_CNT, DMIS_EXT...)
Note 1244346 - Support Packages for TDMS (add-on DMIS, DMIS_CNT, ...)
I'm doing this for an ECC system running ECC 6.0 EHP6, by the way.
Still, any help with my issue on the delete would be appreciated, but do post tips I don't know about.
NICK

Similar Messages

  • TDMS time based reduction method has no end date

    Hello TDMS experts,
I am in the process of configuring the 'Time based reduction' method of TDMS. In the package, I am in phase "System Analysis" -> "Analyze and specify 'from date'".
Here it gives me the option to specify only the start/from date. If I want to transfer data for only a month in the past, let's say April 2007, how can I specify both the from and to dates?
    Please advise.
    Thank you,
    kind regards,
    Zaheer Shaikh

    Hi Pankaj,
    Thanks for the reply.
    In the phase "maintain Table Reduction" I have chosen SAP standard reduction, that means the table reduction is handled by SAP itself. Since my sender system database is 4 Terabytes, my question is if I select the 'from date' as 10th April 2009 (present date is 20 April 2009), will the TDMS still copy the whole tables for which I have not manually defined any reduction?
    Please advice.
    Thank you,
    Kind regards,
    Zaheer Shaikh

  • Sales orders in TDMS company/time based reduction  are outside the scope

    Guys,
I have had some issues with TDMS in that it didn't handle company codes without plants very well. That was fixed by SAP. But I have another problem now. If I do a company code and time based reduction, it doesn't seem to affect my sales orders in VBAK/VBUK as I would have expected. I was hoping it would only copy across sales orders that have a plant assigned to a company code specified in the company code based reduction scenario. That doesn't seem to be the case.
    VBAK is now about one third of the size of the original table (number of records). But I see no logic behind the reduction. I can clearly see plenty of sales documents that have a time stamp way back from what I specified in my copy procedure and I can see others that have plant entries that should have been excluded from the copy as they do belong to different company codes than the ones I specified.
    I was under the impression that TDMS would sort out the correct sales orders for me but somehow that doesn't seem to be happening. I have to investigate further as to what exactly it did bring across but just by looking at what's in the target system I can see plenty of "wrong" entries in there either with a date outside the scope or with a plant outside the scope.
I can also see that at least the first 10'000 entries in VBAK in the target system have a valid-from and valid-to date of 00.00.0000, which could explain why the time based reduction didn't work?
    Did you have similar experiences with your copies? Do I have to do a more detailed reduction such as specifying tables/fields and values?
    Thanks for any suggestions
    Stefan
    Edited by: Stefan Sinzig on Oct 3, 2011 4:57 AM

The reduction itself is not based on the date when the order was created; the logic extends it to invoices and offers, basically the complete update process.
    If you see data that definitely shouldn't be there I'd open an OSS call and let the support check what's wrong.
    Markus

  • Exclude a table from time-based reduction

    Hi,
I'd like to exclude a table from time-based reduction. How can I do this? Is there any manual on how to do customizing in TDMS?
    Regards
    p121848

    Thank you Markus for your annotation.
AUFK is technically declared as a master data table, but it stores orders. Standard
TDMS provides a reduction of this table, and in the client copies we did via TDMS a lot of records disappeared when we selected time-based reduction.
Now we found out that some transactions such as OKB9 or KA03 refer to old internal orders. So we would like to maintain the customizing to exclude AUFK from reduction. But this is not possible in activity TD02P_TABLEINFO, because no changes can be made to tables that have transfer status 1 = Reduce.
You can manipulate the transfer status in table CNVTDMS_02_STEMP before getting to activity TD02P_TABLEINFO, but I wonder whether this is the way one should do it.
    Any idea ?
    Regards p121848

  • Please explain steps for Time based reduction scenario

    Hi,
    I am done with the RFC connections.
    The scenario I am going to work with is 'Time Based Reduction'.
    Can somebody please explain the process step-by-step from here on.
    Thanks,
    Mohammed Tauseef Ahmed

    hi,
    thanks for the replies.
    the step "Start Programs for Generation and Receiver Settingsu201D is taking very very long time for execution(the log shows that the process started on May20,and it is still going on).
    Now till the execution of this step is completed,we cannot execute furthet steps.Is there a way that we can execute the other steps or do we have to wait for this step to be completed?
    Secondly, on the suggestion to change the process settings, I was not able to make the changes in process settings. Can the somebody kindly explain step-by-step on how to change the process settings.
    Thanks.
    Mohammed Tauseef Ahmed

  • TDMS copy based on Company Code and time based reduction.

    I'm struggling to understand the process of this scenario.
    I created a new client in the target system (using local client copy with SAP_UCSV).
    I configured a time based and company code based reduction.
    After the TDMS copy is complete, I check the target system and still find plenty of data that isn't related to the company codes I selected. It appears that it still copies part of the excluded data. For example: I can still find work orders and sales orders for company codes that I specifically didn't select for the copy process.
Any idea why this is? It appears that the copy doesn't bring across ALL of the excluded data, just some of it.
    I double checked and triple checked my TDMS selection for company code based reduction and I can't find any error in there.
    To be more specific: I selected company code A01 A02 A03 and left B01 B02 B03 out. After the copy there are still orders visible in the target system with company code B01 B02 B03.
    This is after I created an empty target client first so it's not copying into an already existing client.
    Thanks for any guidance guys.

    Hm, not sure what's going on. There is no clear link between those existing orders and the company codes I initially selected. Then again it only seems to have a "high level" record of the data. As soon as one starts to drill down, the data is missing as expected. Same goes for other areas as well.
    Looks like we can live with that for now so I won't lose any sleep over it anymore.

  • How to recover a table that was accidentally DELETEd or DROPped (time-based recovery)

    Product: ORACLE SERVER
    Date written: 2004-07-29
    PURPOSE
    This document summarizes the procedures for recovering important data that a user accidentally deleted, or a table that was accidentally dropped.
    Explanation
    First check whether the database is running in archive log mode, as shown below; the recovery method differs depending on whether it is in archive log mode or not.
    os>svrmgrl
    SVRMGR> connect internal;
    SVRMGR> archive log list
    Database log mode ARCHIVELOG
    Automatic archival ENABLED
    Archive destination ?/dbs/arch
    Oldest online log sequence 2525
    Current log sequence 2527
    1. Recovery when running in archive log mode
    If the database runs in archive log mode and a full backup plus all archive files exist, recovery is always possible.
    Two cases must be distinguished, and the procedure for each is described in detail below:
    (1) The drop or delete happened only a short while ago, or you want to roll the entire database back to a point in time before the data loss.
    (2) You want to recover only the lost data to its state before the drop/delete, while keeping all other data current, i.e. retaining all transactions applied after the drop/delete.
    1-1. Rolling the entire database back to a point before the drop or delete
    In this case, restore the datafile backup taken before the data loss and perform a time-based incomplete recovery up to just before the point of loss.
    (1) Shut down the database and take a cold backup of the current state, just in case.
    Stop the database with shutdown immediate and back up all datafiles, redo log files, and control files using a command such as tar.
    This allows you to at least return to the current state if the earlier backup or the archive files turn out to be unusable and the deleted data cannot be recovered.
    (2) Restore the full datafile backup taken before the data was deleted.
    Keep the current redo log files and control files, delete all current datafiles, and restore the backup datafiles taken before the data was deleted.
    (Temp datafiles need not be restored, to save restore time.)
    (3) Place the archive files created after the restored backup in the archive log destination.
    If archive files created after the full backup were moved to tape and deleted, restore them to their original location, and verify that all archive log files from the time of the full backup up to the desired recovery point exist in the log archive destination.
    (4) Perform a time-based recovery up to a point before the data was deleted.
    Always specify the date and time in the format shown in the example below.
    If you specify a time after the data was deleted, the data will still appear deleted after recovery.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> set autorecovery on
    SVRMGR> recover database until time '2001-01-30:21:00:00';
    SVRMGR> alter database open resetlogs;
    (a) If the current control file no longer exists and you restored an old control file along with the datafiles, specify the recovery command as follows:
    recover database using backup controlfile until time '2001-01-30:21:00:00';
    (b) If you did not restore the temp datafile, add the following statement right after startup mount:
    alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    /oracle/oradata/temp01.dbf is an example temp datafile name.
    (5) Verify that the desired data can be queried correctly.
    1-2. Recovering only the deleted data while keeping everything else current
    In this case, restore only the SYSTEM and RBS datafiles and the datafile containing the deleted data from the backup taken before the deletion, perform a time-based recovery, open the database, and export only the required tables. Then import the export into the current database.
    There are three possible ways to get as far as exporting the deleted data:
    (a) If Oracle is installed on another system for backup or test purposes, restore the backup there and export the table.
    (b) Restore the backup on the same system, but change the db name and ORACLE_SID to create a separate database, recover and open it, then export. <See Bulletin:11616>
    (c) Take a cold backup of the current database, restore the old backup, and export from it.
    For (a), Oracle software of at least the version of the datafiles to be used must already be installed on the other system.
    For (b), there is no need to restore the current state after the export, but additional disk space is needed for the SYSTEM, RBS, and user datafiles, and since a new database is being built, work such as setting up an init.ora file is required.
    See <Bulletin:11616> for the detailed procedure.
    This document describes only option (c) in detail.
    (1) Carefully take a cold backup of all current datafiles, redo log files, and control files, as shown below.
    This is so that, after recovering the deleted data and exporting it, you can return to the current state of the database.
    Instead of a tar cold backup to tape, you can save time by taking the cold backup on disk. A disk backup is feasible because the only datafiles needed for the time-based recovery are SYSTEM, RBS, and the user datafile containing the deleted data.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> shutdown immediate;
    SVRMGR> exit
    os>cp /oracle/oradata/*.ctl /oracle/backup/*.ctl
    os>cp /oracle/oradata/redo*.log /oracle/backup/redo*.log
    os>mv /oracle/oradata/system01.dbf /oracle/backup/system01.bak
    os>mv /oracle/oradata/rbs01.dbf     /oracle/backup/rbs01.bak
    os>mv /oracle/oradata/tools01.dbf /oracle/backup/tools01.bak
    The redo log files and control files are also needed for the time-based recovery, so copy (cp) them; the datafiles can be moved (mv).
    The file names in the example must be adapted to your environment. Moving them to tape with tar is also fine, as long as all datafiles, redo log files, and control files belonging to the cold backup are saved.
    (2) From the backup taken while the deleted data still existed, restore only the SYSTEM datafile, the RBS datafile, and the datafile containing the dropped table.
    Keep using the current redo log files and control files.
    (3) Place the archive files created after the restored backup in the archive log destination.
    If archive files created after the full backup were moved to tape and deleted, restore them to their original location, and verify that all archive log files from the time of the full backup up to the desired recovery point exist in the log archive destination.
    (4) Take all datafiles you did not restore (i.e. everything except SYSTEM, RBS, and the user datafile containing the deleted data) offline with offline drop, then perform a time-based recovery up to a point before the data was deleted.
    Always specify the date and time in the format shown in the example below.
    If you specify a time after the data was deleted, the data will still appear deleted after recovery.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    SVRMGR> alter database datafile '/oracle/oradata/userdata05.dbf' offline drop;
    ... offline drop every datafile that was not restored, as above.
    SVRMGR> set autorecovery on
    SVRMGR> recover database until time '2001-01-30:21:00:00';
    SVRMGR> alter database open resetlogs;
    If the current control file no longer exists and you restored an old control file along with the datafiles, specify the recovery command as follows:
    SVRMGR> recover database using backup controlfile until time '2001-01-30:21:00:00';
    (5) Verify that the desired data can be queried correctly, then export the required tables.
    For example, if the goal was to recover the dept table of user scott, export it as follows:
    os>exp scott/tiger file=dept.dmp tables=dept log=deptlog1
    (6) Shut down and discard the database that was opened after the time-based recovery, and restore the current-state backup taken in step (1).
    Either shutdown immediate or shutdown abort is fine. After shutdown, delete the restored SYSTEM, RBS, and user datafiles, and also delete the control files and redo log files that were in use.
    Then put back all datafiles, control files, and redo log files that were moved with mv or cp (or tar) in step (1), under their original directories and names. The SYSTEM, RBS, and user datafiles must also be restored from the current-state backup of step (1).
    (7) Start the database and import the export taken in step (5).
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=deptlog2
    (8) Verify that the desired data can be queried correctly.
    2. Recovery when running in noarchivelog mode
    If the database was not running in archive mode, recovery is possible only if the deleted data was exported beforehand, or a cold backup taken before the deletion exists.
    (1) If an export exists
    Import it as follows. The example assumes the dept table of user scott was deleted.
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=log1
    (2) If a cold backup exists
    If a usable Oracle engine is installed on another system, use it there; otherwise back up the current database, restore the old backup on the current system, and take the export.
    (1) Carefully take a cold backup of all current datafiles, redo log files, and control files, as shown below.
    This is so that, after recovering the deleted data and exporting it, you can return to the current state of the database.
    Instead of a tar cold backup to tape, you can save time by taking the cold backup on disk. A disk backup is feasible because the only datafiles needed for the recovery are SYSTEM, RBS, and the user datafile containing the deleted data.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> shutdown immediate;
    SVRMGR> exit
    os>mv /oracle/oradata/*.ctl /oracle/backup/*.ctl
    os>mv /oracle/oradata/redo*.log /oracle/backup/redo*.log
    os>mv /oracle/oradata/system01.dbf /oracle/backup/system01.bak
    os>mv /oracle/oradata/rbs01.dbf     /oracle/backup/rbs01.bak
    os>mv /oracle/oradata/tools01.dbf /oracle/backup/tools01.bak
    The file names in the example must be adapted to your environment. Moving them to tape with tar is also fine, as long as all datafiles, redo log files, and control files belonging to the cold backup are saved.
    (2) From the backup taken while the deleted data still existed, restore only the SYSTEM datafile, the RBS datafile, and the datafile containing the dropped table.
    The redo log files and control files from the time of the backup must also be restored.
    (3) Take all datafiles you did not restore (i.e. everything except SYSTEM, RBS, and the user datafile containing the deleted data) offline with offline drop, then open the database.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    SVRMGR> alter database datafile '/oracle/oradata/userdata05.dbf' offline drop;
    ... offline drop every datafile that was not restored, as above.
    SVRMGR> alter database open;
    (4) Verify that the desired data can be queried correctly, then export the required tables.
    For example, if the goal was to recover the dept table of user scott, export it as follows:
    os>exp scott/tiger file=dept.dmp tables=dept log=deptlog1
    (5) Shut down and discard the database from which the export was taken, and restore the current-state backup taken in step (1).
    Either shutdown immediate or shutdown abort is fine. After shutdown, delete the restored SYSTEM, RBS, and user datafiles, and also delete the control files and redo log files that were in use.
    Then put back all datafiles, control files, and redo log files that were moved with mv or tar in step (1), under their original directories and names.
    (6) Start the database and import the export taken in step (4).
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=deptlog2
    (7) Verify that the desired data can be queried correctly.

  • What is a time based scenario in TDMS?

    We need to transfer only the data from the last 90 days.
    We know we should use the time based scenario; however, we cannot find instructions on how to implement it.
    Could you help?  Thanks!

    Hi
    Describing how to do time based reduction using TDMS will not be possible over this medium. It is recommended that you refer to the TDMS guides (specifically the Solution Operation Guide). Refer to the following thread for the same -
    Links, Documents, Support Pack Schedule
    However, I will briefly outline some steps. Once you are on the TDMS overview screen, do the following:
    1) create project
    2) create sub-project
    3) create package (use the option "initial package for time based reduction" from the popup)
    4) once the package screen appears, execute the various activities of the package in the correct order. Detailed documentation for each activity is available on the package screen.
    I hope this helps
    Pankaj

  • How to find the size of data transferred(migrated) in the receiver system

    Hi all,
    I have successfully implemented the time-based reduction process type using TDMS.
    Now, I want to know the size of the data that was transferred to the receiver system. Is there a way to find out about this?
    Thanks ,
    Mohammed Tauseef Ahmed

    Thanks Eddy for the reply.
    The log shows the message: Total data transfer volume (KB): 16.055.606
    How do I interpret 16.055.606 KB; does this mean 16,055 KB or 16 million KB (which does not look possible)?
    Thanks,
    Mohammed Tauseef Ahmed
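    For what it's worth, that log message uses European digit grouping, so 16.055.606 KB reads as 16,055,606 KB. A quick shell sanity check of the magnitude:

    ```shell
    # European-style grouping: 16.055.606 KB means 16,055,606 KB
    kb=16055606
    echo "$((kb / 1024)) MB"         # -> 15679 MB
    echo "$((kb / 1024 / 1024)) GB"  # -> 15 GB
    ```

    So roughly 15-16 GB transferred, which is entirely plausible for a reduced client copy.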

  • "Start Deletion of Data in Receiver System" does not start

    I faced a problem where data deletion in the receiver system does not start in the TDTIM scenario.
    When I started the activity "Start Deletion of Data in Receiver System", the job "CNV_TDMS_09_RESTART_JOBS" aborted until the job "CNV_MBT_CNV_TDMS_09_START_JOBS" was over in the receiver system.
    The error message is "No log was created for object CAGTFBS and sub-objekt SLOMBT".
    After the job "CNV_MBT_CNV_TDMS_09_START_JOBS" finished, the job "CNV_TDMS_09_RESTART_JOBS" finished normally, but no deletion job was released.
    I carried out some deletion jobs manually, but the other deletion jobs are not released.
    Does anyone know the solution to this problem?
    Version:TDMS3.0 SP17
    Basis: 700
    Susumu

    Hi Joerg,
    Thank you for your reply.
    > Please check the syslog for any reference to badly defined batch jobs! (all on receiver)
    There is nothing in the syslog on the receiver system.
    > Is there any deletion job that cannot be started manually on the receiver side?
    > (So normally on top of the list of scheduled deletion jobs.)
    I think all deletion jobs can be started manually.
    Normally, when a deletion job finishes, it releases the next deletion job, but no deletion jobs are being released.
    In addition, even after rerunning job "CNV_TDMS_09_RESTART_JOBS" to completion, no deletion jobs are released.
    Best regards,
    Susumu

  • Start Deletion of Data in Receiver System

    Hi,
    I've executed the phase "Start Deletion of Data in Receiver System". The jobs in the target system have stopped, and if I try to restart from the central system I receive the message "Activity 'Start Deletion of Data in Receiver System' is already scheduled". I've verified that I have enough free batch processes in the target system.
    In the target system the jobs "CNV_MBT_CNV_TDMS_09_START_JOBS" and "CNV_TDMS_09_RESTART_JOBS" are cancelled, and I have a lot of "TD09P*****" jobs in scheduled status.
    How can I restart the process?
    Thanks in advance

    Log job CNV_TDMS_09_RESTART_JOBS :
    Job started
    Step 001 started (program CNV_TDMS_09_RESTART_JOBS, variant &0000000000000, user ID TDMS_USER)
    Enqueue on local entry package 90007, phase TDBAS_PHASE_DATA_TRANSFER, activity TD09P_START_JOBS not set
    Job cancelled
    Log job  CNV_MBT_CNV_TDMS_09_START_JOBS :
    Job started
    Step 001 started (program CNV_TDMS_09_START_JOBS, variant &0000000000001, user ID TDMS_USER)
    Internal RFC error in status management: package 90007, phase TDBAS_PHASE_DATA_TRANSFER, acty TD09P_START_JOBS
    Job cancelled

  • I cannot stop the activity "Start Deletion of Data in Receiver System"

    Hi all
    I tried to stop the activity "Start Deletion of Data in Receiver System" in my MDC package with the troubleshooting assistant. Unfortunately I get the following error:
    No destination exists for the PACKID:
    Message no. CNV_MBT042
    Do you have an idea what the problem is, or how I can stop the activity in a proper way?
    Thanks a lot for your help
    Roger

    Hi Roger,
    Why do you want to stop this phase?
    Anyway: you can stop the process, and then the phase can be restarted at a later stage.
    Regards,
    Eddy

  • Idoc in status 03 - not received by receiving system

    Hello All,
    I have researched a lot on this topic on the forum - but what I am facing is something peculiar - so posting the complete scenario.
    I have three interfaces based on change pointers mechanism where change pointers have been activated for message type HRMD_A.
    There are three distribution models which filter the same message type and send to receiving system GIS – for the logical systems:
    FI
    Concur
    Russia
    When IDoc is triggered using standard program RHALESMD (transaction RE_RHALESMD - HR: Evaluate ALE Change Pointers), there could be three or less IDocs produced depending on the filter criteria.
    For example, you could have an IDoc each for all above three partners.
    When the above program is triggered in the development system all three IDocs reach GIS.
    All the custom code and configuration was transported from Dev to QA. When I trigger the above program in QA, not all IDocs reach GIS; some stay in the system in status 03.
    If I check the tRFC queue (transaction BD75), there are no IDocs in the queue.
    If I use another program to change the status from 03 to 12, the status changes, but the IDocs still do not reach the receiving system.
    I have compared the Dev and QA systems, and deleted and regenerated the partner profile, distribution model, and port in QA, but nothing works.
    Not all IDocs reach GIS.
    I read on the forum that I need a COMMIT WORK. But because I am using a standard program, RHALESMD, where do I add the commit work?
    Your inputs will be helpful.

    Hi Suneel,
    Please go to transaction SM58 and check if the IDocs are stuck in the tRFC queue. If so, you can right-click and choose Execute LUW to release them.
    or
    Execute the program RSARFCRD and get the corresponding Transaction id and then execute the program RSARFCSE for processing the entries in TRFC Queue.
    Regards,
    Ferry Lianto

  • File to multiple IDOCs scenario with the same receiver system

    Hi guys,
    I have to design and implement the following scenario:
    I will receive one file with many lines (Records) with data for materials, quantities, operations etc..
    Based on the values of some fields of each line, I will have to create an IDOC for each material.
    For example:
    if operation type = "INSERT", and Labor = 001 then create 3 Idocs of type MBGMCR with movement types=101, 261,311 that have to be posted one after the other to the same receiver system.
    else if operation type = "INSERT", and Labor <> 001 then create an Idoc MBGMCR with movement type=311 and plant = 1001.
    else if operation type = "Delete", and Labor = 001 the created 3 Idocs MBGMCR with movement type=312, 262 1002 and post them serially to the same receiver system.
    else if operation type = "Delete", and Labor <> 001 the created 1 Idoc MBGMCR with movement type=312.
    All IDOCS are posted to the same SAP R/3 system. We do not care about the sequence, except for the cases where 3 IDOCS are created.
    I am trying to think of a good design in performance terms.
    It is obvious that I will need BPM for sure.
    I am thinking of creating a mapping program that will produce 4 message types for the different cases from the initial file and then create a different message mapping for each case from the message type to the IDOC.
    I am asking whether I have to include everything (mappings) in BPM with a "fork" step,
    or whether I should produce only the 4 message types, post them to R/3, and execute the mappings in R/3?
    Best Regards
    Evaggelos

    hi,
    >>I am thinking of creating a mapping program that will produce 4 message types for the different cases from the initial file and then create a different message mapping for each case from the message type to the IDOC.
    To me this seems to be the right solution.
    Here you will create the different message mappings and list them sequentially in the interface determination; the multi-mapping will then be used in the transformation step in BPM.
    Thus, if this is the only requirement, there is no need for a fork step.
    [reward if helpful]
    regards,
    latika.

  • Time machine restore crashed system preferences

    Hi, just hoping someone can help with the following as my MacBook is not a happy bunny. I recently reformatted my hdd and restored my system using Time Machine. This was to allow me to create a new partition so I could run Boot Camp. All was fine for a few weeks, but I have noticed recently that System Preferences has become unreliable and fails to load on most occasions, freezing in the process - it shows in the dock, but does not load. I have OS X Snow Leopard with all updates, two user accounts, guest activated. I have repaired disk permissions a number of times - sometimes it works for a very limited time, other times not. I have deleted the Finder and System Preferences .plist files in both user accounts, emptied the trash and restarted, again with limited effectiveness. When System Preferences crashes, I force quit it; it then tends to load fine the next time, but 2 minutes later it's up to its old tricks. On top of this, the Mac is now refusing to shut down - it will get to the spinning wheel phase then nothing, and I have to force shutdown by holding the power button.
    I've also tried disabling the guest account, but no luck.
    Any ideas gratefully received as I'm worried that it's all going to crash soon!
    Many thanks 
    Rich.

