Time-based scheduling of adapters in NW 2004S

Hi XI gurus,
I know that time-based scheduling for adapters (say, polling starts at 2 AM daily) can be done in SP19 of NW2004.
Could anyone tell me whether this feature is available in NW2004S, and if so, on which SP?
Regards
Vijay

SAP NetWeaver 2004 SP Stack Release to Customer (RTC) Additional Information
SPS 21: Calendar Week 42, 2007 (planned)
SPS 20: May 3rd, 2007
SPS 18: August 22nd, 2006
More detail:
https://websmp106.sap-ag.de/~sapidb/011000358700001130682005E
SAP NetWeaver 2004s SP Stack Release to Customer (RTC) Additional Information
SPS 14: Calendar Week 45, 2007 (planned)
SPS 13: Calendar Week 32, 2007 (planned)
SPS 12: Calendar Week 21, 2007 (delayed)
SPS 11: March 5th, 2007
More detail:
https://websmp106.sap-ag.de/~sapidb/011000358700004584092005E
Hope this helps!
Please award points if the help is useful.

Similar Messages

  • BPEL 11g based Scheduling Using BPM Timer Activity

    Hi Team,
    Can anyone shed some light on how to implement Oracle BPEL 11g based scheduling (not using Quartz, the open-source scheduler)?
    My client does not want to use Quartz, the open-source scheduler.
    So can anyone help on the same? The requirement is that every morning the system automatically assigns tasks to the next-level manager. Do we need to use the Timer activity of BPM 11g?
    Regards,
    Pavan

    Hi Vlad,
    I am not sure whether I have framed the question properly... let me frame it again...
    I have a manager who has 10 team leads, and he will receive 50 tasks.
    He manually selects 6 leads (this is a manual activity).
    Then,
    a) We need to fetch the roles from OID to know which lead belongs to which manager.
    b) Scheduling: using a BPEL Java API activity with the Timer activity of BPM (or some other process, if we have one), he needs to route those 50 tasks to the 6 selected leads based on their skills and department. This scheduling has to run daily at 7 AM or 8 AM, after the manual step in which the manager selects the leads for the tasks.
    Correct me if I have provided inadequate information...
    Note: my client doesn't want the Quartz Java-based API; I really don't know the reason... (Also, could you please send me a Quartz 2.0 based example for my reference? See the sketch after this post.)
    Regards,
    Pavan
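    Since a Quartz 2.0 based reference example was requested above, here is a minimal sketch of a job fired daily at 7 AM with the Quartz 2.x API. Only the Quartz calls (JobBuilder, TriggerBuilder, CronScheduleBuilder, StdSchedulerFactory) are real; the job class and its routing body are hypothetical placeholders for the actual task-routing call.
    import org.quartz.*;
    import org.quartz.impl.StdSchedulerFactory;

    public class DailyRoutingDemo {

        // Hypothetical job: the body stands in for the real task-routing call.
        public static class RouteTasksJob implements Job {
            public void execute(JobExecutionContext ctx) throws JobExecutionException {
                // e.g. invoke the service that assigns the 50 tasks
                // to the 6 selected leads by skill and department
                System.out.println("Routing tasks to selected leads...");
            }
        }

        public static void main(String[] args) throws SchedulerException {
            Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
            JobDetail job = JobBuilder.newJob(RouteTasksJob.class)
                    .withIdentity("routeTasks").build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("routeTasksAt7am")
                    .withSchedule(CronScheduleBuilder.dailyAtHourAndMinute(7, 0))
                    .build();
            scheduler.scheduleJob(job, trigger);
            scheduler.start(); // fires RouteTasksJob every day at 07:00
        }
    }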

  • Scheduling Time Based Jobs using Web Dynpros.

    Hello Again,
    I would like to know if it is possible to perform time-based jobs using Web Dynpro. I have a situation where a Web Dynpro application updates a database, and then later in the day, say at around 9:00 PM, I would like the Web Dynpro application to transfer all the updated data to another database. I know we can run a batch job, but can we create a Web Dynpro application that programmatically fires a batch job at the required time?
    I am open to any suggestions.
    Best Wishes,
    John.

    Hi John,
    WD is not a good option for this. I would suggest you "extract" the logic into a separate layer and use the Timeout service (http://help.sap.com/saphelp_webas630/helpdata/en/6b/2550d23ef1994580114d6064bc44a1/frameset.htm), calling the logic from the service.
    Best regards, Maksim Rashchynski.
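    A minimal, platform-neutral sketch of the pattern Maksim describes - move the transfer logic out of the Web Dynpro UI and fire it at a fixed time each day - using only the JDK scheduling API. The transferUpdatedData method is a hypothetical placeholder; on NetWeaver the Timeout service above plays the scheduler role.
    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.time.LocalTime;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class NightlyTransfer {

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Compute the delay until the next 21:00, then repeat every 24 hours.
            LocalDateTime now = LocalDateTime.now();
            LocalDateTime nextRun = now.toLocalDate().atTime(LocalTime.of(21, 0));
            if (!nextRun.isAfter(now)) {
                nextRun = nextRun.plusDays(1);
            }
            long initialDelayMin = Duration.between(now, nextRun).toMinutes();
            scheduler.scheduleAtFixedRate(NightlyTransfer::transferUpdatedData,
                    initialDelayMin, TimeUnit.DAYS.toMinutes(1), TimeUnit.MINUTES);
        }

        // Hypothetical placeholder for the logic that copies the updated
        // records from the first database to the second one.
        static void transferUpdatedData() {
            System.out.println("Transferring updated data...");
        }
    }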

  • Time-based publishing

    Hello All,
      I'm trying to use lifetime documents following this link
    http://help.sap.com/saphelp_nw2004s/helpdata/en/d1/5b6635f5e0ef428fb513336881679b/frameset.htm
    So I have done these steps:
                  activated the lifetime property on my folder
                  activated the tasks "valid from" and "valid to"
    I'm trying to activate the time-based publishing service (TBP), but I'm unable to find it in the portal.
       I have looked in System Administration->System configuration->KM->Content Management->Repository Services and in System Administration->System configuration->KM->Content Management->Global Services without success.
       How can I check/configure whether TBP is running on the Portal?
       Do I have to do something more to use the lifetime documents service?
       I'm using NW 2004s and EP7.0
       Thanks in advance,
       Regards

    Hi Yaiza,
    The TBP service should be running in your portal, but the configuration of the service cannot be changed. You only need to register this service with a repository, and you must also register the application property service, as described here: Time-Dependent Publishing Service (http://help.sap.com/saphelp_nw2004s/helpdata/en/c1/c87d3cf8ff3934e10000000a11405a/frameset.htm)
    This registration is done by default for the standard /documents repository in the portal.
    Please be aware that a user with read/write permission for the folder where you have activated time-dependent publishing will be able to see all documents in that folder. Please refer to: Lifetime of Documents (http://help.sap.com/saphelp_nw2004s/helpdata/en/e8/a9a76828b8dc469969ff450ec81ced/frameset.htm)
    Please remember to set valid time periods for your documents. When time-dependent publishing is activated, the properties "valid from" and "valid to" have the same value by default.
    One more thing is to set the correct CM systems for the scheduler tasks for TBP (at least one system in a clustered environment): Scheduler Tasks for Time-Dependent Publishing (http://help.sap.com/saphelp_nw70/helpdata/en/d1/5b6635f5e0ef428fb513336881679b/frameset.htm)
    Hope it helps,
    Best Regards,
    Michal M.

  • Time-based publishing stopped working

    Hi,
    We currently have a problem with time-based publishing in KM. A few days ago, documents stopped becoming visible once they reach their "valid from" date. We have not been able to publish documents with TBP on that system since then.
    These errors, which seem related to the issue, keep appearing in the knowledgemanagement.#.log files:
    #1.5#C000AC10005900130000012D00000B3400040F325EE040B8#1142608921379#com.sapportals.wcm.WcmException#irj#com.sapportals.wcm.WcmException.WcmException(62)#System#0#####ThreadPool.Worker1##0#0#Error##Plain###application property service not found com.sapportals.wcm.repository.service.timebasedpublish.wcm.TimebasedPublishException: application property service not found
         at com.sapportals.wcm.repository.service.timebasedpublish.wcm.TimebasedPublishServiceManager.getApplicationPropertyService(TimebasedPublishServiceManager.java:589)
         at com.sapportals.wcm.repository.service.timebasedpublish.wcm.TimebasedPublishServiceManager.setValidEventSent(TimebasedPublishServiceManager.java:540)
         at com.sapportals.wcm.repository.service.timebasedpublish.wcm.TimebasedPublishServiceManager.handleVisibleResources(TimebasedPublishServiceManager.java:327)
         at com.sapportals.wcm.repository.service.timebasedpublish.wcm.CheckValidFromSchedulerTask.run(CheckValidFromSchedulerTask.java:65)
         at com.sapportals.wcm.service.scheduler.SchedulerEntry.run(SchedulerEntry.java:128)
         at com.sapportals.wcm.service.scheduler.crt.PoolWorker.run(PoolWorker.java:107)
         at java.lang.Thread.run(Thread.java:479)
    The KMC version is SP2 with Patch level 29 hotfix 1, and is running on Windows Server 2003 with an Oracle database. We have opened an OSS message but while we are waiting I thought I would post this here in case anyone ever experienced this.
    Best regards,
    Olivier

    Hi,
    1. Have you checked that the TBP service is still assigned to your repository?
    2. If you create a new repository and assign these services, does it work?
    From the documentation: "Enables users to define a time frame during which documents are published (visible). Note that the time-dependent publishing service requires the application property service. This service cannot be configured."
    Patricio.

  • Time Based Publishing - Not Working

    Hello SAP KM Gurus-
    I had configured Time Based Publishing to work on our clustered portal. Everything worked fine until we went to a central instance / dialog instance setup. Now Time Based Publishing no longer works, and I can't seem to get it to work no matter what I do. So far I have: scheduled the job on only one instance (as per the clustering guidelines in the SAP Library), turned it on in the properties of the repository (and of the folder I wish to use), and checked that the service is okay in KM Configuration. However, it seems the job never comes by to hide the documents, because they just show up for Read users no matter what I change. As I stated before, this was working fine until we went to the new configuration.
    I've checked SAP Notes with no luck. Anyone have any idea why this is not working? I'm fresh out.
    Any help greatly appreciated...
    Jim

    Hello Anjali-
    Thanks for your post. Yes, I have checked that. Here are my settings: I have Check Valid From assigned to one instance running on the Central Instance, and Check Valid To assigned to the other instance (we have two instances on each server), as per the help docs. In the component monitor, tbp comes up green, and the properties service is in state OK. On the repositories, I have both tbp and properties assigned, and when I enable tbp I get the lifecycle tab for the documents. It appears as if everything is set up right. However, the read users can see the documents just fine when they shouldn't. It seems as if the Check Valid From and Check Valid To jobs just never run.
    Is there any way I can see whether the jobs have run and what the schedule was? The tbp report also shows nothing... Does it look like I'm doing anything wrong above? I'm on EP 14/KM 14, by the way...
    Thanks for your help-
    Jim

  • How to recover a TABLE that was accidentally DELETEd or DROPped (time-based recovery)

    Product: ORACLE SERVER
    Date written: 2004-07-29
    PURPOSE
    This document summarizes the procedure for recovering data when a user has accidentally deleted important data or dropped a table.
    Explanation
    First check whether the database in use is running in archive log mode, as shown below; the recovery method differs depending on whether or not it is in archive log mode.
    os>svrmgrl
    SVRMGR> connect internal;
    SVRMGR> archive log list
    Database log mode ARCHIVELOG
    Automatic archival ENABLED
    Archive destination ?/dbs/arch
    Oldest online log sequence 2525
    Current log sequence 2527
    1. Recovery when running in archive log mode
    If the database is running in archive log mode and the full backup and all archive files exist, recovery is always possible.
    Two cases need to be distinguished here, and the procedure for each case is described in detail below.
    (1) When the drop or delete happened only a short time ago, or when the entire database is to be rolled back to a point in time before the data loss
    (2) When only the lost data is to be recovered to its state before the drop/delete, while all remaining data must be kept at the current point in time so that the transactions applied after the drop/delete are preserved
    1-1. Rolling the entire database back to a point before the drop or delete
    In this case, restore the backup of the datafiles taken before the data loss and perform time-based incomplete recovery up to just before the point at which the loss occurred.
    (1) Shut down the DB and take a cold backup of the current state, just in case.
    Bring the DB down with shutdown immediate, then back up all datafiles, redo log files, and control files using a command such as tar.
    This is so that, if the earlier backup or the archive files turn out to be unusable and the deleted data cannot be recovered, you can at least return to the current state.
    (2) Restore the full datafile backup taken before the data was deleted.
    Keep the current redo log files and control files, delete all current datafiles, and restore the backup datafiles taken before the data was deleted.
    (Temp datafiles do not have to be restored; skipping them shortens the restore time.)
    (3) Place the archive files created after the restored backup in the archive log destination.
    If archive files created after the full backup were backed up to tape and then deleted, restore those files to their original location and verify that all archive log files from the time of the full backup up to the desired recovery point exist in the log archive destination.
    (4) Perform time-based recovery up to a point before the data was deleted.
    Always specify the date and time in the format shown in the example below.
    If you specify a time after the data was deleted, the data will still appear deleted after the recovery.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> set autorecovery on
    SVRMGR> recover database until time '2001-01-30:21:00:00';
    SVRMGR> alter database open resetlogs;
    (a) If the current control file no longer exists, so that an old control file was used along with the old datafiles, specify the recovery command as follows:
    recover database using backup controlfile until time '2001-01-30:21:00:00';
    (b) If you did not restore the temp datafile, add the following statement right after startup mount:
    alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    (/oracle/oradata/temp01.dbf is an example temp datafile name.)
    (5) Verify that the desired data can be queried correctly.
    1-2. Recovering only the deleted data while keeping the rest in its current state
    In this case, restore only the SYSTEM and RBS datafiles and the datafile containing the deleted data from a backup taken before the deletion, perform time-based recovery, open the DB, and export only the desired tables.
    Then import the exported contents back into the current database.
    For the steps up to exporting the deleted data, three approaches can be considered:
    (a) If Oracle is installed on another system for backup or test purposes, restore the backup on that system and export the table there.
    (b) Restore the backup on the same system, but change the db name and oracle_sid so that a separate database is created; recover, open, and export from it. <See Bulletin:11616>
    (c) Take a cold backup of the current database, restore the old backup, and take the export.
    For (a), Oracle software of at least the version of the datafiles to be used must already be installed on the other system.
    For (b), there is no need to take a cold backup of the current state and restore it again after the export, but additional disk space for the SYSTEM, RBS, and user datafiles is required, and since a new database is being created, setup work such as preparing the init.ora file is needed.
    See <Bulletin:11616> for the detailed procedure.
    This document describes only case (c) in detail.
    (1) Carefully take a cold backup of all current datafiles, redo log files, and control files, as follows.
    This is so that, after recovering the deleted data and taking the export, you can return to the current state of the db.
    Instead of taking the cold backup with tar and the like, you can save time by taking it on disk; a backup to disk is feasible because the only datafiles needed for the time-based recovery for the export are SYSTEM, RBS, and the user datafile containing the deleted data.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> shutdown immediate;
    SVRMGR> exit
    os>cp /oracle/oradata/*.ctl /oracle/backup/*.ctl
    os>cp /oracle/oradata/redo*.log /oracle/backup/redo*.log
    os>mv /oracle/oradata/system01.dbf /oracle/backup/system01.bak
    os>mv /oracle/oradata/rbs01.dbf     /oracle/backup/rbs01.bak
    os>mv /oracle/oradata/tools01.dbf /oracle/backup/tools01.bak
    Because the redo log files and control files are also needed for the time-based recovery, copy them with cp; for the datafiles, use mv.
    The file names shown in the example above must be adapted to your environment; moving them to tape with tar is also fine, as long as all datafiles, redo log files, and control files belonging to the cold backup are saved.
    (2) From the backup taken while the deleted data still existed, restore only the SYSTEM datafile, the RBS datafile, and the datafile containing the dropped table.
    Keep the current redo log files and control files.
    (3) Place the archive files created after the restored backup in the archive log destination.
    If archive files created after the full backup were backed up to tape and then deleted, restore those files to their original location and verify that all archive log files from the time of the full backup up to the desired recovery point exist in the log archive destination.
    (4) Offline drop every datafile that will not be restored (that is, all datafiles except SYSTEM, RBS, and the user datafile containing the deleted data), then perform time-based recovery up to a point before the data was deleted.
    Always specify the date and time in the format shown in the example below.
    If you specify a time after the data was deleted, the data will still appear deleted after the recovery.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    SVRMGR> alter database datafile '/oracle/oradata/userdata05.dbf' offline drop;
    ... (offline drop every datafile that was not restored, as above)
    SVRMGR> set autorecovery on
    SVRMGR> recover database until time '2001-01-30:21:00:00';
    SVRMGR> alter database open resetlogs;
    If the current control file no longer exists, so that an old control file was used along with the old datafiles, specify the recovery command as follows:
    SVRMGR> recover database using backup controlfile until time '2001-01-30:21:00:00';
    (5) Verify that the desired data can be queried correctly, then export the required tables.
    For example, if the goal was to recover the dept table of user scott, take the export as follows:
    os>exp scott/tiger file=dept.dmp tables=dept log=deptlog1
    (6) Shut down and discard the database that was opened after the time-based recovery, and restore the current-state backup taken in step (1).
    Either shutdown immediate or shutdown abort may be used; after the shutdown, delete the restored SYSTEM, RBS, and user datafiles, and also delete the control files and redo log files that were in use.
    Then put all datafiles, control files, and redo log files that were moved aside in step (1) with mv or cp (or tar) back in their original directories under their original names. The SYSTEM, RBS, and user datafiles must also be restored from the current-state backup of step (1).
    (7) Start up the db and import the export taken in step (5):
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=deptlog2
    (8) Verify that the desired data can be queried correctly.
    2. Recovery when running in no archive log mode
    When the database is not run in archive mode, recovery is possible only if an export of the deleted data was taken, or if a cold backup taken before the data was deleted exists.
    (1) If an export exists
    Import it as follows; the example assumes the dept table of user scott was deleted.
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=log1
    (2) If a cold backup exists
    If a usable Oracle engine is installed on another system, use that system; otherwise, back up the current database, restore the cold backup on the current system, and take the export there.
    (1) Carefully take a cold backup of all current datafiles, redo log files, and control files, as follows.
    This is so that, after recovering the deleted data and taking the export, you can return to the current state of the db.
    Instead of taking the cold backup with tar and the like, you can save time by taking it on disk; a backup to disk is feasible because the only datafiles needed for the recovery for the export are SYSTEM, RBS, and the user datafile containing the deleted data.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> shutdown immediate;
    SVRMGR> exit
    os>mv /oracle/oradata/*.ctl /oracle/backup/*.ctl
    os>mv /oracle/oradata/redo*.log /oracle/backup/redo*.log
    os>mv /oracle/oradata/system01.dbf /oracle/backup/system01.bak
    os>mv /oracle/oradata/rbs01.dbf     /oracle/backup/rbs01.bak
    os>mv /oracle/oradata/tools01.dbf /oracle/backup/tools01.bak
    The file names shown in the example above must be adapted to your environment; moving them to tape with tar is also fine, as long as all datafiles, redo log files, and control files belonging to the cold backup are saved.
    (2) From the backup taken while the deleted data still existed, restore only the SYSTEM datafile, the RBS datafile, and the datafile containing the dropped table.
    The redo log files and control files from the time of that backup must also be restored.
    (3) Offline drop every datafile that was not restored (that is, all datafiles except SYSTEM, RBS, and the user datafile containing the deleted data), then open the database.
    os>svrmgrl
    SVRMGR> connect internal
    SVRMGR> startup mount;
    SVRMGR> alter database datafile '/oracle/oradata/temp01.dbf' offline drop;
    SVRMGR> alter database datafile '/oracle/oradata/userdata05.dbf' offline drop;
    ... (offline drop every datafile that was not restored, as above)
    SVRMGR> alter database open;
    (4) Verify that the desired data can be queried correctly, then export the required tables.
    For example, if the goal was to recover the dept table of user scott, take the export as follows:
    os>exp scott/tiger file=dept.dmp tables=dept log=deptlog1
    (5) Shut down and discard the database from which the export was taken, and restore the current-state backup taken in step (1).
    Either shutdown immediate or shutdown abort may be used; after the shutdown, delete the restored SYSTEM, RBS, and user datafiles, and also delete the control files and redo log files that were in use.
    Then put all datafiles, control files, and redo log files that were moved aside in step (1) with mv or tar back in their original directories under their original names.
    (6) Start up the db and import the export taken in step (4):
    os>imp scott/tiger file=dept.dmp tables=dept ignore=y commit=y log=deptlog2
    (7) Verify that the desired data can be queried correctly.

  • TDMS - time based reduction - receiver system deletion

    Experts,
    I'm doing a time-based reduction. I'm on the step "Start Deletion of Data in Receiver System". It's been running for over 18 hours.
    But I don't see any jobs running in SM66 on the Central/Control, Sender, or Receiver systems.
    When I click on the "Task" button, I see it has completed 8,444 of 12,246 sub-activities. There are 3,802 not yet started.
    We're on all the latest levels of DMIS and ECC.
    Any ideas?
    Thanks
    NICK

    Ashley and Niraj,
    Hey, I'm all for tips/tricks so don't worry about messing up my thread.
    I completely shut down the central/control system via stopsap and restarted. It was still in "running" status, but no jobs were running on the sender/receiver or central/control systems.
    So I tried the troubleshooting, but it was unclear to me what to do.
    I ended up highlighting the phase I referenced earlier, then doing "execute" again. The status changed from the "truck" to a green flag, and I started to see jobs run again on the receiver system. They have stopped again, but I see another job scheduled to run in a few minutes... It's just weird; I didn't run into this on my last time-based copy.
    I'll post a few things I've learned to increase performance:
    RDISP/MAX_WP_RUNTIME = 0
    At LEAST 25 WP and 25 BCK procs
    rec/client = OFF
    RDISP/BTCTIME = 60
    RUN STATS regularly
    TAKE OUT OF ARCHIVELOG MODE
    Read/implement these notes and update these parameters:
    o TD05X_FILL_VBUK_1 Note 1058864
    o TD05X_FILL_VBUK_2 Note 1054584
    o TD05X_FILL_BKPF Note 1044518
    o TD05X_FILL_EBAN Note 1054583
    o TD05X_FILL_EQUI Note 1037712
    Set these oracle index on rec system:
    Table: QMIH
      fields: MANDT, BEQUI
    Table: PRPR
      fields: MANDT, EQUNR
    Table: VBFA
      fields: MANDT, VBELN, VBELV, POSNV
    Set parameter 'P_CLU' to 'Y' in the following activities before you start the activities for filling internal header tables:
    TD05X_FILL_BKPF
    TD05X_FILL_CE
    TD05X_FILL_EKKO
    TD05X_FILL_VBUK
    TD05X_FILL_VBUK_1
    TD05X_FILL_VBUK_2
    TD05X_FILL_VSRESB
    TD05X_FILL_WBRK_1
    run TCODE CNVMBTACTPAR, specify the project number to do this
    IMPORTANT TCODEs
    CNV_MBT_TDMS_MY  Main TDMS starting point     
    CNVMBTMON  Process Monitor (must know your project number)
    DTLMON  MWB transfer monitor
    CNVMBTACTPAR  activity parameters
    CNVMBTACTDEF  MBT PCL activity maint
    CNVMBTTWB  TDMS workbench to develop scrambling rules
    CNV_TDMS_HCM_SCRAM  run in SENDER system for scrambling functionality
    Reports
    CNV_MBT_PACKAGE_REORG  to reorganize TDMS projects..aka delete
    CNV_MBT_DTL_FUGR_DELETE  deletes function groups associated with old projects
    Tables
    CNVMBTUSEDMTIDS   lists obsolete MTIDs
    IMPORTANT NOTES
    Note 894307 - TDMS: Tips, tricks, general problems, error tracing
    Note 1405597 - All relevant notes for TDMS Service Pack 12 and above
    Note 1402704 - TDMS Composite Note : Support Package Independent
    Note 890797 - SAP TDMS - required and recommended system settings
    Note 894904 - TDMS: Problems during deletion of data in receiver system
    Note 916763 - TDMS performance "composite SAP note"
    Note 1003051 - TDMS 3.0 corrections - Composite SAP Note
    Note 1159279 - Objects that are transferred with TDMS
    Note 939823 - TDMS: General questionnaire for problem specification
    Note 897100 - TDMS: Generating profiles for TDMS user roles
    Note 1068059 - To obtain the optimal deletion method for tables (receiver)
    Note 970531 - Installation and delta upgrade of DMIS 2006_1
    Note 970532 - Installation of DMIS Content 2006_1
    Note 1231203 - TDMS release strategy (Add-on: DMIS, DMIS_CNT, DMIS_EXT...)
    Note 1244346 - Support Packages for TDMS (add-on DMIS, DMIS_CNT, ...)
    I'm doing this for an ECC system running ECC 6.0 EHP6, by the way.
    Any help with my issue on the deletion would still be appreciated, but do post tips I don't know about.
    NICK

  • What is a time based scenario in TDMS?

    We need to transfer only the data from the last 90 days.
    We know we should use the time-based scenario. However, we cannot find instructions on how to implement this scenario.
    Could you help? Thanks!

    Hi,
    Describing how to do time-based reduction using TDMS will not be possible over this medium. It is recommended that you refer to the TDMS guides (specifically the Solution Operation Guide). Refer to the following thread for the same -
    Links, Documents, Support Pack Schedule
    However, I will outline some steps. Once you are on the TDMS overview screen, do the following:
    1) Create a project
    2) Create a sub-project
    3) Create a package (use the option "initial package for time based reduction" from the popup)
    4) Once the package screen appears, execute the various activities of the package in the correct order. Detailed documentation for each activity is available on the package screen.
    I hope this helps
    Pankaj

  • Time based trigger in SAP ME

    We need to interface data (do a web service call, for instance) at a time-based interval.
    What is the best way to trigger an interface on a time basis using SAP ME?

    Hi J,
    Take a look at the SAP NW Scheduler. It can schedule jobs and execute them. The job is implemented in ME as an MDB.
    For more information you can see this wiki page:
    [https://cw.sdn.sap.com/cw/docs/DOC-102605]
    Thanks
    Ivan
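    As a rough sketch of what Ivan describes (a job implemented as an MDB and executed by the NW Java Scheduler): the class and package names below are written from memory of the NW CE Java Scheduler documentation and must be verified against your NetWeaver release, and the web service call is a hypothetical placeholder.
    // Sketch only - verify the com.sap.scheduler.* names for your NW release;
    // the MDB deployment descriptor entries are omitted here.
    import com.sap.scheduler.runtime.JobContext;
    import com.sap.scheduler.runtime.mdb.MDBJobImplementation;

    public class TimedInterfaceJob extends MDBJobImplementation {

        // Invoked by the NW Java Scheduler at the times configured for this
        // job definition (the schedule is maintained in the scheduler, not here).
        public void onJob(JobContext ctx) {
            // Hypothetical placeholder: call the web service that moves
            // the interface data in or out of SAP ME.
            callInterfaceWebService();
        }

        private void callInterfaceWebService() {
            // ... web service client code ...
        }
    }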

  • Timer-Based Triggers

    Is it possible to create timer-based triggers on a table?
    For example, I insert a row into a table, then after N minutes/seconds one of the values in that row gets updated by the trigger.
    Is this possible?
    thanks,
    Ronan

    Ronan,
    I agree with Justin and John that dbms_job.submit is better than dbms_lock.sleep for what you have described. However, I have to wonder, given the name of your table sources_locks, what your ultimate purpose is. If this is some sort of roundabout locking implementation, then there may be a better way. Anyhow, here is an example implementing what you have described so far, using dbms_job.submit. For the sake of rapid testing and demonstration, I have used SYSDATE + (1 / (24 * 60)) to have it update the status after one minute, and DBMS_LOCK.SLEEP (45) or DBMS_LOCK.SLEEP (90) to wait 45 or 90 seconds before checking the results. Just change it to SYSDATE + (5 / (24 * 60)) for five minutes, and DBMS_LOCK.SLEEP (300) or a few seconds longer (to give the job time to run) before checking the results. Notice that the job is not submitted until after a commit is issued: at the first check, 90 seconds in but without a commit, there is no update yet. Please see the example below.
    scott@ORA92> CREATE TABLE sources_locks
      2    (lock_id NUMBER,
      3       status     VARCHAR2(32))
      4  /
    Table created.
    scott@ORA92> CREATE OR REPLACE TRIGGER timeout
      2    AFTER INSERT ON sources_locks
      3    FOR EACH ROW
      4  DECLARE
      5    v_job     NUMBER;
      6    v_update VARCHAR2 (4000);
      7  BEGIN
      8    IF :NEW.status = 'ACTIVE' THEN
      9        v_update :=  'UPDATE sources_locks
    10                   SET    status = ''TIMED_out''
    11                   WHERE  lock_id = ' || :NEW.lock_id || ';';
    12        DBMS_JOB.SUBMIT
    13          (v_job,
    14           v_update,
    15           SYSDATE + (1 / (24 * 60)));
    16    END IF;
    17  END timeout;
    18  /
    Trigger created.
    scott@ORA92> SHOW ERRORS
    No errors.
    scott@ORA92> INSERT INTO sources_locks VALUES (1, 'ACTIVE')
      2  /
    1 row created.
    scott@ORA92> SELECT SYSDATE FROM DUAL
      2  /
    SYSDATE
    04-FEB-2004 05:26:49
    scott@ORA92> BEGIN
      2    DBMS_LOCK.SLEEP (90);
      3  END;
      4  /
    PL/SQL procedure successfully completed.
    scott@ORA92> COMMIT
      2  /
    Commit complete.
    scott@ORA92> SELECT SYSDATE FROM DUAL
      2  /
    SYSDATE
    04-FEB-2004 05:28:21
    scott@ORA92> SELECT * FROM sources_locks
      2  /
       LOCK_ID STATUS
             1 ACTIVE
    scott@ORA92> BEGIN
      2    DBMS_LOCK.SLEEP (45);
      3  END;
      4  /
    PL/SQL procedure successfully completed.
    scott@ORA92> INSERT INTO sources_locks VALUES (2, 'ACTIVE')
      2  /
    1 row created.
    scott@ORA92> COMMIT
      2  /
    Commit complete.
    scott@ORA92> SELECT SYSDATE FROM DUAL
      2  /
    SYSDATE
    04-FEB-2004 05:29:07
    scott@ORA92> SELECT * FROM sources_locks
      2  /
       LOCK_ID STATUS
             1 TIMED_out
             2 ACTIVE
    scott@ORA92> BEGIN
      2    DBMS_LOCK.SLEEP (45);
      3  END;
      4  /
    PL/SQL procedure successfully completed.
    scott@ORA92> SELECT SYSDATE FROM DUAL
      2  /
    SYSDATE
    04-FEB-2004 05:29:53
    scott@ORA92> SELECT * FROM sources_locks
      2  /
       LOCK_ID STATUS
             1 TIMED_out
             2 ACTIVE
    scott@ORA92> BEGIN
      2    DBMS_LOCK.SLEEP (45);
      3  END;
      4  /
    PL/SQL procedure successfully completed.
    scott@ORA92> SELECT SYSDATE FROM DUAL
      2  /
    SYSDATE
    04-FEB-2004 05:30:39
    scott@ORA92> SELECT * FROM sources_locks
      2  /
       LOCK_ID STATUS
             1 TIMED_out
             2 TIMED_out

  • Time Based Publishing not supported in WPC

    Hi all,
    I want to implement Time Based Publishing for some WPC resources (for example web articles or paragraphs, i.e. not for pages, but resources).
    Sadly, it seems that is not allowed. There is the following note: 1310768 - Time Based Publishing not supported in WPC.
    Does anyone know an alternative way to achieve this TBP behavior? I thought of developing some task scheduler service, or perhaps a namespace filter... any help will be appreciated!
    A question aside: I don't understand why standard TBP is not supported, given that a WPC web article is a standard KM resource. Don't you find that strange?
    Thanks in advance,
    Best regards,
    Marcelo

    In case anyone is interested, I asked SAP about it.
    They said that TBP for WPC is only available for pages and, as of 7.3, is the only time-based publishing feature available for WPC. Besides that, they made clear the difference between WPC and KMC... I wrongly supposed that WPC relied on KMC.
    So... I'm really stuck on this.
    I'll keep this question open in case anyone can help or share an idea.
    Thanks in advance
    Best regards,
    Marcelo

  • SCM230 - Time based Capacity Leveling by decreased storage (Days Supply)

    Hi,
    Reading through the SCM230 training course on the subject of capacity leveling, the manual describes the following functionality:
    "If you choose Time-Based Capacity Leveling by decreasing storage (days'
    supply), the system levels the order with the largest days' supply first, then
    the one with the second largest supply, and so on."
    Can anyone explain this functionality in plain English, and what/where are the parameters that influence it?
    Thanks for your help
    Mark

    Hi Marius,
    I understand forward and combined forward/backward scheduling in capacity leveling using the SNP heuristics/optimizer method. However, forward or combined forward/backward leveling does not fulfil the requirement, because capacity overloads (aka SNP planned orders) are then shifted into the future, causing shortages in weeks where demand exists. The business needs a way to shift capacity overloads into earlier buckets, if available capacity exists in previous weeks or months. If sufficient available capacity does not exist in previous weeks, then the remaining capacity overload should stay in the week/month where it occurs due to excess demand. This gives the business an opportunity to meet demand by finding alternate ways of balancing that capacity overload: either adding capacity (running machines overtime, opening up production on a non-workday) or outsourcing the overload to subcontractors. However, neither the heuristics nor the optimizer method of capacity leveling in SNP fulfils this requirement.
    Regards,
    Jagjeet.

  • Sales orders in TDMS company/time based reduction are outside the scope

    Guys,
    I have had some issues with TDMS, in that it didn't handle company codes without plants very well. That was fixed by SAP. But I have another problem now. If I do a company code and time-based reduction, it doesn't seem to affect my sales orders in VBAK/VBUK as I would have expected. I was hoping it would only copy across sales orders that have a plant assigned to one of the company codes specified in the company-code-based reduction scenario. That doesn't seem to be the case.
    VBAK is now about one third of the size of the original table (number of records), but I see no logic behind the reduction. I can clearly see plenty of sales documents with a time stamp far earlier than what I specified in my copy procedure, and I can see others with plant entries that should have been excluded from the copy because they belong to different company codes than the ones I specified.
    I was under the impression that TDMS would sort out the correct sales orders for me, but somehow that doesn't seem to be happening. I have to investigate further as to what exactly it did bring across, but just by looking at the target system I can see plenty of "wrong" entries, either with a date outside the scope or with a plant outside the scope.
    I can also see that at least the first 10,000 entries in VBAK in the target system have a valid-from and valid-to date of 00.00.0000, which could explain why the time-based reduction didn't work?
    Did you have similar experiences with your copies? Do I have to do a more detailed reduction such as specifying tables/fields and values?
    Thanks for any suggestions
    Stefan

    The reduction itself is not based on the date when the order was created; the logic extends it to invoices and offers, basically the complete update process.
    If you see data that definitely shouldn't be there I'd open an OSS call and let the support check what's wrong.
    Markus

  • Calculation of SLA times based on Service Organization

    Is it possible to calculate the SLA times based only on the service org?
    a) Using service contracts, i.e. create an SC with only the org and assign the Service & Response profiles.
    Or else as described below.
    Please share your thoughts.
    I maintain the Service & Response profiles under "Maintain Availability and Response Times".
    Can I access these values directly in the BADI?
    My scenario is:
    a) An agent belongs to a service org.
    b) I define these profiles separately for each org (Org1, Org2, etc.) in the above tcode. This is a manual entry. I know we don't have an org-to-profile mapping in the above tcode; I just plainly maintain them.
    c) In the BADI I check the org entered in the complaint.
    d) For example, if Org1 is entered, I want to access the profiles for Org1 - an if/else ladder.
    e) Then use these profiles to calculate the SLA times.
    f) Then save the document.
    g) Also trigger an e-mail stating the above timelines.
    Is the above flow possible?
    Let me know if you want me to post this to another thread.
    Thanks
    amol

    Shalini,
    I will just be maintaining the service and response profiles in the "Maintain Availability and Response Times" tcode.
    There won't be an exact mapping stored in any table.
    My logic would be (I don't know whether this is right or wrong):
    1) Once I get the org, I would compare like this:
    if (orgdata = org1)
    then service profile 1, etc.
    2) Then apply the profiles to the calculation of the SLA times.
    I think we can achieve what you said using CRM_ORDERADM_I_BADI.
    Or we need to use the BADIs specifically mentioned for service contract determination and calculation of SLA.
    As you know, in SAP for SLA times we need to have a service contract for a) the customer, b) the org, and many other parameters. To this SC we then associate the service and response profiles. When the SC is determined in a complaint, the service and response profiles will be used to get the SLA times.
    But my requirement is to determine the SLA times based on the service organization, not based on the customer or any other parameters.
    For example: if my service org is in India the times would differ; if my service org is in the US the times would differ.
    So let me know which approach would be best:
    use the BADIs as above, or define different service contracts (without partner functions, customer, etc.) for the different orgs?
    Thanks
    Amol
