/usr/sap/ccms is above the threshold value

Hi,
Can you guide me on this one: which files should be deleted from this directory, and where can I find those files?
Hoping for your immediate response.
Regards,

Hi,
Please log in at the OS level:
cd /usr/sap/ccms
Check the sizes of the directories with: du -sk *
This will show the size of all the directories and files.
Find the directory that is very large, go into it, and check which files are very large and how old they are (old files).
Check whether the old files that are present are still in use. If they are not in use, you can move those files into another directory that has free space.
Monitor for one week; if there is no inconsistency, you can then delete those files.
Please take the proper approvals before deleting anything.
Otherwise, please ask the backup team to take a backup of that directory first.
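For illustration only, the checks above could look like this at OS level (assuming a Unix-like host with a GNU-style find; the 100 MB size limit, the 30-day age and the hold directory are made-up example values, not recommendations):

cd /usr/sap/ccms
du -sk * | sort -n                                          # subdirectory sizes in KB, largest last
find /usr/sap/ccms -type f -size +100000k -mtime +30 -ls    # large files not modified for 30+ days
# move a candidate file to a filesystem with free space instead of deleting it straight away
mv /usr/sap/ccms/<large_old_file> /other_filesystem/ccms_hold/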
or
Each agent can generate around 500 MB of data in the ../traces and ../data directories. If this fills up the file system, in an emergency you can delete the contents of:
/usr/sap/ccms/wilyintroscope/traces
/usr/sap/ccms/wilyintroscope/data
Then restart the Enterprise Manager (EM).
To reduce the required space, open the ./config/IntroscopeEnterpriseManager.properties file, where you will find entries like the following:
introscope.enterprisemanager.transactionevents.storage.max.data.age=N
#The trace files under the "traces" folder will be deleted after N days.
introscope.enterprisemanager.smartstor.tier3.age=N
#The historical files under the "data" folder will finally be deleted after N days.
introscope.enterprisemanager.transactionevents.storage.optimize.time.offsethour
#This only controls the time of day at which the housekeeping runs.
Please adjust the settings accordingly. These changes will not have an immediate effect on the current situation. The simplest way to solve the problem is just to delete the whole "data" and "traces" folders; the EM will automatically re-create them when it restarts. Be aware that the historical data will be lost, so we suggest backing up the historical files first. From then on, the housekeeping will run according to the modified settings.
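For the emergency cleanup described above, a rough sketch (the EM installation path under /usr/sap/ccms and the backup target are assumptions; stop and start the EM with whatever procedure your installation uses):

cd /usr/sap/ccms/wilyintroscope                                          # assumed EM installation directory
# stop the Enterprise Manager first, using your normal stop procedure
tar -czf /backup/wily_data_traces_$(date +%Y%m%d).tar.gz data traces    # optional backup of the historical data
rm -rf data traces                                                      # EM re-creates both folders on restart
# start the Enterprise Manager again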
Alternatively, you can put the /data directory on a different disk drive / disk controller. For this purpose, change
the following properties in the file config/IntroscopeEnterpriseManager.properties:
introscope.enterprisemanager.smartstor.directory=/your/separate/drive/data
introscope.enterprisemanager.smartstor.directory.archive=/your/separate/drive/archive
introscope.enterprisemanager.smartstor.dedicatedcontroller=true
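Before changing these properties, the new location has to exist and be writable by the OS user that runs the EM. A short sketch (the mount point, the em_adm user and the sapsys group are placeholders/assumptions; the copy step only applies if you want to keep the existing history and should be done while the EM is stopped):

mkdir -p /your/separate/drive/data /your/separate/drive/archive
chown em_adm:sapsys /your/separate/drive/data /your/separate/drive/archive   # assumed EM OS user and group
# with the EM stopped, optionally copy the existing SmartStor data across
cp -pr /usr/sap/ccms/wilyintroscope/data/. /your/separate/drive/data/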
Regards
Sreedhar Reddy
Edited by: Sreedhar Reddy on Feb 23, 2009 11:42 PM

Similar Messages

  • The directory /usr/sap/ccms does not exist

    Hi friends.
    This quality system (SQ1) was built via backup/restore, with ABAP and Java instances, and its J2EE instance is running.
    I'm looking at a log error that occurs when running a query and publishing it to the web.
    0017A451263E004F0000000000006A5100049EB7A28AF12C # 1300411450061 # # # # com.sap.engine.library.monitor.mapping.ccms.RuntimeParameters com.sap.engine.library.monitor.mapping.ccms.RuntimeParameters SAPEngine_System_Thread ####### [impl: 5] 95 # # 0 # 0 # Error # # Plain # # # Can not create connector CCMS directory (/usr/sap/ccms/SQ100/j2ee3903450). Check file system permissions. #
    1.5
    It is strange, because the directory /usr/sap/ccms does not exist here, although it exists in development and production. Should I reinstall CCMS, or should I manually create this directory with its subdirectories and restart the system, so that it can create the log files it is trying to write but cannot because the directory is missing?
    So far I have not found anything that helps me solve this. What do you recommend?
    Regards
    Richard

    Hi,
    >
    >  0017A451263E004F0000000000006A5100049EB7A28AF12C # 1300411450061 # # # # >com.sap.engine.library.monitor.mapping.ccms.RuntimeParameters >com.sap.engine.library.monitor.mapping.ccms.RuntimeParameters SAPEngine_System_Thread ####### [impl: 5] 95 # # 0 # 0 # >Error # # Plain # # # Can not create connector CCMS directory (/ usr/sap/ccms/SQ100/j2ee3903450). Check file system
    >permissions. #
    >  # 1.5
    >
    As per your log, the error points to a file system permission problem. Please check the file system permissions.
    Thanks
    Sunny

  • Average of all values between (first above and last above) a threshold value.

    Currently I have a VI which I programmed a year or more ago that grabs any value out of a data set that is above a threshold value. It is used for capturing the average of all values over a threshold when there is a single peak of values above that threshold. However, when there are two or more peaks above this threshold, only the values above that value are averaged, so the end result is the average of two or more averages.
    What I need is the average of every value that occurs after the threshold is first reached and before it is dropped below for the last time. Picture a set of data whose graph looks like the letter "M", with the threshold halfway up the M: I want to show the average of (the first hump, plus the dip between the humps, plus the last hump), but what I'm getting now is the average of (the first hump, plus the last hump). What I get now cuts out the data between the two peaks.
    Any assistance would of course be appreciated.
    Attachments:
    AveAboveThresholdAll.vi (15 KB)

    Actually I think that Tim's solution differs slightly from the stated problem, which is that both limits should be above the threshold value. Since the second array is inverted, the rounding should not be towards +Infinity but towards -Infinity, in order to really only go up to the last element that is above the threshold. The code as written by Tim takes one sample more, which may or may not be significant.
    Nevermind, I take that back.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Configuration of FP-Modules: in the catalog something is written about sending data on data change. There is a checkbox in FP-Explorer, but I can't find a configuration menu in FP-Explorer to set the threshold value at which the data will be sent.

    Configuration of FP-Modules: in the catalog something is written about sending data on data change. There is a checkbox in FP-Explorer, but I can't find a configuration menu in FP-Explorer to set the threshold value at which the data will be sent.

    The FP-1600 modules automatically send data on-change only. The change threshold is 0% by default, so any change, even a single least significant bit, is sent back to the computer. If the FP-1600 firmware revision supports deadbanding (firmware revision 3.0.x and later; it can be downloaded from ftp://ftp.ni.com/support/fieldpoint/Update/FPEthernet0320.zip ), then on each analog or count channel you can specify a percentage change for deadbanding in FieldPoint Explorer. This is done on a channel-by-channel basis by right-clicking on the particular IO module, choosing Edit this Device, then choosing Channel Configuration. Each channel (if it supports deadbanding) will have a deadband entry box on the middle right side of the screen.
    The FP-1000/1001 serial network modules do not support deadbanding. The checkbox in FieldPoint Explorer is simply for how FieldPoint Explorer displays the data to the user and does not affect client programs nor the behavior of the network module itself.
    Regards,
    Aaron

  • Can SAP/CCMS automatically clear the RZ20 alerts after 30 days?

    Can SAP/CCMS automatically clear the RZ20 alerts after 30 days?
    And if so, how?

    Hi William,
    You can reorganize the completed alerts using the analysis method for AlertsInDB. The method deletes older alerts that match your specifications and reduces the space occupied in the database. To do this, proceed as follows:
       1. Choose CCMS → Control/Monitoring → Alert Monitor, or call transaction RZ20.
       2. Expand the SAP CCMS Technical Expert Monitors monitor set, place the cursor on the CCMS Selfmonitoring monitor, and choose Start Monitor.
       3. Expand the alert monitoring tree. You will find the monitoring object you are looking for under CCMS_Selfmonitoring → Runtime → AlertsInDB. Select the object and choose Start Analysis Method.
       4. Specify the date and time from which completed alerts are to be deleted.
    Regards,
    suraj

  • Changing the predefined SAP logo, and what's the px value for font size

    Hi,
    I am unable to remove the SAP logo on the AJAX framework masthead. I want to replace that SAP logo with my own logo,
    but when I try to set the logo URL to that image, it doesn't change.
    Please help me resolve the problem.
    Also, what is the px value for a font size larger than 10?

    Hi,
    Setting the logo URL should work when you edit it in the correct theme. Usually cache issues prevent you from seeing the desired result, so clearing the cache should resolve your issue. Otherwise, if need be, you can edit the com.sap.portal.navigation.masthead PAR file and define the manner in which you want your logo to be displayed in the masthead. Download the PAR file and open it in NWDS, search for the "logo" keyword and you will find it, edit the current image, and upload the PAR file again.
    Regards
    Anand Sekar

  • How do I clean up the directory /usr/sap/ccms?

    Can I clean it manually, or by running a job?

    Hi,
    Yes, you should be able to run a job from cron; first clean up the folder "data/archive" (after your system backup).
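    As a rough illustration only (the archive path, the 30-day retention and the schedule are example assumptions; adjust them to your landscape and run the job as <sid>adm), a crontab entry could look like:

    # every Sunday at 02:00, delete archive files older than 30 days
    0 2 * * 0 find /usr/sap/ccms/wilyintroscope/data/archive -type f -mtime +30 -exec rm -f {} \;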
    Thanks,
    George

  • CCMS threshold Value baseline calculation

    Hi Guys,
    I am looking for CCMS threshold value baseline calculations for ECC, BW, XI, SRM, CRM, APO and Solution Manager systems. Many portals provide a description of each MTE class, but none provide the baseline calculation.
    Please let us know some expert advice.
    Thanks.
    Manjunath

  • CCMS threshold values

    Hi all,
    I'm in the process of configuring CCMS threshold values for the ECC server. We are trying to configure the virtual memory, virtual CPU and LAN error threshold values. By default the threshold value is 2,147,483,647 MB for all three. Our requirement is that when virtual CPU usage exceeds 75%, an alarm should be triggered and the status should change from green to yellow. We have no idea where this default value is picked up from.
    How do we calculate the threshold values for virtual CPU, memory and LAN?
    Can anyone throw some light on this issue?
    Regards,
    Rajagopal

    Hi Michael Ruth ,
    Thanks for the response. The link gives just an overall view of changing the values, but my requirement is how to calculate the virtual memory and CPU thresholds. Where can I see the virtual CPU and memory values? By default a huge value is being taken. My installations are on VMware. Kindly let me know how to proceed further to complete my configuration.
    Regards,
    Rajagopal

  • Where does the details of the files on usr/sap directories get saved?

    hi,
    when a (.dat) file is created by background job processing in the /usr/sap/ directories, where do details such as last changed and last modified of these files get saved?
    In which table do these details get saved?
    thanks,
    swamy

    Hi,
    Transaction AL11 does exactly what you are looking for:
    it lists the files present on the server, sorted by date and time.
    You can also use the function module EPS2_GET_DIRECTORY_LISTING, which returns a date/time stamp.
    Hope it helps.

  • /usr/sap/trans/actlog - Increase log detail

    Hi everyone,
    I'm after a way of logging when objects are added to or deleted from a modifiable change request (e.g. I want to see the object details, who made the change, and the date/time). I'm aware that when you create a new transport request/task a log is created in /usr/sap/trans/actlog; however, I was wondering whether there is a way to increase the detail level of what is written to the log file, or whether someone knows of another way of tracking this information.
    Cheers
    Shaun

    Operating system level files in the transport process:
    The SAP C program TP requires a special file structure for the transport process. The file system is operating-system dependent. TP uses a transport directory or file system, which is called /usr/sap/trans.
    The /usr/sap/trans file system is generally NFS-mounted from the development system to the other systems, unless a system is defined as a single system in the CTS pipeline. All the subdirectories should have <SID>adm as the owner and sapsys as the group, and proper read, write and execute access should be given to the owner and the group. The TP imports are always performed by <SID>adm.
    The following are the subdirectories in /usr/sap/trans:
    /data
    /cofiles
    /bin
    /log
    /actlog
    /buffer
    /sapnames
    /tmp
    /usr/sap/trans/data: holds the data of transport objects after they are released. This subdirectory contains files named R9<5 digits>.<source SID>, containing the exported objects. An example of a data file is R904073.DEV. The extension DEV means the data file was released from the DEV (development) system.
    /usr/sap/trans/cofiles: the cofiles directory holds the command files for all change requests. This subdirectory contains command files named K9<5 digits>.<source SID>. They contain, for example, the import steps to be performed; these files act as command or control files used to import the data files. The common directory for the CTS system is /usr/sap/trans. After a change request is released from the source system, the data is exported immediately to the file system of the operating system. The SAP transport utility TP uses the cofile to transport a data file. An example of a file in the cofiles directory is K904073.DEV.
    /usr/sap/trans/bin: holds the most important file of the CTS system, TPPARAM. The TPPARAM file has all the information about the systems in the CTS pipeline; it is the parameter file for the transport program TP and the common file for all the systems in the CTS pipeline. Since /usr/sap/trans should be NFS-mounted to all the systems in a CTS pipeline, the TP program has access to the TPPARAM file from all of them. The following is an example of a typical TPPARAM file for five SAP systems in the CTS pipeline:
    #@(#) TPPARAM.sap 20.6 SAP 95/03/28
    # Template of TPPARAM for UNIX #
    # First we specify global values for some parameters, #
    # later the system specific incarnation of special parameters #
    # global Parameters #
    transdir = /usr/sap/trans/
    dbname = $(system)
    alllog = ALOG$(syear)$(yweek)
    syslog = SLOG$(syear)$(yweek).$(system)
    # System specific Parameters #
    # Example T11 #
    DEV/dbname = DEV
    DEV/dbhost = sap9f
    DEV/r3transpath = /usr/sap/DEV/SYS/exe/run/R3trans
    QAS/dbname = QAS
    QAS/dbhost = sap8f
    QAS/r3transpath = /usr/sap/QAS/SYS/exe/run/R3trans
    TRN/dbname = TRN
    TRN/dbhost = sap17
    TRN/r3transpath = /usr/sap/TRN/SYS/exe/run/R3trans
    PRE/dbname = PRE
    PRE/dbhost = sap19f
    PRE/r3transpath = /usr/sap/PRE/SYS/exe/run/R3trans
    PRD/dbname = PRD
    PRD/dbhost = sap18f
    PRD/r3transpath = /usr/sap/PRD/SYS/exe/run/R3trans
    /usr/sap/trans/log: holds all the log files, trace files and statistics for the CTS system. This subdirectory contains all log files, such as ULOGs, ALOGs and SLOGs, a log file for each executed step, and log files for steps that are executed collectively, for example step N (structure conversion) and step P (move nametabs). When the user opens the log for a transport in SE09 (Workbench Organizer) or SE10 (Customizing Organizer), the log file for that transport is read from the /usr/sap/trans/log directory. Each change request has its own log files. Examples of log files are DEVG904073.QAS, DEVI904073.QAS and DEVV904073.QAS. The name of a log file consists of the name of the change request, the executed step, and the system in which the step was executed:
    <source system><action><6 digits>.<target system>
    Now we can analyze the above example DEVG904073.QAS: <source system> = DEV, <action> = G (report and screen generation), <6 digits> = 904073 (these six digits are exactly the same as the six digits of the transport number), and <target system> = QAS.
    Possible values for <action> are:
    A: Dictionary activation
    D: Import of application-defined objects
    E: R3trans export
    G: Report and screen generation
    H: R3trans dictionary import
    I: R3trans main import
    L: R3trans import of the command files
    M: Activation of the enqueue modules
    P: Test import
    R: Execution of reports after put (XPRA)
    T: R3trans import of table entries
    V: Set version flag
    X: Export of application-defined objects.
    /usr/sap/trans/actlog: This subdirectory contains files named Z<6 digits> recording each action on a request or task, for example, creation, release, or change of ownership. The example of an action file is DEVZ902690.DEV. The following are the contents of the file:
    1 ETK220 “==================================================” “=================
    =============================
    1 ETK191 “04/30/1998″ Action log for request/task: “DEVK902690″
    1 ETK220 “==================================================” “=================
    =============================
    1 ETK185 “04/30/1998 18:02:32″ “MOHASX01″ has reincluded the request/task
    4 EPU120 Time… “18:02:32″ Run time… “00:00:00″
    1 ETK193 “04/30/1998 18:02:33″ “MOHASX01″ owner, linked by “MOHASX01″ to “DEVK902691″
    4 EPU120 Time… “18:02:33″ Run time… “00:00:00″
    1 ETK190 “05/04/1998 11:02:40″ “MOHASX01″ has locked and released the request/task
    1 ETK194 “05/04/1998 11:02:40″ **************** End of log *******************
    4 EPU120 Time… “11:02:40″ Run time… “00:00:09″
    ~
    ~”DEVZ902690.DEV” 10 lines, 783 characters
    /usr/sap/trans/buffer: This subdirectory contains an import buffer for each SAP System, named after the SID. When a change request is released, the import buffer of the target systems is updated. It contains control information on which requests are to be imported into which systems and in what order the imports must occur. There is a buffer file for each system in the CTS pipeline; for example, the buffer file for the DEV system is /usr/sap/trans/buffer/DEV.
    /usr/sap/trans/sapnames: holds information pertaining to transport requests for each system user. This subdirectory contains files named after the user's logon name. A file is created for each SAP System user, who performs transport actions, and updated when the user releases a request. There are files for each user who released change requests from the system.
    /usr/sap/trans/tmp: holds temporary data and log files. While a transport is running, the Basis administrator can find a file related to the transport in the tmp directory; that file shows the exact status of the transport (what objects are being imported at that time).
    Important SAP delivery class and table types and tables in the CTS process:
    Delivery class
    The delivery class defines who (i.e. the SAP system itself or the customer) is responsible for maintaining the table contents. In addition, the delivery class controls how the table behaves in a client copy and an upgrade. For example, when you select an SAP-defined profile to perform a client copy, certain tables are selected according to their delivery class. The DD02L table shows which delivery class a table belongs to.
    The following delivery classes exist:
    A: Application table.
    C: Customizing table, maintenance by customer only.
    L: Table for storing temporary data.
    G: Customizing table, entries protected against overwriting.
    E: Control table.
    S: System table, maintenance only by SAP.
    W: System table, contents can be transported via own TR objects.
    Table type
    The table type defines whether a physical table exists for the logical table description defined in the ABAP/4 Dictionary and how the table is stored on the database.
    The following are different table types in SAP:
    Transparent Tables
    There is a physical table on the database for each transparent table. The names of the physical table and of the logical table definition in the ABAP/4 Dictionary are the same. For every transparent table in SAP there is a table in the database. The business and application data are stored in transparent tables.
    Structure
    No data records exist on the database for a structure. Structures are used for the interface definition between programs or between screens and programs.
    Append Structure
    An Append structure defines a subset of fields which belong to another table or structure but which are treated as a separate object in the correction management. Append structures are used to support modifications.
    The following table types are used for internal purposes, for example to store control data or for continuous texts:
    Pooled table
    Pooled tables can be used to store control data (e.g. screen sequences, program parameters or temporary data). Several pooled tables can be combined to form a table pool. The table pool corresponds to a physical table on the database in which all the records of the allocated-pooled tables are stored.
    Cluster table
    Cluster tables contain continuous text, for example documentation. Several cluster tables can be combined to form a table cluster. Several logical lines of different tables are combined to form a physical record in this table type. This permits object-by-object storage or object-by-object access. In order to combine tables in clusters, at least part of the keys must agree. Several cluster tables are stored in one corresponding table on the database.
    Tables in CTS process:
    TRBAT and TRJOB:
    TRJOB and TRBAT are the major tables in the CTS process. After the TP program has sent the event to the R/3 system, RDDIMPDP checks table TRBAT in the target system to find out whether there is an action to be performed. Mass activation, distribution, or table conversions are examples of such actions. If there is an action to be performed, RDDIMPDP starts the appropriate program as a background task and then reschedules itself.
    By checking table TRJOB, RDDIMPDP automatically recognizes whether a previous step was aborted, and restarts this step. For each transport request, the TP program inserts an entry into table TRBAT. If the return code in this table is 9999, the step is waiting to be performed. Return code 8888 indicates that the step is active and currently being processed. A return code of 12 or less indicates that the step is finished. In addition, TP inserts a header entry to let the RDDIMPDP program know to start processing; the return code column therefore contains a B for begin. When RDDIMPDP is started, it sets the header entry to R(un) and starts the required program. When all the necessary actions are performed for all the transport requests, the return code column contains all the return codes received, and the TIMESTAMP column contains the finishing time. The header entry is set to F(inished). TP monitors the entries in the TRBAT and TRJOB tables; when the header entry in TRBAT is set to finished, the entry in TRJOB is deleted.
    Transport Tables SE06
    TDEVC – Development classes
    TASYS – Details of the delivery. Systems in the group that should automatically receive requests, have to be specified in table TASYS.
    TSYST – The transport layers will be assigned to the integration systems. ( Define all systems)
    TWSYS – Consolidation routes ( define consolidation path)
    DEVL – Transport layers are defined here
    In the "Configuring the CTS system" section we will learn more about the transport tables in the SE06 transaction.
    Programs in the CTS process:
    In the CTS table section we learned about the RDDIMPDP program. RDDIMPDP program needs to be scheduled in all the clients in an instance. It is recommended to schedule the RDDIMPDP as event driven.
    RDDPUTPP and RDDNEWPP programs can be used to schedule RDDIMPDP program in the background.
    The ABAP/4 programs that RDDIMPDP starts are determined by the transport step to be executed that is entered in the function field of table TRBAT.
    Function Job Name Description of transport Steps
    J RDDMASGL Activation of ABAP/4 dictionary objects
    M RDDMASGL Activation of match codes and lock objects
    S RDDDISOL Analysis of database objects to be converted
    N RDDGENOL Conversion of database objects
    Y RDDGENOL Conversion of matchcode tables
    X RDDDICOL Export of AD0 objects
    D RDDDIC1L Import of AD0 objects
    E RDDVERSE Version management update during export
    V RDDVERSL Version management update during import
    R RDDEXECL Execution of programs for post – import processing
    G RDDDIC3L Generation of ABAP/4 programs and screens
    Version Management:
    One of the important features of Workbench Organizer is Version Management. This feature works for all the development objects. Using the version management feature the users can compare and retrieve previous versions of objects.
    Version management provides comparisons, restoration of previous versions, documentation of changes and assistance in the adjustment of data after upgrading to a new release. With the release of a change request, version maintenance is automatically recorded for each object. If an object in the system has been changed N times, it will have N delta versions and one active version. To display version management, use transaction SE38 for ABAPs and SE11 for tables, domains and data elements; the path to follow is Utilities -> Display version. Using version management, users can view existing versions of previously created ABAP code, make changes to the code, compare code versions and restore the original version of the code, so previous versions can now be restored without the cut-and-paste steps of the past.
    TP and R3trans program:
    The Basis administrator uses the TP program to transport SAP objects from one system to another. TP is a C program delivered by SAP that runs independently of the R/3 system. The TP program uses the appropriate files located in the common transport directory /usr/sap/trans. TP starts C programs, ABAP/4 programs and special operating system commands to do its job. R3trans is one of the most important utility programs called by TP. Before using the TP program, the Basis administrator needs to make sure that the CTS system is set up properly and the right version of TP is running in the system. The TP program is located in the runtime directory /usr/sap/<SID>/SYS/exe/run and is automatically copied there during the installation process. TP is controlled by a global parameter file, TPPARAM, that contains the databases of the different target systems and other information for the transport process. The global parameter file determines which R3trans is used for each system. If the parameter r3transpath is not defined properly, no export or import can be done, so the Basis administrator should make sure that the default value of "r3transpath" is properly defined. Later in this chapter we will learn more about TP and R3trans and see how they are used.
    Configuring the TPPARAM file:
    Each time TP is started, it must know the location of the global parameter file. As we have seen before, the TPPARAM file should be in the directory /usr/sap/trans/bin. The parameters in TPPARAM can be either global (valid for every system in the CTS pipeline) or local to one system. The parameters are either operating-system dependent (preceded by a keyword corresponding to the specific operating system) or database dependent (containing a keyword corresponding to a specific database system).
    The global parameter file provides variables that can be used for defining parameters. The variables can be defined in format: $(xyz). The brackets can be substituted with the “\”-character if required.
    The following pre-defined variables are available for the global parameter file:
    $(cpu1): The CPU name can be sun or as4 for example. In heterogeneous networks this variable is very important.
    $(cpu2): Acronym for the name of the operating system. The example for this variable can be
    hp-ux, or sunos . This is an operating system specific variable.
    $(dname): Used for the day of the week (SUN,MON,….).
    $(mday): Used for the day of the current month (01-31).
    $(mname): Used for the name of the month (JAN…DEC).
    $(mon): Used for the Month (01-12).
    $(system): R/3 System name.
    $(wday): Day of the week (00-06, Sunday=00, Monday=01, Tuesday=02 and so on).
    $(yday): Day of the current year (001-366). Using the number any day of the year can be chosen.
    $(year): Year (Example:1998 or 1999).
    $(syear): Short form of the year (two positions).
    $(yweek): Calendar week (00-53). The first week begins with the first Sunday of the year.
    For the database connection:
    The transport environment also needs parameters to connect to the R/3 System database. As we already know, every instance in the R/3 CTS pipeline has its own database, so specific parameters should be defined for each database system. The TP program identifies the database system from the dbtype parameter of the RSPARAM file.
    The two parameters “dbname” and “dbhost” are required for ORACLE databases.
    DBHOST: is the name of the computer on which the database processes execute. TCP/IP name of the host if NT is being used.
    DBNAME: is the name of the database instance.
    As of Release 3.0E, two new parameters have been introduced.
    DBLOGICALNAME: The default value is $(system). The logical name that was used to install the database.
    DBCONFPATH: The default value is $(transdir).
    The parameters “dbname” and “dbhost” are also used for INFORMIX databases in an installation:
    DBHOST: Same as Oracle.
    DBNAME: Name of the database instance, uppercase and lowercase are distinguished here.
    INFORMIXDIR: "/informix/<SAPSID>" is the default value. Defines the directory name where the database software can be found.
    INFORMIXSQLHOSTS: "$(informixdir)/etc/sqlhosts[.tli|.soc]" is the default value under Unix. The name of the SQLhosts file with its complete path is defined with this parameter.
    INFORMIX_SERVER: "$(dbhost)$(dbname)shm" is the default value. The name of the database server may be specified for a local connect.
    INFORMIX_SERVERALIAS: "$(dbhost)$(dbname)tcp" is the default value. The name of the database server can be specified for a remote connect.
    For Microsoft SQL Server database the two parameters “dbname” and “dbhost” are also required. DBHOST: The TCP/IP name of the host on which the database is running.
    DBNAME: The database instance name.
    For DB2 in AS/400 only “dbhost” is required.
    DBHOST: System name of the host on which the database is running.
    If "OptiConnect" is used, the following line should be specified:
    OPTICONNECT 1
    For DB2/ AIX
    The two parameters “dbname” and “dbhost” are required
    DBHOST: The host on which the database processes are running. It is the TCP/IP name of the host for Windows NT (As we have seen in the earlier examples).
    DBNAME: Database instance name.
    The DB2 for AIX Client Application Enabler Software must also be installed on the host on which tp is running.
    ALLLOG: "ALOG$(syear)$(yweek)" is the default value. This variable can be used in the TPPARAM file to specify the name of a file in which tp stores information about every transport step carried out for a change request anywhere in the transport process. The file always resides in the log directory.
    SYSLOG: "SLOG$(syear)$(yweek).$(system)" is the default value. This variable can be used to name a file in which tp stores information about the progress of import actions in a certain R/3 System. The file does not store information for any particular change request and always resides in the log directory.
    tp_VERSION: Zero is the default value. If this parameter is set to a value other than zero, a lower version of tp may not work with this TPPARAM file. If the default value (zero) is set, the parameter has no effect.
    STOPONERROR: (Numeric value) The default value is 9. When STOPONERROR is set to zero, tp is never stopped in the middle of an “import” or “put” call. When STOPONERROR is set to a value greater than zero, tp stops as soon as a change request generates a return code that is equal to or greater than this value (The numeric value of the STOPONERROR parameter is stored in the variable BADRC). Change requests, which still have to be processed for the current step, are first completed. A “SYNCMARK” in the buffer of the R/3 System involved, sets a limit here. tp divides the value of this parameter between two internal variables. STOPONERROR itself is treated as a boolean variable that determines whether tp should be stopped, if the return code is too high.
    REPEATONERROR (Numeric value too): The default value is 9. The REPEATONERROR parameter is similar to STOPONERROR. The difference is, REPEATONERROR specifies the return code up to which a change request is considered to be successfully processed. Return codes less than REPEATONERROR are accepted as “in Order”. Change requests that were not processed successfully stay in the buffer.
    NEW_SAPNAMES: Default value is "FALSE". A file is created for each user of the R/3 System group in the "sapnames" subdirectory of the transport directory. Except on some operating systems, the name of the user is the name of the file. It is very important to remember that special characters or the length of the file name could cause problems. If all the R/3 Systems in the transport group are at least at Release 3.0, the TP program can handle this problem: the user names are modified to create file names that are valid on all operating systems, and the real user names are stored in a corresponding file.
    Though we have seen so many parameters, for the minimum configuration the following two parameters are very important.
    TRANSDIR: specifies the name of the common transport directory. The following is a typical example from TPPARAM of Unix as we have seen before.
    transdir = /usr/sap/trans/
    DBHOST: contains the name of the database host. In Windows NT environment, this is the TCP/IP host name. The following is an example in Unix:
    DEV/dbname = DEV
    DEV/dbhost = sap9f
    DEV/r3transpath = /usr/sap/DEV/SYS/exe/run/R3trans
    For TP, to control ‘Start and Stop’ command files and database in R/3 the following important parameters are specified in TPPARAM:
    Parameters for the tp function "PUT": LOCK_EU (boolean), default value "TRUE". Though from version 3.1 onward the tp put command is seldom used in the CTS process, it is still important to know how this parameter works. When "tp put" is used, it changes the system change option. If the parameter is set to "FALSE", nothing gets changed. If the parameter is set to "TRUE", the system change option is set to "Objects cannot be changed" at the beginning of the call, and is changed back to its previous value at the end of the call. The "tp put" command will give the exact status of the locking mechanism.
    LOCKUSER (boolean): Default value is "TRUE". This parameter concerns user logins while the tp put call is executed. If it is set to "FALSE", no locking mechanism for the users takes effect. If it is set to "TRUE", a lock marker is set at database level so that only DDIC and SAP* can log on to the system; users that have already logged on are not affected (this is a reason for activating the parameters STARTSAP and STOPSAP). The marker is removed at the end of the call, and all users can log on to the SAP R/3 System again.
    STARTSAP: Default value is " ", or "PROMPT" for Windows NT. This parameter is used by TP to start an R/3 System. It is not necessary for clients to have tp start and stop the R/3 System.
    STOPSAP: Default value is " ", or "PROMPT" for Windows NT. TP uses this parameter to stop an R/3 System.
    STARTDB: Default value is ” “. TP uses the value of this parameter to start the database of an R/3 System.
    The parameter is not active under Windows NT.
    STOPDB: Default value is ” “. TP uses the value of this parameter to stop the database of an R/3 System.
    This parameter is not active under Windows NT.
    The above parameters in UNIX can be used as following:
    STARTSAP = startsap R3
    STOPSAP = stopsap R3
    STARTDB = startsap db
    STOPDB = stopsap db
    In Windows NT:
    STARTSAP = \\$(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\startsap.exe
    R3 <SID> <HOST NAME> <START PROFILE>
    STOPSAP = \\$(SAPGLOBALHOST)\sapmnt\$(system)\sys\exe\run\stopsap.exe
    R3 <SID> <HOST NAME> <INSTANCE> <PROFILE PATH + Instance profile>
    The parameters STARTDB and STOPDB are not active under Windows NT.
    Parameters for the tp function “CLEAROLD”
    DATALIFETIME (numeric): Default value is "200". When a data file has reached a minimum age, it is moved to the olddata subdirectory with tp check all, tp clearold all. The life span of the data files in the data subdirectory can be set in days with this parameter.
    OLDDATALIFETIME (numeric): Default value is "365". When a file located in the olddata subdirectory is no longer needed for further actions of the transport system and has reached a minimum age, it is removed with tp check all, tp clearold all. The minimum age in days can be set with this parameter.
    COFILELIFETIME (numeric): Default value is "365". This parameter is used just like the DATALIFETIME parameter.
    LOGLIFETIME (numeric): Default value is "200". This parameter applies to the life span of the log files. When a log file in the log subdirectory is no longer needed by the transport system and has reached a minimum age, it is deleted with the calls tp check all, tp clearold all. The minimum age in days can be defined with this parameter.
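    Putting these CLEAROLD parameters to use, a rough sketch (the retention values are just the defaults quoted above; run the commands as <sid>adm and verify the exact TPPARAM entries for your release):

    # in TPPARAM (example retention values, in days)
    # datalifetime    = 200
    # olddatalifetime = 365
    # cofilelifetime  = 365
    # loglifetime     = 200
    cd /usr/sap/trans/bin
    tp check all        # flag files that have exceeded the configured life times
    tp clearold all     # move or remove the flagged files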
    The Three Key Utilities of the CTS system (TP, R3trans and R3chop):
    TP: Earlier in this chapter we have seen the objectives of TP. The TP transport control program is a utility program that helps the user to transport objects from one system to another. TP program is the front-end for the utility R3trans. TP stands for “Transports and Puts”. To make the TP work successfully the CTS system needs to be correctly configured. The following steps are very important for TP to run properly.
    The transport directory /usr/sap/trans must be installed and NFS mounted to all the systems in the CTS pipe line.
    RDDIMPDP program must be running (event driven is recommended) in each client. RDDIMPDP can be scheduled in the background by executing RDDNEWPP or RDDPUTPP. Use the tp checkimpdp <sap sid> command in /usr/sap/trans/bin directory as <sid>adm user to check RDDIMPDP program.
    Use the tp connect <sap sid> command in /usr/sap/trans/bin directory to see whether the tp program is connecting to the database successfully or not. To run TP command the user has to logon as <sid>adm in source or target system.
    The R/3 Systems in the CTS pipeline must have different names.
    The Global CTS Parameter File TPPARAM must be correctly configured.
    The source system (for the export) and the target system (for the import) must have at least two background work processes. TP always schedules class C jobs, so if all the background jobs are defined as class A jobs there will be problems in the transport steps.
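    A quick sanity-check sequence based on the points above (a sketch only; <SID> is a placeholder, and the commands are run as <sid>adm):

    cd /usr/sap/trans/bin
    tp checkimpdp <SID>    # verify that RDDIMPDP is scheduled in the system
    tp connect <SID>       # verify that tp can connect to the database
    tp showbuffer <SID>    # optionally, list the requests waiting for import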
    Important tips: It is always better to have an up-to-date TP version installed in your system. A user can FTP a current version of TP from SAP's SAPSERV4. Though R3trans and other utility programs can be used to do the transport, it is recommended to use TP whenever possible, for the following reasons:
    The exports and imports are done separately using the TP program. For example, when a transport is released from the system, the objects are exported from the source database to the operating system level, and then the import phase starts to transport those objects into the target system.
    TP takes care of the order of the objects: the same order that was followed to export the objects will be followed to import them into the target database.
    The TP command processes all change requests or transports in the SAP system buffer that have not yet been imported successfully. All the import steps are executed automatically after TP calls R3trans program to execute the following necessary steps:
    Dictionary Import: ABAP/4 dictionary objects will be imported in this step.
    Dictionary Activation: Nametabs or runtime descriptions are written in an inactive state. The R/3 system keeps running until the activation phase is complete; the enqueue modules are the exception in this running phase. After the activation of the new dictionary structures, the actions needed to bring the runtime objects to the target system are determined.
    Structure conversion: If necessary, the table structure is changed in this phase.
    Move Nametabs: The new ABAP/4 Dictionary runtime objects, which were inactive until now, are moved into the active runtime environment in this step, and the database structures are adjusted accordingly. From the first step up to the main import step, inconsistencies can occur in the R/3 system; after the main import phase all inconsistencies can be resolved.
    Main import with R3trans: All the data are imported completely and the system comes to a consistent state.
    Activation of enqueue-objects: The enqueue-objects cannot be activated in the same way as the objects of the ABAP/4 Dictionary, so they have to be activated after the main import in this step. They are then used directly in the running system.
    Structure Conversion of match codes, Import application defined objects, versioning and execution of user defined activities are some of the steps after activation of enqueue-objects. The next step is generation of ABAP/4 programs and screens, where all the programs and screens associated with the change request are generated. When all the import steps are completed successfully, the transport request is removed from the import buffer.
    It is recommended by SAP to schedule regular periods for imports into the target system (e.g. daily, weekly or monthly). Shorter periods between imports are not advisable. The transport to production should not be done in the off hours when the users are not working
    TP can be started with different parameters. The “tp help” command can help user to generate a short description about the use of the command.
    The following are the some important commands of TP:
    For export:
    tp export <change request>: The complete set of objects in the request is transported from the source system. This command is also used by the SAP System when it releases a request.
    tp r3e <change request>: R3trans export of one transport request.
    tp sde <change request>: Application defined objects in one transport request can be exported.
    tp tst <change request> <SAP system >: The test import for transport request can be done using this command.
    tp createinfo <change request>: This command creates a information file that is automatically done during the export.
    tp verse <request>: This command creates versions of the objects in the specified request.
    To Check the transport buffer, global parameter file and change requests:
    tp showbuffer <sid>: Shows all the change requests ready to be imported to the target system.
    tp count <sid>: Using this command users can find out the number of requests in the buffer waiting for import.
    tp go <sid>: This command shows the environment variables needed for the connection to the database of the <sid> or target system.
    tp showparams <sid>: All the values of modifiable tp parameters in the global parameter file. The default value is shown for parameters that have not been set explicitly.
    To import the change requests or transports:
    tp addtobuffer <request>.<sid>: If a change request is not in the buffer then this command is used to add it to the buffer, before the import step starts.
    tp import all <sid>: This command imports all the change requests from the buffer to the target system.
    tp put <sid>: The objective of this command is the same as "tp import all <sid>", but this command locks the system. It also starts and stops the SAP system, if the startsap and stopsap parameters are not set to " ".
    tp import <change request> <sid>: To import a single request from the source system to target system.
    tp r3h <change request>|all <sid>: Using this command the user can import the dictionary structures of one transport, or of all the transports in the buffer.
    tp act <change request>|all <sid>: This command activates all the dictionary objects in the change request.
    tp r3i <change request>|all <sid>: This command imports everything except the dictionary structures of one request, or of all requests in the buffer.
    tp sdi <change request>|all <sid>: Import application-defined objects.
    tp gen <change request>|all <sid>: Screen and reports are generated using this command.
    tp mvntabs <sid>: All inactive nametabs will be activated with this command.
    tp mea <change request>|all <sid>: This command will activate the enqueue modules in the change request.
    When you call this command, note the resulting changes to the import sequence.
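    To illustrate how these commands fit together, a minimal sketch of importing one request into a target system (the request number and SIDs are placeholders taken from the examples above; run as <sid>adm):

    cd /usr/sap/trans/bin
    # if the request is not yet in the buffer, add it first with tp addtobuffer (see above)
    tp import DEVK904073 QAS    # import a single request into QAS
    tp import all QAS           # or import everything waiting in the QAS buffer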
    Additional tp utility options:
    tp check <sid>|all (data|cofiles|log|sapnames|verbose): The user uses this command to find all the files in the transport directory that are not waiting for imports and have exceeded the minimum age specified with the COFILELIFETIME, LOGLIFETIME, OLDDATALIFETIME and DATALIFETIME parameters of the TPPARAM file.
    tp delfrombuffer <request>.<sid>: This command removes a single change request from the buffer. In case of TMS, the request will be deleted from the import queue.
    tp setstopmark <sid>: A flag is set in the list of requests ready for import into the target system. When the user runs tp import all <sapsid> or tp put <sapsid>, only the requests in front of this mark are processed. After all the requests in front of the mark have been imported successfully, the mark is deleted.
    tp delstopmark <sid>: This command deletes the stop mark from the buffer if it exists.
    tp cleanbuffer <sapsid>: Removes all the change requests from the buffer that are ready for the import into the target system.
    tp locksys <sid>: This command locks the system for all the users except SAP* and DDIC. The users that have already logged on are not affected by the call.
    tp unlocksys <sid>: This command unlocks the system for all the users.
    tp lock_eu <sid>: This command temporarily sets the system change option to "system cannot be changed".
    tp unlock_eu <sid>: This command unlocks the system for all the changes.
    tp backupall <sid>: This command starts a complete backup using R3trans command. It uses /usr/sap/trans/backup directory for the backup.
    tp backup delta <sid>: Uses R3trans for a delta backup into /usr/sap/trans/backup directory.
    tp sapstart <sid>: To start the R/3 system.
    tp stopsap <sid>: To stop the R/3 system.
    tp dbstart <sid>: To start the database.
    tp dbstop <sid>: To stop the database.
    Unconditional modes for TP: Unconditional modes are used with the TP program and are intended for special actions needed in the transport steps. Using an unconditional mode, the user can override the rules defined by the Workbench Organizer. Unconditional modes should only be used when needed; otherwise they might create problems for the R/3 system database. An unconditional mode is specified after the letter "U" in the TP command and can be a digit between 0 and 9, each with its own meaning. The following is an example of an import using unconditional modes.
    tp import devk903456 qas client100 U12468
    0: Called an overtaker; the change request can be imported from the buffer without deleting it, and then unconditional mode 1 is used to allow another import in the correct location.
    1: If U1 is used with the export it ignores the correct status of the command file; if it is used with import it lets the user import the same change request again.
    2: When used with tp export, it tells the program not to expand the selection with TRDIR brackets. If used in the tp import phase, it overwrites the originals.
    3: When used with tp import, it overwrites the system-dependant objects.
    5: During the import to the consolidation system it permits the source systems other than the integration system.
    6: When used in import phase, it helps to overwrite objects in unconfirmed repairs.
    8: During import phase it ignores the limitations caused by the table classification.
    9: During import it ignores that the system is locked for this kind of transport.
    R3trans: TP uses the R3trans program to transport data from one system to another in the CTS pipeline. An experienced Basis administrator can also use R3trans directly to export and import data from and into any SAP system. Using this utility, transports between different databases and operating systems can be done without any problems. Different versions of R3trans are fully compatible with each other and can be used for export and import. The Basis administrator has to be careful when using R3trans across different release levels of the R/3 software; logical inconsistencies might occur if an up-to-date R3trans is not used for the current version of the R/3 system.
    The syntax for using the control file is as follows:
    R3trans [<options>] <control file> (several options can be used at the same time; at least one option must be given)
    For example: R3trans -u 1 -w test.log test
    In the above example an unconditional mode is used, a log file "test.log" receives the log output, and a control file "test" contains the instructions for R3trans to follow. The user needs to log on as <sid>adm to execute R3trans.
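    For orientation only, a hypothetical control file "test" for an export might look roughly like the following; the exact keywords and the table selection are assumptions and should be checked against the R3trans documentation for your release:

    export
    client = 100
    file   = '/usr/sap/trans/data/test.dat'
    select * from t000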
    The following options are available for the R3trans program:
    R3trans -d: This command is used to check the database connection.
    R3trans -u <int>: Unconditional mode can be used, as we have seen in the above example.
    R3trans -v: This is used for verbose mode. It writes additional details to the log file.
    R3trans -i <file>: This command directly imports data from data file without a control file.
    R3trans -l <file>: This provides output of a table of contents to the log file.
    R3trans -n : This option provides a brief information about new features of R3trans.
    R3trans -t: This option is used for the test mode. All modifications in the database are rolled back.
    R3trans -c <f1> [<f2>]: This command is used for conversion. The <f1> file will be copied to <f2> file after executing a character set conversion to the local character set.
    Important tips: Do not confuse the backup taken using R3trans with database backup. The backups taken using R3trans are logical backups of objects. In case something happens to the SAP system these backups can not be used for recovery. R3trans backups can be only used to restore a copy of a particular object that has been damaged or lost by the user.
    R3trans -w <file>: As we have seen in the above example this option can be used to write to a log file. If no file is mentioned then trans.log is default directory for the log.
    R3trans also can be used for the database backup.
    R3trans -ba: This command is used for a complete backup. We will see in the next paragraph how to use
    the control file for the backup.
    R3trans -bd: This command is used for a delta backup if the user does not want a complete backup.
    R3trans -bi: This option

  • How can i get the two values?

    There are two peaks above the threshold in a spectrum diagram; I want to get their coordinates. How can I do that?
    thanks!

    LabVIEW has a VI that does that.
    From your diagram, go to the Analyze palette, then select Waveform Monitoring, then Waveform Peak Detection.vi. You specify the threshold and width, and the VI tells you how many peaks it found and their locations and amplitudes. If you want to see the highest two peaks, you can use the Array Max and Min function (on the Array palette) and then search the array for the next highest: use a For loop to auto-index the array, a shift register for the next-highest value (initialized to 0 or some number below any of your expected values), and search for a value > previous but < max.

  • Threshold value problem.

    Hi, experts.
    Now I'm testing the preference calculation.
    After I save a sales order document, the sales order is replicated to the GTS server.
    And the threshold value is calculated by the system.
    In my understanding, the threshold value should come from the rule that is assigned to the relevant product.
    When I executed the preference calculation in the GTS system, the threshold value was calculated correctly.
    But after the GTS system receives the sales order document from the ERP system, the value is different from the value of the preference calculation executed in the GTS system.
    And even when I change the rule's condition (%), the threshold value of the sales document in the GTS system is not changed.
    But the threshold value of GTS's preference calculation transaction is changed whenever I change the rule.
    I want to know how to solve this issue.
    The threshold values shouldn't be different from each other.
    Thank you for reading.
    Best regards,
    Jong Hwan.
    Edited by: Jong-Hwan Park on May 26, 2011 5:11 AM

    Dear Marc P. Gilomen,
    I totally understand what you have written above.
    The threshold value shouldn't change until I change the rule related to the product.
    However, after I changed the rule, I expected the threshold value to change according to the rule.
    But it doesn't.
    This is the point.
    After I change the rule, for example from 35% to 45%, the threshold value in the GTS preference determination transaction '/SAPSLL/PRECA01' is changed, but the replicated sales order's threshold value is not changed. Delivery and billing behave the same way as the sales order.
    It does not change, whatever I do.
    So I want to solve this problem, but so far I have not found the answer.
    Thank you for reading.
    Best regards,
    Jong Hwan

  • Tables for threshold value in GTS 11

    Good morning,
    to get away from our old ERP preference calculation we have bought a license for GTS. At the moment we are discussing whether to implement version 10.1 or even 11.
    I am worried because I am not sure how to get data out of GTS into BI (BW) with version 11. We need the price and the threshold value (formerly table MMPREF_PRO_....) and the data for the vendor declaration from the supplier and the compression (LFEI/MAPE).
    Does anybody have experience with getting this into BI? And with the "new" table names in GTS?
    I hope my questions are not too dumb, but we are just starting.
    Thanks for any help and have a nice day.
    Rgds Alex Linck
    Kostal Germany

    Hi ,
    Customize according to the paths given below for GTS to BI (BW) data replication.
    IMG: Integration with Other mySAP.com Components >> Data Transfer to the SAP Business Information Warehouse >> General Settings >> Maintain Control Parameters for Data Transfer.
    GTS > General Settings > Document Structure > Define Document Types > turn on the Transfer to SAP NetWeaver BI Active flag.
    GTS > General Settings > Document Structure > Define Item Categories > turn on the Transfer to SAP NetWeaver BI Active flag.
    GTS > General Settings > Organizational Structure > Control at Foreign Trade Organization [FTO] for SAP NW BI > in the BI Active column, turn this flag on for active FTOs.
    Ashish

  • DSWP threshold values all change

    Hi,
    I have a SolMan EHP1 system being used as a CEN. Now I would like to set up email alert monitoring, and to test it I change the threshold values of a filesystem and check whether it triggers an email. The problem is that when I change the threshold values of a filesystem, let's say sapreorg:
    G->Y
    Y->R
    R->Y
    Y->G
    it seems that the top value populates the entire column with identical values for all the filesystems, when I actually want them to have individual values. These values also do not get transferred to RZ20 on the CEN for that system. I would have expected the values set in DSWP system monitoring to be reflected in RZ20 under Filesystems, but the values do not change.
    Is this a bug, or is it standard behaviour? I have trawled the Service Marketplace for OSS notes and found nothing, and I have checked the forum but cannot find exactly what I am looking for.
    Your help is much appreciated.
    Thanks,
    Mani

    Hi Nibu,
    Yes, all the threshold values have been set through this method, and it is quite easy to set up. In the CEN's RZ20 I could see entries for the dev and test servers under the "Filesystems" node, so I could assign Z auto-reaction methods to them to send emails with specific messages. In RZ21 it seems I can only assign auto-reaction methods with a generic message for, for example, "Filesystems", but I cannot dig down further into Filesystems and select a specific alert, such as sapreorg, saparch or /usr/<SID> for a specific system reaching its threshold, so that we could respond to it quickly rather than trawl through the alerts to see which one is causing the issue.
    Does anyone know how I can assign email auto-reaction methods to specific alerts in the CEN without setting this up locally in the monitored system?
    Mani
