Fast growing db2diag.log (FP4 V9.5)

Does anyone have experience with V9.5 FP4SAP?
We implemented this FP4 last week (coming from FP2a). Since then about 1 MB of db2diag.log has been written, even though system usage is really low.
We have another system still on FP2a that is used much more heavily (as a productive system), and there the db2diag.log is only about 2.5 MB after two months.
We checked SAP Note 1086130 for DB parameters and did not find any required change (from FP2a to FP4).
Any idea is welcome!
Best regards.

db2diag.log extract 2009-08-02 23:00-23:59
2009-08-02-23.24.20.577179+120 I1019336A362       LEVEL: Warning
PID     : 602270               TID  : 4114        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000
EDUID   : 4114                 EDUNAME: db2logmgr (MES) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3108
MESSAGE : Started archive for log file S0001678.LOG.
2009-08-02-23.24.21.914535+120 I1019699A422       LEVEL: Warning
PID     : 602270               TID  : 4114        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000
EDUID   : 4114                 EDUNAME: db2logmgr (MES) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3180
MESSAGE : Completed archive for log file S0001678.LOG to TSM chain 0 from
          /db2/MES/log_dir/NODE0000/.
2009-08-02-23.28.53.621596+120 I1020122A483       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Locklist" From: "3712" <automatic>  To: "3552" <automatic>
2009-08-02-23.28.53.627724+120 I1020606A545       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, access plan manager, sqlra_resize_pckcache, probe:150
CHANGE  : APM : Package Cache : FROM "5208760" : TO "4191641" : success
IMPACT  : None
DATA #1 : String, 29 bytes
Package Cache Resized (bytes)
2009-08-02-23.28.53.629400+120 I1021152A485       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Pckcachesz" From: "1311" <automatic>  To: "1055" <automatic>
2009-08-02-23.28.54.510395+120 I1021638A507       LEVEL: Info
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbAlterBufferPoolAct, probe:90
MESSAGE : Altering bufferpool "IBMDEFAULTBP" From: "19552" <automatic> To:
          "15648" <automatic>
2009-08-02-23.29.24.543188+120 I1022146A494       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Database_memory" From: "205060" <automatic>  To: "185520" <automatic>
2009-08-02-23.29.54.555357+120 I1022641A483       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Locklist" From: "3552" <automatic>  To: "3392" <automatic>
2009-08-02-23.29.54.559234+120 I1023125A545       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, access plan manager, sqlra_resize_pckcache, probe:150
CHANGE  : APM : Package Cache : FROM "4191641" : TO "3937361" : success
IMPACT  : None
DATA #1 : String, 29 bytes
Package Cache Resized (bytes)
2009-08-02-23.29.54.560050+120 I1023671A484       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Pckcachesz" From: "1055" <automatic>  To: "991" <automatic>
2009-08-02-23.29.55.097984+120 I1024156A507       LEVEL: Info
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, buffer pool services, sqlbAlterBufferPoolAct, probe:90
MESSAGE : Altering bufferpool "IBMDEFAULTBP" From: "15648" <automatic> To:
          "13914" <automatic>
2009-08-02-23.30.25.137828+120 I1024664A494       LEVEL: Event
PID     : 602270               TID  : 3342        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000         DB   : MES
APPHDL  : 0-8                  APPID: *LOCAL.DB2.090729131318
AUTHID  : MESADM 
EDUID   : 3342                 EDUNAME: db2stmm (MES) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : STMM CFG DB MES: "Database_memory" From: "185520" <automatic>  To: "176480" <automatic>
2009-08-02-23.41.12.098793+120 I1025159A362       LEVEL: Warning
PID     : 602270               TID  : 4114        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000
EDUID   : 4114                 EDUNAME: db2logmgr (MES) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3108
MESSAGE : Started archive for log file S0001679.LOG.
2009-08-02-23.41.13.334331+120 I1025522A422       LEVEL: Warning
PID     : 602270               TID  : 4114        PROC : db2sysc 0
INSTANCE: db2mes               NODE : 000
EDUID   : 4114                 EDUNAME: db2logmgr (MES) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3180
MESSAGE : Completed archive for log file S0001679.LOG to TSM chain 0 from
          /db2/MES/log_dir/NODE0000/.
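
To see which components dominate the growth, it can help to count db2diag.log records by severity level and by reporting function before pointing at the STMM events. A rough sketch using standard tools (the diagnostic path is an assumption for this instance):

# Count db2diag.log records by severity level:
grep "LEVEL:" /db2/db2mes/sqllib/db2dump/db2diag.log | sed 's/.*LEVEL: //' | sort | uniq -c
# Count records per reporting function, most frequent first:
grep "FUNCTION:" /db2/db2mes/sqllib/db2dump/db2diag.log | sort | uniq -c | sort -rn | head

If the bulk of the records are db2stmm and db2logmgr entries like those above, that would suggest the file is growing through routine Event/Warning traffic rather than through errors.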

Similar Messages

  • Fast growing object in tablespace

    Hi Experts
    Can anyone tell me how to find the fast-growing objects in the database? The database is growing very fast and I want to know which objects those are.
    thanks in advance
    anu

    I would change this query to group by object, since the OP is interested in the size of the growing objects rather than of the individual segments:
    select owner, segment_name, segment_type, sum(bytes)/1024/1024 "SIZE(MB)", sum(blocks) blocks
    from dba_segments
    where segment_type = 'TABLE' and owner = 'LOGS'
    group by owner, segment_name, segment_type;
    Regards.

  • Error Database DB2 (db2diag.log)

    Hi, I need help.
    The db2diag.log shows the message below, and the database collapses every 48 hours.
    2011-08-01-08.59.46.316326-300 I30710990A559      LEVEL: Warning
    PID     : 14564                TID  : 427         PROC : db2sysc 0
    INSTANCE: db2epd               NODE : 000         DB   : EPD
    APPHDL  : 0-94                 APPID: 10.20.0.61.53713.110801135412
    AUTHID  : SAPEPDDB
    EDUID   : 427                  EDUNAME: db2agent (EPD) 0
    FUNCTION: DB2 UDB, data protection services, sqlpgWaitForLrecBufferLimit, probe:410
    MESSAGE : ZRC=0x870F0151=-2029059759=SQLO_WP_TERM
              "The waitpost area has been terminated"
    DATA #1 : String, 10 bytes
    Ignore rc.
    please help.

    Hi, I would like to have more information. In db2diag.log, do you have any LEVEL: Error or Severe messages? The entry you posted is only a warning.
    thanks
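    For reference, the db2diag tool can filter the log by severity, which makes it easy to check for anything more serious than the warning posted, along these lines:
    db2diag -level "Error,Severe"
    This prints only the Error and Severe records from db2diag.log.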

  • Fast growing Basis tables....

    Hi,
    I have been using Note 706478 to find measures against fast-growing Basis tables. I use this for R/3. Is there a similar note for BW and CRM as well? I searched OSS but could not come up with any solid note.
    Does anybody know about this? Your help will be appreciated.
    Thanks in advance to everybody.
    Thanks for the help.

    There are several factors to consider. For example:
    - Is the table a TimesTen only table or is it a cached Oracle table using AWT to push the inserts down to Oracle?
    - How long does the data need to be retained? Forever, for 1 hour, for 1 minute...
    If the table is an AWT cached table from Oracle then the inserted data ultimately ends up in Oracle. It is therefore likely that you can safely discard some data from TimesTen to prevent the datastore filling up. You can do this using the automatic aging feature in TT 7.0, or you can implement it as a periodic UNLOAD CACHE GROUP statement executed from a script or from application code.
    If the table is a TimesTen only table then in addition to the above you need to consider if you can just discard 'old' data or if you have to keep it somewhere. If you need to keep it then you will first need to copy it out of TT before it gets deleted. In this case aging is probably not a good solution and you should implement some application mechanism to copy the data somewhere else and then delete it. If you do not need to keep the data then aging may still be an option.
    In any event, you will want to give yourself as much headroom as possible by making the datastore as big as you can subject to available memory etc. If you use aging, you will likely have to configure very aggressive aging parameters in order to keep the table size under control. It is possible that aging may not be up to the job if the insert rate is extremely high in which case you may anyway need to implement some application based cleanup mechanism.
    Chris
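    As an illustration of the periodic UNLOAD CACHE GROUP approach Chris describes, a cron-driven cleanup could look roughly like this (the DSN, cache group name, timestamp column, and one-day retention window are all hypothetical):
    # Unload rows older than one day from the cache group; with AWT the data
    # has already been propagated to Oracle, so only the TimesTen copy is dropped:
    ttIsql -connStr "DSN=my_dsn" -e "UNLOAD CACHE GROUP logs_cg WHERE msg_time < SYSDATE - 1; quit;"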

  • Automatic archiving of db2diag.log

    Good Morning everyone,
    I have a question regarding the db2diag.log. After a couple of months this logfile can grow dramatically. To make root-cause analysis easier, I would like to archive the file every month and clean it up to keep a better overview.
    Is there a way to do this via DB2 (alert log archiving), or should I implement it with a scheduled batch job every month?
    Greetings
    Marco

    Hello Marco,
    You can archive the db2diag.log using the db2diag tool. You can specify the -A or -archive <dirname> option:
    http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/index.jsp?topic=/com.ibm.db2.luw.admin.cmd.doc/doc/r0011728.html
    -A | -archive dirName
    Archives a diagnostic log file. When this option is specified, all other options are ignored. If one or more file names are specified, each file is processed individually. A timestamp, in the format YYYY-MM-DD-hh.mm.ss, is appended to the file name.
    You can specify the name of the file and directory where it is to be archived. If the directory is not specified, the file is archived in the directory where the file is located and the directory name is extracted from the file name.
    If you specify a directory but no file name, the current directory is searched for the db2diag.log file. If found, the file will be archived in the specified directory. If the file is not found, the directory specified by the DIAGPATH configuration parameter is searched for the db2diag.log file. If found, it is archived in the directory specified.
    If you do not specify a file or a directory, the current directory is searched for the db2diag.log file. If found, it is archived in the current directory. If the file is not found, the directory specified by the DIAGPATH configuration parameter is searched for the db2diag.log file. If found, it is archived in the directory specified by the DIAGPATH configuration parameter.
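    So, for example, a monthly cron job along these lines would rotate the file (the archive directory is an assumption, and the cron user needs the instance environment so that db2diag is in the PATH):
    # Archive and truncate db2diag.log; db2diag appends a timestamp to the copy:
    db2diag -archive /db2/diag_archive
    # crontab entry for 00:05 on the first of each month:
    5 0 1 * * db2diag -archive /db2/diag_archive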
    Regards,
    Adam Wilson
    SAP development Support

  • Very fast growing STDERR# File

    Hi experts,
    I have stderr# files on two app-servers, which are growing very fast.
    Problem is, I can't open the files via ST11 as they are too big.
    Is there a guide that explains what this file is about and how I can manage it (reset, ...)?
    Could it be a locking log?
    As I have a few entries in SM21 about failed locking.
    I also can find entries about "call recv failed" and "comm error, cpic return code 020".
    Thx in advance

    Dear Christian,
    The stderr* files are used to record syslog and logon checks. When the system is up, only one of them should be in use; you can delete the others. For example, if stderr1 is being used, you can delete stderr0, stderr2, stderr3, and so on. Otherwise, only shutting down the application server will allow deletion. Once deleted, the files will be created again, and they only increase in size if the original issue causing the growth still exists; switching between the files is internal and is not controlled by size.
    Some causes of 'stderr4' growth:
    In the case of repeated input/output errors of a TemSe object (in particular in the background), large portions of trace information are written to stderr. This information is not necessary and not useful in this quantity.
    Please review carefully following Notes :
       48400 : Reorganization of TemSe and Spool
      (here delete old 'temse' objects)
    RSPO0041 (or RSPO1041), RSBTCDEL: To delete old TemSe objects
    RSPO1043 and RSTS0020 for the consistency check.
    1140307 : STDERR1 or STDERR3 becomes unusually large
    Please also run a Consistency Check of DB Tables as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. Select "Consistency check DB Tables" and click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
    5. If there are any inconsistencies reported then run the "Background
       Processing Analysis" (SM65 .. Goto ... Additional Tests) again.
       This time check both the "Consistency check DB Tables" and the
       "Remove Inconsistencies" option.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
    Make sure you run this SM65 check when the system is quiet and no other batch jobs are running as this would put a lock on the TBTCO table till it finishes.  This table may be needed by any other batch job that is running or scheduled to run at the time SM65 checks are running.
    Running these jobs daily should ensure that the stderr files do not increase at this rate in the future.
    If the system is running smoothly, these files should not grow very fast, because they mostly just record error information when an error happens.
    For more information about stderr please refer to the following note:
       12715: Collective note: problems with SCSA
              (the Note contains the information about what is in the  stderr and how it created).
    Regards,
    Abhishek

  • Backup fast growing table

    I have a table for XML messages in which one of the columns has size 4000; the table grows very fast (10 GB+/week). I also need to keep these messages via backup or some other means, but they need to be loadable into the DB easily if required to be online.
    The table can be written to at any time; there is no way to know when.
    Does anybody know what the common practice is for this kind of operation: where should I store the data, and how do I load it back into the DB, or do I need special tools? I have only 60 GB on my DB server.
    Thanks in advance

    Robert Geier wrote:
    This does not indicate what is growing.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_4149.htm
    "DBA_TAB_MODIFICATIONS describes modifications to all tables in the database that have been modified since the last time statistics were gathered on the tables. Its columns are the same as those in "ALL_TAB_MODIFICATIONS".
    Note:
    This view is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure."
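    A minimal sketch of what the quoted documentation describes (credentials, the connect string, and the top-10 cutoff are placeholders; requires the ANALYZE ANY privilege):
    # Flush the monitoring info, then list the most heavily modified tables:
    echo "EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;" | sqlplus -s system/manager@db1
    echo "SELECT * FROM (SELECT table_owner, table_name, inserts, updates, deletes FROM dba_tab_modifications ORDER BY inserts + updates + deletes DESC) WHERE ROWNUM <= 10;" | sqlplus -s system/manager@db1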
    Thank you.

  • LabVIEW/TestStand/PXI Engineering Architect Role in fast growing Semiconductor Services Company

    A reputed Semiconductor Services company is on the cusp of major growth due to recent Brand Recognition and happy customers. The company is looking for a capable, motivated senior engineer or developer who wants to take the next step toward technical/architecture leadership, team leadership and opportunities to make lasting impressions on customers in order to grow the business and themselves. Some questions to ask yourself before you apply:
    a) Do you have 2+ years of experience in LabVIEW/TestStand/PXI with a strong foundation in Electrical/Electronics/Communications/Computer Engineering?
    b) Do you feel that your technical skills in the LabVIEW/TestStand/PXI space have evolved to the level that you can punch above your weight compared to your number of years of experience? We are looking for go-getters who may have 2-3 years of experience but make a lasting impression on any customer and come across as having 4-5 years of experience because of your innate smartness, command of engineering/architectural concepts, communication skills and can-do attitude.
    c) Are you driven by a sense of integrity, respect for your colleagues and a strong team spirit?
    d) Do you believe that every meeting and deliverable to a customer is a vehicle for company and personal growth?
    e) Do you enter every project and opportunity with a view to ensuring customer delight and loyalty?
    f) Are you fearless about entering new allied technologies such as LabVIEW FPGA, Xilinx/Altera-based FPGA, microcontroller programming and system design?
    If the answer to these questions is yes, please email [email protected] with your most updated resume and prepare to embark on a career that will fuel your job satisfaction and growth in the years to come. A strong technical background in the areas mentioned is essential and will be tested.
    Company Information:
    Soliton Technologies Inc. is a value-driven engineering services company with over 15 years of experience and steady services growth in the semiconductor, automotive, biomedical, and test and measurement industries (www.solitontech.com). Soliton's services range from LabVIEW- and TestStand-based validation automation (often PXI-based), GUI frameworks, and embedded development services on NI embedded targets as well as microcontrollers, to high-speed FPGA design, enterprise software development on multiple programming platforms (C, C++, C#, .NET, Python, Java, HTML5, etc.) and vision-based test systems. The company has had a strong semiconductor focus over the past decade, with multiple Fortune 500 customers, steady growth, and a track record of customer retention.
    Compensation: Not a constraint for the right candidate.

    Hi,
    Kindly arrange an interview process.
    I have attached my resume
    Regards,
    Bharath Kumar

  • Fast growing tablespaces

    Hi Experts,
    The following tablespaces are consuming max free space.
    PSAPBTABD : 50% of the total space acquired, 77.5 GB
    PSAPBTABI : 38.5 GB                     
    PSAPCLUD : 15 GB          
    85 % of total space is consumed by these tablespaces.
    Tables with max growth of increasing are :
    BSIS, RFBLG, ACCTIT, ACCTCR, MSEG, RSEG   etc.
    Average increase of  2GB space per month.
    Kindly help me to find out the solution.
    Regards,
    Praveen Merugu

    Hi praveen,
    Greetings!
    I am not sure whether you are a Basis or functional person but, if you are Basis, you can discuss with your functional team selecting the archiving objects in line with your project. Normally, functional consultants will know which archiving object deletes entries from which tables... You can also search help.sap.com to identify the archiving objects.
    Once you have identified the archiving objects, you need to discuss your archiving plan with your business heads and key users. This is to fix the data retention period in the production system and to fix the archiving cycle for every year.
    Once these have been fixed, you can sit with the functional guys to create variants for the identified archiving objects. Use SARA and archive the concerned objects.
    Initiating an archiving project is a time-consuming task. It is better to start a separate mini-project to kick off the initial archiving plan. You can test the entire archiving phase in the QA system by copying the PRD client.
    The summary below will give you an idea of how to start the archiving project:
    1. Identify the tables which grow rapidly and their module.
    2. Identify the relevant archiving object which will delete the entries in the rapidly growing table.
    3. Prepare an archive server to store the archived data (get a 3rd-party archiving solution if possible). Remember, the old data should be retrievable from the archive server whenever the business needs it.
    4. Finalize the archiving cycle in line with your business need.
    5. Archive the objects using SARA.
    6. Reorganize the DB after archiving.
    Hope this gives you some idea of an archiving project.
    regards,
    VInodh.

  • Db2diag.log Severe msg db2fmp, common communication sqlccipcdarihandshake

    I am new to SAP and DB2 as a Netweaver Administrator on a new project. We do not have anyone who has experience with the following messages in db2diag.log.
    Is it normal to see lots of the following messages during a DB2 offline backup on the ECC server?
    2006-11-18-22.00.49.xxxxxx-480 I457336A288        LEVEL: Severe
    PID     : xxxxxx               TID  : 1           PROC : db2fmp (idle) 0
    INSTANCE: db2dss               NODE : 000
    FUNCTION: DB2 UDB, common communication, sqlccipcdarihandshake, probe:3
    RETCODE : ZRC=0x83000024=-2097151964
    There is a warning message before these messages:
    2006-11-18-22.00.49.422496-480 I456932A403        LEVEL: Warning
    PID     : 421914               TID  : 1           PROC : db2sysc 0
    INSTANCE: db2dss               NODE : 000
    FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
    MESSAGE : Bringing down all db2fmp processes as part of db2stop
    DATA #1 : Hexdump, 4 bytes
    0x0FFFFFFFFFFFE270 : 0000 0000                                ....
    Thanks a lot.

    Hi,
    I assume you have scheduled the offline backup in the SAP system.
    Because the application servers are permanently connected to the database, offline backups cannot be done without forcing off all connected applications. On SAP systems this is achieved by a db2stop force call before performing the offline backup.
    The call to db2stop force is the reason for the errors and warnings you see during an offline backup of a SAP system.
    As an alternative to offline backups you should evaluate the use of online backups, or online backups including log files.
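    For example, such an online backup to TSM, including the log files needed for a consistent restore, could be run like this (the database name DSS is inferred from the instance name shown above and may differ on your system):
    db2 backup db DSS online use tsm include logs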
    Best regards, Jens

  • Db2diag.log issue in DB2

    Hi All,
    We are repeatedly getting severe error messages in the DB2 database. Please advise on this.
    2014-08-07-10.45.03.030954+060 I14065240A184      LEVEL: Severe
    PID:10944994 TID:74889 NODE:000 Title: **** DRDA CMNMGR CB ****
    Dump File: /db2/fmp/db2dump/10944994.74889.000.dump.bin
    2014-08-07-10.45.03.032407+060 I14065425A181      LEVEL: Severe
    PID:10944994 TID:74889 NODE:000 Title: **** DSS SEGMENT ****
    Dump File: /db2/fmp/db2dump/10944994.74889.000.dump.bin
    2014-08-07-10.45.03.033169+060 I14065607A184      LEVEL: Severe
    PID:10944994 TID:74889 NODE:000 Title: **** RECEIVE BUFFER ****
    Dump File: /db2/fmp/db2dump/10944994.74889.000.dump.bin
    2014-08-07-10.45.03.033957+060 I14065792A238      LEVEL: Severe
    PID:10944994 TID:74889 NODE:000 Title: **** SEND BUFFERS ****
    Dump File: /db2/fmp/db2dump/10944994.74889.000.dump.bin
    Dump with error: Error 65706 occurred. ( rc = 0x100aa )
    1833 Aug 07 07:04 db2locktimeout.0.3856.2014-08-07-07-04-58
    -rw-r--r--    1 db2fmp   dbfmpadm     250773 Aug 07 07:05 10944994.221612.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1733457 Aug 07 07:05 10944994.279501.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm     512284 Aug 07 07:10 10944994.189348.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1666934 Aug 07 07:15 10944994.251255.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1456576 Aug 07 07:15 10944994.220776.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm     716526 Aug 07 07:15 10944994.186209.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm     226391 Aug 07 07:20 10944994.202921.000.dump.bin
    -rw-r-----    1 db2fmp   dbfmpadm       1833 Aug 07 07:20 db2locktimeout.0.3856.2014-08-07-07-20-03
    -rw-r--r--    1 db2fmp   dbfmpadm     427893 Aug 07 07:20 10944994.258979.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1250050 Aug 07 07:30 10944994.234475.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1188238 Aug 07 07:30 10944994.90388.000.dump.bin
    -rw-r--r--    1 db2fmp   dbfmpadm    1274291 Aug 07 07:35 10944994.259795.000.dump.bin
    Regards,
    Raja

    Hi,
    We are also getting the below error message.
    2014-08-07-10.55.01.768463+060 I14131030A1270     LEVEL: Severe
    PID     : 10944994             TID  : 186209      PROC : db2sysc 0
    INSTANCE: db2fcp               NODE : 000
    APPHDL  : 0-51945
    EDUID   : 186209               EDUNAME: db2agent (idle) 0
    FUNCTION: DB2 UDB, DRDA Communication Manager, sqljcReceive, probe:30
    MESSAGE : ZRC=0x81360010=-2127167472=SQLZ_RC_CMPARM, SQLT_SQLJC
              "CM parameter bad"
    DATA #1 : String, 11 bytes
    CCI Error:
    DATA #2 : unsigned integer, 8 bytes
    68
    CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
      [0] 0x090000000D21AFB0 pdLog + 0xE0
      [1] 0x090000000B9AA2DC sqljcLogCCIError__FP10sqljCmnMgrPCcP12SQLCC_COND_TUclN35 + 0x234
      [2] 0x090000000B9AB5AC sqljcLogCCIError__FP10sqljCmnMgrPCcP12SQLCC_COND_TUclN35@glueDAC + 0x78
      [3] 0x090000000A594B84 sqljcReceive__FP10sqljCmnMgr + 0x278
      [4] 0x090000000D2D859C @63@sqljsRecv__FP8sqeAgentP13sqljsDrdaAsCb + 0x100
      [5] 0x090000000D2D8370 @63@sqljsDrdaAsInnerDriver__FP18SQLCC_INITSTRUCT_Tb + 0x254
      [6] 0x090000000D2D7E60 sqljsDrdaAsDriver__FP18SQLCC_INITSTRUCT_T + 0xEC
      [7] 0x090000000D2540B0 RunEDU__8sqeAgentFv + 0x2F4
      [8] 0x090000000D2502EC EDUDriver__9sqzEDUObjFv + 0xE4
      [9] 0x090000000D25C4B0 sqloEDUEntry + 0x260
    Regards,
    Raj

  • System.log file super bloat!

    Tonight my hard disk suddenly started filling up rapidly on its own, and I traced the problem down to an amazingly fast-growing system.log file. It appeared to be corrupt, because the Console app didn't see it when I tried to read it there to find out what was filling it. It got up to 6 GB!
    I pulled the file out of there and things seem okay (I still have it in visible form on another drive). Is it a problem that my OS X drive now has no system.log file? Will it be recreated on its own eventually? Is there some way for me to replace it?
    Any ideas why it would have done that?
    Thanks for any education!
    -Dave

    Hi Tenfresh, you don't list your system specs so it's hard to tell but this may have been your issue:
    http://docs.info.apple.com/article.html?artnum=300893
    Yes, the system.log will be recreated, if it hasn't been already.
    -mj
    [email protected]

  • DB2 Log file Management is not happening properly

    Hi experts,
    I am running SAP ERP 6.0 EHP4 on DB2 9.5 FP4.
    I have enabled log archiving by setting the First Log Archive Method (LOGARCHMETH1) =DISK: /db2/PRD/log_archive
    I brought down SAP & the database and restarted the database. After starting the DB, I took a full offline backup.
    I didn't change the parameters below, as these are replaced by LOGARCHMETH1 in newer versions according to the documentation:
    Log retain for recovery enabled = OFF
    User exit for logging enabled = OFF
    /db2/PRD/log_dir (online transaction logs) is 25% full, but I couldn't find even a single file in /db2/PRD/log_archive.
    I suspect DB2 log file management is not working properly, as I couldn't find any offline transaction logs in the /db2/PRD/log_archive file system.
    Please let me know where it went wrong and what the issue could be.
    thanks

    Hello,
    From your post it seems that there is a space character between "DISK:" and the path you have provided.
    Maybe this is only a wrong display here in the forum.
    Nevertheless, can you check it in your system? It should rather look like DISK:/db2/PRD/log_archive.
    Can you initiate archiving with the "db2 archive log for db prd" command?
    What does the db2diag.log tell for this archiving attempt?
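    If the stray space really is the problem, a quick sequence to correct and test it, run as the instance owner, might look like this:
    # Re-set the parameter without the space after DISK:
    db2 update db cfg for PRD using LOGARCHMETH1 DISK:/db2/PRD/log_archive
    # Force an archive and check the outcome:
    db2 archive log for db PRD
    db2diag | grep -i archive | tail -5
    ls -lR /db2/PRD/log_archive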

  • Fast Refresh Materialized View - failure

    Okay, this issue currently has me stumped. I searched the world wide web and still cannot seem to identify the cause.
    Here is the scenario:
    I created a materialized view log for the source table (let's say table1) found at DB1.
    I then create a materialized view (fast refresh, with logging) at a remote database (let's say DB2). The query for the MV is basic: select * from schema1.table1@db1_link;
    I set the materialized view to refresh every 30 seconds (for testing purposes).
    This creates a dbms job that executes every 30 seconds. The materialized view gets created successfully. I purposely enter new data into table1. The materialized view log for table1 successfully captures these changes. However, the materialized view found at DB2 does not get refreshed (updated) successfully.
    In fact, the dbms job errors out. It keeps failing and failing; after 16 attempts, it gets marked as 'broken.'
    The error message is as such:
    RDBA WARNING: ORA-12012: error on auto execute of job 1472
    RDBA WARNING: ORA-00604: error occurred at recursive SQL level 3
    RDBA WARNING: ORA-01017: invalid username/password; logon denied
    RDBA WARNING: ORA-02063: preceding line from db1
    RDBA WARNING: ORA-06512: at "SYS.DBMS_SNAPSHOT", line 820
    RDBA WARNING: ORA-06512: at "SYS.DBMS_SNAPSHOT", line 877
    RDBA WARNING: ORA-06512: at "SYS.DBMS_IREFRESH", line 683
    RDBA WARNING: ORA-06512: at "SYS.DBMS_REFRESH", line 195
    RDBA WARNING: ORA-06512: at line 1
    The strange thing is that the error claims an invalid username/password, but I've used the db link successfully before and the username/password is not invalid. Has anyone encountered this issue before?
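    For reference, the setup described above boils down to something like this (object names are taken from the post; credentials and the MV owner are placeholders, and the 30-second interval is the test setting):
    # On DB1, as the owner of table1:
    echo "CREATE MATERIALIZED VIEW LOG ON schema1.table1;" | sqlplus -s schema1/pw@db1
    # On DB2, where db1_link points back to DB1 with stored credentials:
    echo "CREATE MATERIALIZED VIEW mv_table1 REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 30/86400 AS SELECT * FROM schema1.table1@db1_link;" | sqlplus -s mv_owner/pw@db2
    # If the refresh job then fails with ORA-01017, the credentials stored in
    # the db1_link definition are the first thing worth re-checking, since the
    # refresh logs in to DB1 over that link.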

    Justin,
    From reading earlier posts about issues with materialized views, I remember reading something about materialized views (fast refreshable) not working with sysdate. What did you mean by such a statement? (I would copy & paste it here, but I am unable to find it at the moment).
    The reason I am curious is that I am setting my MV to refresh every 30 seconds (for testing), and at that point it just starts failing, over and over.

  • Log File Issue In SQL server 2005 standard Edition

    We have a database of size 375 GB. The data file has 80 GB of free space within it. When we tried to rebuild the indexes, we had 450 GB of free space on the disk where the log file resides. The index rebuild failed due to a space issue; we added more space and got the job done successfully.
    The log file grew to 611 GB to complete the index rebuild.
    Version: SQL Server 2005 Standard Edition. Is there a way to estimate the space required for an index rebuild in this version?
    I am aware we normally allocate 1.5 times the size of the data file, but in this case that was totally wrong.
    Any suggestion with examples would be appreciated.
    Raghu

    OK, there's a few things here.
    Can you outline for everybody the recovery model you are using and the frequency with which you take full, differential and transaction log backups?
    Are you selectively rebuilding your indexes or are you rebuilding everything?
    How often are you doing this? Do you need to?
    There are some great resources on automated index maintenance; check out this post by Kendra Little.
    Depending on your recovery point objectives I would expect a production database to be in the full recovery model, and as part of this you need to be taking regular log backups, otherwise your log file will just continue to grow. Taking a log backup clears out information from inactive VLFs and therefore allows SQL Server to write back to those VLFs rather than having to grow the log file. This is a simplified version of events; there are caveats.
    A VLF will be marked as active if it still has an open transaction in it or there is a HA option that still requires that data to be available as that data has not been copied to another node yet.
    Most customers that I see take transaction log backups every 15 - 30 minutes, but this really does depend upon how much data your company can afford to lose. That's another discussion for another day.
    Make sure that you take a transaction log backup prior to your job that does your index rebuilds (hopefully a smart job not a sledge hammer job).
    As mentioned previously, swapping to bulk logged can help to reduce the amount of information logged during index rebuilds. If you do this, make sure to swap back into the full recovery model straight afterwards and perform a full backup. There are problems with the ability to do point-in-time restores whilst in the bulk logged recovery model, so you need to minimize the amount of time you use it.
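    To make that concrete, a rough sketch of the sequence around an index rebuild might look like this (the database name and backup paths are placeholders):
    # Take a log backup before the rebuild, optionally switch to bulk logged:
    sqlcmd -Q "BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_log.trn'"
    sqlcmd -Q "ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED"
    # ... run the index rebuild job here ...
    # Switch back to full recovery straight afterwards and take a full backup:
    sqlcmd -Q "ALTER DATABASE MyDB SET RECOVERY FULL"
    sqlcmd -Q "BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB_full.bak'"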
    Really you also need to look at how your indexes are created does the design of them lead to them being fragmented on a regular basis? Are they being used? Are there better indexes out there that can help performance?
    Hopefully that should put you on the right track.
    If you find this helpful, please mark the post as helpful,
    If you think this solves the problem, please propose or mark it as an answer.
    Please provide details on your SQL Server environment such as version and edition, also DDL statements for tables when posting T-SQL issues
    Richard Douglas
    My Blog: Http://SQL.RichardDouglas.co.uk
    Twitter: @SQLRich
