WSUS catalogue size, location and log file

Hi All,
Quick question: where is the catalogue stored on the WSUS server? I am trying to find the size of the latest update synchronization that the clients use, and the total size of the catalogue. Is this possible? Our clients caused an issue by pulling a 35MB catalogue file from the server; does that seem the correct size for an SCCM/WSUS delta update?
I had a look at the WSUS-related logs (change and software distribution) but couldn't find the right info. It was a long day, so I could simply have missed it in those log files; any help or a pointer in the right direction would be great.
Many thanks

Hi,
Have you installed SCCM with WSUS? If yes, to get better help, please post your question on the SCCM forum:
https://social.technet.microsoft.com/Forums/systemcenter/en-US/home
Best Regards.
Steven Lee
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
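As a side note: WSUS keeps its update metadata (the catalogue) in the SUSDB database, hosted either on SQL Server or on the Windows Internal Database, while the update binaries themselves live under the WsusContent folder. A rough check of the total metadata size is possible against SUSDB; this is only a sketch, assuming you can connect to the WSUS database instance (e.g. with sqlcmd or SQL Server Management Studio):
USE SUSDB;
EXEC sp_spaceused;  -- reports the database's total reserved and used size
The size of an individual client's delta sync is not exposed this way, so the SCCM forum remains the right place for that part of the question.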

Similar Messages

  • Tmp file locations and Log file locations

     

    I have been having a real headache too trying to get WebLogic to put all its
    log files and temporary files in directories that I specify. It seems that
    WebLogic has a mind of its own as files get created all over the place.
    Trying to configure these really basic settings has proved extremely
    awkward. Why is it such a nightmare to do?
    "Scott Jones" <[email protected]> wrote in message
    news:3af0179d$[email protected]..
    OK, I changed the relative path for the log files.
    1. I am still getting app-startup.log and app0000.tlog in the root
    directory and not in the ./logs directory. Any other settings?
    2. I still do not know how to redirect the tmp_ejbdomain.port directory.
    Any suggestions?
    Scott
    "Sanjeev Chopra" <[email protected]> wrote in message
    news:3aef0a42$[email protected]..
    "Scott Jones" <[email protected]> wrote in message
    news:3aef05be$[email protected]..
    I have a domain configured and running with two applications. WLS 6 is
    placing the following logs for each application at the same directory
    level as the config directory. It is also creating a tmp_ejb directory
    at the same level.
    1. How do I tell WLS 6 to place log files in a different directory?
    In the Admin Console: modify the property Server -> Configuration -> Logging -> FileName.
    In config.xml: the 'FileName' attribute can be set to an absolute path OR
    a path relative to Server.RootDirectory:
    <Server EnabledForDomainLog="true" ListenAddress="localhost"
            ListenPort="7701" Name="managed"
            StdoutDebugEnabled="true" ThreadPoolSize="15">
      <Log FileCount="10" FileMinSize="50" FileName="managed.log"
           Name="managed" NumberOfFilesLimited="true"
           RotationType="bySize"/>
    </Server>
    2. How do I tell WLS 6 to place tmp_ejb directories in a different
    directory?
    Thanks,
    Scott

  • Process Flow ignores name and location for control and log files

    Hi!
    Our OWB Version is 10.1.0.3.0 - DB Version 9.2.0.7.0 - OWF Version 2.6.2
    Clients and server are running on Windows. Database contains target schemas as well as OWB Design and Runtime, plus OWF repositories. The source files to load reside on the same server as the database.
    I have for example a SQL*Loader Mapping MAP_TEXT which loads one flat file "text.dat" into a table stg_text.
    The mapping MAP_TEXT is well configured and runs perfectly, i.e. the control file "text.ctl" is generated to location LOC_CTL, the flat file "text.dat" is read from another location LOC_DATA, the bad file "text.bad" is written to LOC_BAD, and the log file "text.log" is placed into LOC_LOG. All locations are registered in the runtime repository.
    When I integrate this mapping into a Workflow process PF_TEXT, only LOC_DATA and LOC_BAD are used. After deploying PF_TEXT, I executed it and found out that the control and log files are placed into the directory <OWB_HOME>\owb\temp and get generic names <Mapping Name>.ctl and <Mapping Name>.log (in this case MAP_TEXT.ctl and MAP_TEXT.log).
    How can I get OWB to execute the process flow using the locations configured for the mapping inside it?
    Does anyone have a helpful idea?
    Thx,
    Johann.

    I didn't expect to be the only one to encounter this misbehaviour of OWB.
    Meanwhile I found out what the problem is, and had to accept that it is what it is!
    There is no solution for it until the Paris release.
    Bug no. 3099551 at Oracle MetaLink addresses this issue.
    Regards,
    Johann Lodina.

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format data and log files with 64k cluster size for sql server 2012?
    Does this best practice still apply to SQL Server 2012 and 2014?

    Yes.  The extent size of SQL Server data files, and the max log block size have not changed with the new versions, so the guidance should remain the same.
    Microsoft SQL Server Storage Engine PM

  • Change the Data and Log file locations in livecache

    Hi
    We have installed liveCache on Unix systems under the /sapdb mount directory, where the installer created the sapdata and sapdblog directories. But the Unix team had already created two mount points, as follows:
    /sapdb/LC1/lvcdata and /sapdb/LC1/lvclog.
    While installing liveCache we had selected these locations for creating the DATA and LOG volumes. Now they are asking us to move the DATA and LOG volumes created in the sapdata and saplog directories to these mount points. How do we move the data and log files and keep the database consistent? Is there any procedure to move the files to the mount point directories and change the liveCache pointers to these locations?
    regards
    bala

    Hi Lars
    Thanks for the link. I will try it and let you know.
    But this is a liveCache database (even though it uses MaxDB) which was created by sapinst. Moreover, is there anything to be adjusted in SCM, or any modification to be done at the DB level?
    regards
    bala

  • 2005 database and log file locations

    Is there a SQL query to list where exactly the database and log files reside for all databases on an instance (sql server 2005)?

    You can query the information from the DMV
    sys.master_files (Transact-SQL)
    SELECT *
    FROM [sys].[master_files]
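    For a more readable result, here is a sketch that pulls each file's database, logical name, physical path and size (sys.master_files stores size in 8KB pages; all columns referenced below exist in that view):
    SELECT DB_NAME(database_id) AS database_name,
           name AS logical_name,
           type_desc,
           physical_name,
           size * 8 / 1024 AS size_mb
    FROM sys.master_files
    ORDER BY database_name;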
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

  • How to increase the size of Redo log files?

    Hi All,
    I have 10g R2 RAC on RHEL. As of now, I have 3 redo log files of 50MB each. I have used the redo log size advisor, by setting fast_start_mttr_target=1800, to check the optimal size of the redo logs; it is showing 400MB. Now I want to increase the size of the redo log files. How do I increase it?
    If we are supposed to do it on production, how should we proceed?
    I found the following in one of the articles:
    "The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, it must be balanced out with the expected recovery time. Undersized log files increase checkpoint activity and increase CPU usage."
    I did not understand the point "however, it must be balanced out with the expected recovery time" in the paragraph above.
    Can anybody help me?
    Thanks,
    Praveen.

    You don't have to shut down the database before dropping a redo log group, but make sure you have at least two other redo log groups. Also note that you cannot drop an active redo log group.
    Here is nice link,
    http://www.idevelopment.info/data/Oracle/DBA_tips/Database_Administration/DBA_34.shtml
    And make sure you test this in test database first. Production should be touched only after you are really comfortable with this procedure.
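    Since this is RAC, new groups are added per thread. A minimal sketch (the group numbers, thread numbers and file paths below are illustrative assumptions; adapt them to your own layout):
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 400M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 400M;
    -- switch until the old 50MB groups become INACTIVE, then drop them
    ALTER SYSTEM SWITCH LOGFILE;
    SELECT group#, thread#, status FROM v$log;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    The dropped group's files can then be removed at the OS level, and the remaining old groups are handled the same way.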

  • How to design SQL server data file and log file growth

    How should one design SQL Server data file and log file growth (SQL Server 2012)?
    If my data file is 10 GB and my log file is 5 GB, what should the autogrowth size be in MB (not in %)? Based on what do we determine the ideal file autogrowth size?

    It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
    The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1GB on the data file(s) and 512MB on the log file? The important thing is to monitor it on an ongoing basis to see if that's the appropriate amount.
    One thing you should do is enable instant file initialization, by granting the service account "Perform volume maintenance tasks" in group policy. This will allow the data files to grow quickly when required; details here:
    https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
    Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
    SELECT [DatabaseName],
           [FileName],
           [SPID],
           [Duration],
           [StartTime],
           [EndTime],
           CASE [EventClass]
                WHEN 92 THEN 'Data'
                WHEN 93 THEN 'Log'
           END AS [FileType]
    FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
    WHERE EventClass IN (92, 93);
    hope that helps
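    To apply the suggested values, here is a sketch of setting fixed-size autogrowth (the database name and logical file names are assumptions; look yours up in sys.master_files):
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 1024MB);
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_Log',  FILEGROWTH = 512MB);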

  • Why size of archive log file increasing in merge clause

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement; it is still running.
    He will issue a commit after the operation.
    In that period the redo log files are growing.
    My question is: why is the size of the archive log files increasing along with the redo log files?
    I thought the archive log files should only be generated after a commit (maybe that is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    my database is running in archive log mode.
    someone is running oracle merge statement. still it is running.
    He will issue commit after the operation.
    in that period redolog file increasing now.
    my question is why size of archive log file increasing with redolog file.
    i know that after commit archive log file should generate.(may be it is wrong).
    No, it is not correct that the archive log is generated only after a commit. A MERGE statement causes inserts (if the data is not already present) or updates (if it is). Obviously these operations will generate lots of redo if the amount of data being processed is high.
    If you feel that this operation is causing excessive redo, then a root cause analysis should be done.
    For that, use LogMiner (an excellent tool to provide a segment-level breakdown of redo size). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with the current redo change.
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check if there are unwanted indexes on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but see its implications).
    Hope this helps
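    A minimal LogMiner sketch, run in one session (the archived log file path is an illustrative assumption; DBMS_LOGMNR and V$LOGMNR_CONTENTS are the documented interfaces):
    EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/1_123_456789.arc', DBMS_LOGMNR.NEW);
    EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- rough per-segment breakdown: count redo records per segment
    SELECT seg_owner, seg_name, COUNT(*) AS redo_entries
    FROM v$logmnr_contents
    GROUP BY seg_owner, seg_name
    ORDER BY redo_entries DESC;
    EXECUTE DBMS_LOGMNR.END_LOGMNR;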

  • Private strand flush not complete: how to find the optimal size of the redo log files

    hi,
    I am using Oracle 10.2.0 on a Unix system and getting "Private strand flush not complete" in the alert log file. I know this happens when a checkpoint has not completed.
    I need to increase the size of the redo log files or add a new group to the database. I have "log file switch (checkpoint incomplete)" in the top 5 wait events.
    I can't change any database parameters. I have three redo log groups and the log files are 250MB each. I want to know a suitable size to avoid the problem.
    select * from v$instance_recovery;
    RECOVERY_ESTIMATED_IOS            625
    ACTUAL_REDO_BLKS                  9286
    TARGET_REDO_BLKS                  9999
    LOG_FILE_SIZE_REDO_BLKS           921600
    LOG_CHKPT_TIMEOUT_REDO_BLKS       (null)
    LOG_CHKPT_INTERVAL_REDO_BLKS      9999
    FAST_START_IO_TARGET_REDO_BLKS    (null)
    TARGET_MTTR                       0
    ESTIMATED_MTTR                    9
    CKPT_BLOCK_WRITES                 112166207
    OPTIMAL_LOGFILE_SIZE              (null)
    ESTD_CLUSTER_AVAILABLE_TIME       (null)
    WRITES_MTTR                       0
    WRITES_LOGFILE_SIZE               0
    WRITES_LOG_CHECKPOINT_SETTINGS    219270206
    WRITES_OTHER_SETTINGS             0
    WRITES_AUTOTUNE                   3331591
    WRITES_FULL_THREAD_CKPT           5707793
    Please suggest, or tell me the way to find out, a suitable size to avoid the problem.
    thanks
    umesh

    How often should a database archive its logs
    Re: Redo log size increase and performance
    Please read the above threads and the great replies by HJR sir. If you wish to build concept knowledge, you should add them to your notes.
    "If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control."
    Source: http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm#19559
    Please also see ML Doc 274264.1 (REDO LOGS SIZING ADVISORY) for tips on calculating the optimal size for redo logs in 10g databases.
    Source:Re: Redo Log Size in R12
    HTH
    Girish Sharma
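    As a concrete illustration of the advice above: once FAST_START_MTTR_TARGET is set, the advisor's recommendation can be read directly from the view already queried. A sketch (OPTIMAL_LOGFILE_SIZE is reported in megabytes, and stays null while the parameter is unset, as in the output above):
    SELECT optimal_logfile_size FROM v$instance_recovery;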

  • Location of Log file

    Hi,
    Could anyone help me find the location of the log file in Crystal Reports Server XI? I need information about Crystal Reports execution times, parameters used, etc. Anything related to Crystal Reports is fine with me.
    Thanks in advance.
    -Vijay Kanth
    91- 9036431531

    Here are instructions to enable the trace and where it can be found.
    1199303 - How to trace Crystal Report Print Engine errors in BusinessObjects Enterprise XI and Crystal Enterprise    
    Symptom
    How do I trace Crystal Report Print Engine errors in BusinessObjects Enterprise XI and Crystal Enterprise?
    Cause
    Various error messages appear and poor behavior occurs when viewing Crystal Reports on demand or scheduling reports within Enterprise XI and Crystal Enterprise. Advanced logging techniques can reveal the causes of these issues.
    Resolution
    To help troubleshoot issues, add -crpetrace 7 and -trace to the command line parameters for the Crystal Reports Job Server or the Crystal Reports Page Server.
    The Crystal Reports Job Server handles scheduled reports, and the Crystal Reports Page Server processes on-demand reports.
    Here are the steps:
    1. Click Start > Programs > BusinessObjects 11 > BusinessObjects Enterprise > Central Configuration Manager.
    2. Right-click the server requiring the advanced logging. Click Stop.
    3. Right-click the server. Click Properties.
    4. Go to the end of the Command field.
    5. Press the spacebar once. Type "-crpetrace 7" and "-trace".
    6. Click OK.
    7. Right-click the server. Click Start.
    Advanced logging is now enabled.
    The default logging folder path is: <installation directory>:\Program Files\Business Objects\BusinessObjects Enterprise 11\Logging\.
    The CRPE log files will be named similar to the following:
    pageserver_20071108_193611_5008_crpe_bkgrnd.log
    pageserver_20071108_193611_5008_crpe_Diagnostics.log
    pageserver_20071108_193611_5008_crpe_functions.log
    ====================
    NOTE:
    These parameters provide extensive logging. Every call to the Crystal Report Print Engine will be logged. If a support case is still required, it will be helpful to attach the results of this trace to facilitate the diagnosis and expedite a solution.
    ====================

  • How to limit the size of a log file.

    Hi,
    I am developing an application that creates a log file of all actions that take place. My problem is that this log file grows a lot, because the application acts as a server that is up 24 hours a day. For that reason I want to limit the size of the log file by deleting the older lines. How can I implement this?
    Thanks.

    One way is to periodically (say every week) create a new file to store the actions and then, when the space taken up by all the files is too large, delete the oldest one.
    You can also have your application periodically (say the first time you write to the file on any calendar day) check the size of the file and, when it gets too big, copy all of the actions in the file after a given time (presumably there is an associated timestamp with each action) to a new file. Then delete the old file.

  • Change location of log file from pkg, due to logrotate problem?

    Hello, I am now the maintainer of the bacula packages in the AUR: http://aur.archlinux.org/packages.php?ID=4510
    The package ships with a logrotate file which points to /usr/var/bacula/working/log as the log file. However, logrotate gives:
    error: bacula:14 olddir /var/log/archive and log file /usr/var/bacula/working/log are on different devices
    error: found error in file bacula, skipping
    invalid password file entry
    delete line ''? No
    invalid password file entry
    delete line ''? No
    pwck: no changes
    The solution to this is to change the log path to /var/log/bacula. My question is whether I should leave the log path as it is and have the user change it, or whether I should patch both the config file and the syslog file to contain /var/log/bacula?
    Thanks,
    ~pyther
    Last edited by pyther (2009-01-27 22:41:06)

    It was talking about a trace that will be placed under your background dump destination, as you would get using:
    alter database backup controlfile to trace;
    Check your background dump destination, where your alert log is.

  • Steps to move Data and Log file for clustered SQL Server

    Hi guys 
    We have an Active/Passive SQL 2008 R2 cluster environment.
    Looking for steps to move the data and log files of the user databases and system databases for a SQL Server clustered instance.
    Currently the data and log files reside on the same drive for the user and system databases.
    Thanks
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Try the below link
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/468de435-3432-45c2-a50b-23519cd2686e/moving-the-system-databases-in-a-sql-cluster?forum=sqldisasterrecovery
    -Prashanth
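    For a user database, a minimal sketch of the usual move procedure (the database name, logical file name and target path are illustrative assumptions, and on a cluster the target drive must be a clustered disk in the SQL Server resource group; system databases need the procedure in the link above):
    ALTER DATABASE [MyDb] MODIFY FILE (NAME = N'MyDb_Log', FILENAME = N'L:\SQLLogs\MyDb_Log.ldf');
    ALTER DATABASE [MyDb] SET OFFLINE;
    -- copy the physical file to the new location at the OS level, then:
    ALTER DATABASE [MyDb] SET ONLINE;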

  • Shell Script to grep Job File name and Log File name from crontab -l

    Hello,
    I am new to shell scripting. I need to write a shell script where I can grep the name of the job file, i.e. the .sh file, and the log file from crontab -l.
    #51 18 * * * /home/oracle/refresh/refresh_ug634.sh > /home/oracle/refresh/refresh_ug634.sh.log 2>&1
    #40 17 * * * /home/oracle/refresh/refresh_ux634.sh > /home/oracle/refresh/refresh_ux634.sh.log 2>&1
    In crontab -l there are many jobs; I need to grep the job name, like 'refresh_ug634.sh', and the corresponding log name, like 'refresh_ug634.sh.log'.
    I am thinking of making a universal script that can grep the job name, log name and hostname for one server.
    Then, suppose I modify the refresh_ug634.sh script: it would call that universal script and echo those values when the script gets executed.
    Please can anyone help.
    All I need to do is have a footer in all the scripts running in crontab on one server:
    job file name
    log file name
    hostname
    Please suggest if there is any better solution. Thanks.

    957704 wrote:
    I need help on how to grep that information from crontab -l.
    Please can you provide some insight on how to grep the shell script name from the list of crontab -l jobs?
    crontab -l > cron.log -- exporting the contents to a file
    cat cron.log | grep something -- need some commands to grep that info
    You are missing the point. This forum is for discussion of SQL and PL/SQL questions. What does your question have to do with SQL or PL/SQL?
    It's like you just walked into a hardware store and asked where they keep the fresh produce.
    I will point out one thing about your question. You are assuming every entry in the crontab has exactly the same format. Consider this crontab:
    #=========================================================================
    # NOTE:  If this is on a clustered environment, all changes to this crontab
    #         must be replicated on all other nodes of the cluster!
    # minute        (0 thru 59)
    # hour          (0 thru 23)
    # day-of-month  (1 thru 31)
    # month         (1 thru 12)
    # weekday       (0 thru 6, sunday thru saturday)
    # command
    #=========================================================================
    00 01 1-2 * 1,3,5,7 /u01/scripts/myscript01  5 orcl  dev
    00 04 * * * /u01/scripts/myscript02 hr 365 >/u01/logs/myscript2.lis
    00 6 * * * /u01/scripts/myscript03  >/u01/logs/myscript3.lis
    The variations are endless.
    When you get to an appropriate forum (this one is not it), it will be helpful to explain your business requirement, not just your proposed technical solution.
