SCCM Database keeps growing

The ConfigMgr database has grown to 30 GB. I have to shrink it on a daily basis and it continues to grow. The Services_HIST table seems to grow by 2 GB every couple of days. Is there maintenance for this table that can be run to reduce the size? What else can I do to get the database under control?

I recently answered a question just like this in another forum:
http://www.myitforum.com/forums/ConfigMgr-Database-growing-m243369.aspx
There are many reasons why the DB will grow, and how big it should be depends on your setup:
How many clients do you have?
Exactly which options have you enabled: HW, AI, Power Management, CI, etc.?
Exactly which SU classifications and products have you enabled?
How often are you doing CI, SW, HW, Heartbeat, etc.?
How many deployments are you doing, and how often?
What is your Aged Inventory retention setting set to?
Etc.
As a rule of thumb, I'm currently using 6-10 GB for the base DB, plus (# of CM12 clients * 10-15 MB).
Now, as to why your Services history table is "large", that is a hard question to answer. A row gets added to the history table any time a PC's current inventory doesn't match the existing inventory. So if your history table is growing, the question should be why it is growing; for the most part it should be fairly stable.
Use these two queries to track down which computers or services are causing you the biggest headaches.
This query will tell you which service is changing the most:
Select HS.Displayname0, Count(*)
from dbo.v_HS_SERVICE HS
Group by HS.Displayname0
Order by 2 desc
This query will tell you which computers are showing the most service changes:
Select R.Netbios_Name0, Count(*)
from dbo.v_R_System R
join dbo.v_HS_SERVICE HS on R.ResourceID = HS.ResourceID
Group by R.Netbios_Name0
Order by 2 desc
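If you want to confirm which tables are actually consuming the space (and whether it really is Services_HIST), a quick read-only check against the SQL Server catalog can help. This is just a sketch, not ConfigMgr tooling; run it against the CM database with a read-only account. If the *_HIST tables dominate, also check that the Delete Aged Inventory History site maintenance task is enabled and that its retention period is sensible.
-- Largest tables in the current database by reserved space (read-only).
SELECT TOP 20
    t.name AS TableName,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count END) AS RowCnt,
    SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY ReservedMB DESC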
Garth Jones | My blogs: Enhansoft and Old Blog site | Twitter: @GarthMJ

Similar Messages

  • Mod_wl_ohs_0202.log keeps growing

    Grid Control 11.1.0.1.0 is installed on Red Hat 5.2. The repository database is Oracle 11.2.0.2 on the same Linux box.
    The file /u01/app/gc_inst/WebTierIH1/diagnostics/logs/OHS/ohs1/mod_wl_ohs_0202.log keeps growing; it reached 6.5 GB after 6 months. I renamed the file and created an empty mod_wl_ohs_0202.log, but the old file still gets written to. Not sure if I should remove the file.
    What is the best practice for managing this file so it does not grow too big?
    Thanks

    Please check this article ID in MOS:
    11G Grid Control Performance: Webtier log - mod_wl_ohs.log in the OMS Home is very Large in Size and not Rotated [ID 1271676.1]
    HTH

  • Time machine keeps growing in size or backing up a size far greater than it

    Apologies if this is covered elsewhere, but after hours of searching I still haven't been able to find a resolution that actually fixes my problem.
    When my Time Machine tries to back up, the size of the backup just keeps growing and growing - a problem which has been reported several times, but none of the solutions fix it for me. I've completely reset Time Machine (disconnected the external hard drive, deleted the preferences from ~/Library/Preferences, then set it up again with my exclusions) and the problem just reappeared.
    I've gone through this routine several times and in most cases had no joy. On one occasion I thought the problem had been resolved, as it did the initial backup and then a couple of hourly backups, but then it suddenly returned a message saying it had insufficient space (trying to back up 470 GB but the disk only has 300 GB available), which was just ridiculous since the full backup size was under 50 GB.
    I've verified the disk on several occasions and it always comes up clean. In a fit of desperation I've even tried reformatting the external drive, with no improvement.
    I'm at the point of deleting Time Machine from my laptop and going back to the old Backup program. Can anyone offer anything else I could try?

    Hi, and welcome to the forums.
    Have you Verified your internal HD (and Repaired any externals that are also being backed-up)?
    If so, it's probably something damaged or corrupted in your installation of OSX. I'd suggest downloading and installing the 10.6.4 "combo" update. That's the cleverly-named combination of all the updates to Snow Leopard since it was first released, so installing it should fix anything that's gone wrong since then, such as with one of the normal "point" updates. Info and download available at: http://support.apple.com/kb/DL1048 Be sure to do a Repair Permissions via Disk Utility (in your Applications/Utilities folder) afterwards.
    If that doesn't help, reinstall OSX from your Snow Leopard Install disc (that won't affect anything else), then apply the "combo" again.

  • Sparsebundle keeps growing rapidly

    I have 2 Macs with 500 GB drives backing up to a 2 TB Time Capsule, with about 250 GB of used space on each.  The sparse bundle for one keeps growing until it hogs all the space.  That sparse bundle now occupies 1.5 TB, will not allow further backups, and has deleted all but the most recent backup.  The other one stays at about 370 GB, but still contains several months of backups.  Why does one keep growing, and what can I do to correct this problem?
    Thanks.
    Ray R

    It sounds like more than the internal HD is getting backed-up.  Are there other drives in or connected to the "rogue" Mac?  
    See what it shows for Estimated size of full backup under the exclusions box in Time Machine Preferences > Options.  If that's larger than the 250 GB or so that's on the internal HD, exclude the other drive(s) you don't want backed-up.
    It's also possible that sparse bundle is damaged.   A clue may be lurking in your logs.  Use the widget in #A1 of Time Machine - Troubleshooting to display the backup messages from your logs.  Locate the most recent backup from that Mac, then copy and post all the messages here.
    Either way, your best bet might be to delete the sparse bundle for that Mac, per #Q5 in Using Time Machine with a Time Capsule (via Ethernet if at all possible, as that will take quite a while).
    The next backup of that Mac will, of course, be a full one (so do it via Ethernet, too).   Then see what the next one does.  If it's a full backup, see #D7 in Time Machine - Troubleshooting.

  • Oracle 11g flash recovery keeps growing

    The Oracle 11g flash recovery area (the E:\ partition on a Windows 2003 server) keeps growing. It has already used 200 GB of disk space and only 46 GB is left in that partition. Can we tell how often Oracle 11g deletes the old backups?
    Thanks.
    Andy

    andychow wrote:
    SQL> set linesize 1000
    SQL> select substr(name,1,30) name,
    2         space_limit,
    3         space_used,
    4         space_reclaimable,
    5         number_of_files
    6  from V$RECOVERY_FILE_DEST
    7  ;
    NAME                           SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
    E:\flash_recovery_area_11       1.0486E+11 1.6752E+10                 0               5
    SQL> select *
    2  from V$FLASH_RECOVERY_AREA_USAGE
    3  ;
    FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
    CONTROL FILE                          0                         0               0
    REDO LOG                              0                         0               0
    ARCHIVED LOG                        .09                         0               2
    BACKUP PIECE                      15.88                         0               3
    IMAGE COPY                            0                         0               0
    FLASHBACK LOG                         0                         0               0
    FOREIGN ARCHIVED LOG                  0                         0               0
    7 rows selected.
    SQL>
    It shows only 5 files in E:\flash_recovery_area_11. Can you verify?
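    Old backups in the FRA are only deleted when they become obsolete under the RMAN retention policy and space pressure forces Oracle to reclaim them; there is no separate schedule. As a rough, read-only check (assuming you can query the v$ views), you can look at the configured retention policy and the FRA size limit:
    -- Persistent RMAN settings recorded in the control file.
    -- No RETENTION POLICY row means the default (REDUNDANCY 1) is in effect.
    select name, value
    from   v$rman_configuration
    where  name like 'RETENTION POLICY%';
    -- Configured FRA location and size limit (bytes)
    select name, value
    from   v$parameter
    where  name in ('db_recovery_file_dest', 'db_recovery_file_dest_size');
    If SPACE_RECLAIMABLE stays at 0, as in the output above, nothing is obsolete yet, so the FRA will keep growing until the retention policy marks something as reclaimable or obsolete backups are deleted from RMAN.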

  • Index size keeps growing while table size unchanged

    Hi Guys,
    I've got some simple, standard B-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has stayed unchanged for years.
    The base tables are working tables with DML activity and roughly the same number of records daily.
    I've analysed the schema in the test environment.
    Those indexes do not fulfil the criteria for rebuild, i.e.:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the size of the index be reduced after a rebuild?
    Grateful if someone can give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608
    We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    The index size is still the same, but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680
    Now the index did not get bigger, because it could use the free blocks for the new rows.
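    To relate this back to your case: an index whose keys always increase (sequence numbers, dates) keeps adding new leaf blocks on the right-hand side, while blocks that still hold a few old, undeleted entries cannot be reused for the new key range, so the segment can grow even though the table row count stays flat. A coalesce or rebuild will compact it, and a rebuild will also shrink the segment. As a rough way to see how much deleted space an index is carrying (on a test system; VALIDATE STRUCTURE locks the table while it runs, and INDEX_STATS only holds the result of your last analyze):
    analyze index tind validate structure;
    select name, height, blocks, lf_rows, del_lf_rows,
           round(del_lf_rows / nullif(lf_rows, 0) * 100, 1) as pct_deleted
    from   index_stats;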

  • Is there any API for pushing update patch related data from qualysguard in SCCM database?

    The problem is that my company wants to integrate SCCM with QualysGuard. Qualys will scan for missing patches and generate a patch report. From this patch report the required data will be extracted and pushed into the SCCM database. I have sorted out the issue of extracting data from Qualys, but I am stuck at the point of pushing patch data into the update repository. I tried searching for an API which could push data into the SCCM database, but was unable to find one. I thought of making my own script to run the SQL query, but this will ultimately screw up the SCCM database since there may be 60-70 table dependencies.
    Please suggest any SCCM API which can help me push data into its database (particularly into the tables interacting with the update repository).

    We had looked into doing something similar, and this post is the closest we found.
    https://community.qualys.com/thread/11816
    Basically you will need a middle man between Qualys and ConfigMgr to house the data. This may be a new database or a whole separate platform. I expect this could easily be done with SQL and SSRS.
    Also note that direct edits to the ConfigMgr database are not supported by Microsoft, so I would recommend using a central system to pull data from both Qualys and ConfigMgr without modifying either.
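    As a very rough sketch of that middle-man idea (the staging table and its columns are hypothetical, CM_XXX is a placeholder for your site database name, and v_UpdateInfo is one of the read-only ConfigMgr reporting views), you can land the Qualys export in your own database and correlate it by KB article without ever writing into the ConfigMgr DB:
    -- Hypothetical staging table in a separate integration database for the Qualys export
    create table dbo.QualysMissingPatch (
        HostName   nvarchar(255),
        QID        int,
        ArticleID  nvarchar(64),  -- KB number reported by Qualys
        ReportedOn datetime
    );
    -- Read-only correlation against the ConfigMgr reporting views (nothing in ConfigMgr is modified)
    select q.HostName, q.ArticleID, ui.Title
    from   dbo.QualysMissingPatch q
    join   CM_XXX.dbo.v_UpdateInfo ui on ui.ArticleID = q.ArticleID;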
    Daniel Ratliff | http://www.PotentEngineer.com

  • Help!! database keeps crashing

    I've been working on this problem for 4 months. The database keeps crashing randomly. Sometimes it doesn't produce any error in the alert log, and sometimes it gives me multiple ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [unable_to_trans_pc] [PC:0x77FCB491] [ADDR:0x38] [UNABLE_TO_WRITE] [].
    I'm starting to think it is Backup Exec 11d using RMAN that is causing the problem. I have also heard that running RMAN with Backup Exec can strain the database.
    What else can I look at?
    Is ADDM a good method to use to diagnose this problem, and how do I access it?

    Refer to Metalink note 466370.1.
    Snippet: Symptoms
    These errors were encountered and the DB crashed:
    ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [unable_to_trans_pc]
    [PC:0x7C34126B] [ADDR:0x0] [UNABLE_TO_WRITE] []
    ORA-27300: OS system dependent operation:CreateThread failed with status: 8
    ORA-27301: OS failure message: Not enough storage is available to process this command.
    ORA-27302: failure occurred at: ssthrddcr.
    You may observe this in the trace file:
    "Current SQL information unavailable - no SGA."
    Changes
    This could be triggered by hardware issues or an increase in volume or by day to day operations.
    Cause
    Insufficient memory
    Solution
    Ensure that the existing memory is functioning properly.
    If there is no hardware issue, then you have simply run out of available memory and you need to purchase more.
    Check your OS log for hardware errors.
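    Before buying more memory, it can also be worth a quick read-only look at how much memory the instance is actually configured to use, and comparing that with the physical RAM on the Windows box. This is just a diagnostic sketch, not part of the note above:
    select name, display_value
    from   v$parameter
    where  name in ('memory_target', 'sga_target', 'sga_max_size', 'pga_aggregate_target');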

  • Mac OS X Lion installation gives negative times that keep growing

    Why does my Mac OS X Lion installation show negative times that keep growing?
    Please help; I've already tried to install more than 10 times and waited until it said -2 hours, but the problem still persists.

    On the bottom of your MBP should be a serial number.  Enter it here and post back the model/year information:
    https://selfsolve.apple.com/agreementWarrantyDynamic.do
    Do you already have Lion installed, or are you trying to install it to replace another version of OSX? If you are replacing, which OSX is currently installed?
    Ciao.

  • I have a problem with my iPad, the "other" part of the capacity summary keeps growing like ****

    Dear all: I have a problem with my iPad - the "other" part of the capacity summary keeps growing like ****. Every time I sync, it gets bigger and bigger. What is this "other", and what do I do to keep it from getting bigger? Thanks to all!

    Marcos,
    I did not try your trick; I didn't see your post until after I visited the Apple Store.  The tech at the Genius Bar did not know why my "Other" memory had grown so huge.  It was over 24 GB, which only left me with 1 GB free on my 32 GB iPad.  I had been loading and unloading videos yesterday through iTunes to try various compression settings.  I did not notice the Other category growing until I tried to load a 3 GB movie file and it told me I was out of memory.
    I probably loaded and unloaded 30 different times, trying different compression settings.  In the process of doing this, it apparently loaded up the "Other" stuff.  To me it seems like the Recycle Bin in Windows, but no one knows how to free it up.
    So here is what he did, and it worked.  He reset the iPad: press and hold both the Sleep/Wake button and the Home button for at least ten seconds, until the Apple logo appears.  Then he hooked it up to iTunes on his computer.  Fortunately I have everything backed up on my home computer.  When my iPad showed up in iTunes at the store, it showed the 24 GB of "Other" momentarily, then it reduced to about 0.7 GB, giving me 24.5 GB free.  Resetting it seemed to do the trick, but he still couldn't tell me how this is happening or how to prevent it.
    I think I'll try to call someone at applecare to get an explanation, hopefully.

  • Error message repairing library database keeps appearing in iPhoto in iMac

    Why does the error message "repairing library database" keep appearing?

    What version of iPhoto? Assuming 09 or later...
    Option 1
    Back Up and try rebuild the library: hold down the command and option (or alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose to Rebuild iPhoto Library Database from automatic backup.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. This will create a new library based on data in the albumdata.xml file. Not everything will be brought over - no slideshows, books or calendars, for instance - but it should get all your albums and keywords back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.
    Regards
    TD

  • Proxy 4 - Cache size keeps growing

    I may have a wrong cache setting somewhere, but I can't find it. I am running Proxy 4.0.2 (for windows).
    Under Cache settings, I have "Cache Size" set to 800MB. Under "Cache Capacity" I have it set to 1GB (500 MB-2GB).
    The problem is my physical cache size on the hard drive keeps growing and growing and is starting to fill the partition on the hard drive. At last count, the "cache" directory on the hard drive which holds the cache files is now using 5.7GB of space and still growing.
    Am I misunderstanding something? I thought the maximum physical size would be a lot lower and would stop at a given size, but the cache directory on the hard drive is now close to 6 GB and still growing day by day. When is it going to stop growing, or how do I stop it and put a cap on the physical size it can grow to on the hard drive?
    Thanks

    Until 4.03 is out, you can use this script..
    Warning: experimental, run this on a copy of cache first to make sure that it works as you want it.
    The first argument is the size in MB that you want to remove.
    I assume your cache dir is "./cache"; if it is not, then change the variable $cachedir to the correct value.
    ==============cut-here==========
    #!/bin/perl
    use strict;
    use File::stat;
    my $cachedir = "./cache";
    my $gc_size;   # bytes still to reclaim
    my $verbose = 0;
    # Delete one cache file; stop as soon as enough space has been reclaimed.
    sub gc_file {
        my $file = shift;
        my $sb = stat($file);
        $gc_size -= $sb->size;
        unlink $file;
        print "$gc_size more after $file\n" if $verbose;
        exit 0 if $gc_size < 0;
    }
    # Walk the cache sections and garbage-collect files until the target size is reached.
    sub main {
        my $size = shift;
        $gc_size = $size * 1024 * 1024;   # argument is in MB's
        opendir(DIR, $cachedir) || die "can't opendir $cachedir: $!";
        my @sects = grep {/^s[0-9]\.[0-9]{2}$/} readdir(DIR);
        closedir DIR;
        foreach my $sect (@sects) {
            chomp $sect;
            opendir (CDIR, "$cachedir/$sect") || die "can't opendir $cachedir/$sect: $!";
            my @ssects = grep {/^[A-F0-9]{2}$/} readdir(CDIR);
            closedir CDIR;
            foreach my $ssect (@ssects) {
                chomp $ssect;
                opendir (SCDIR, "$cachedir/$sect/$ssect") || die "can't opendir $cachedir/$sect/$ssect: $!";
                my @files = grep {/^[A-Z0-9]{16}$/} readdir(SCDIR);
                closedir SCDIR;
                foreach my $file (@files) {
                    gc_file "$cachedir/$sect/$ssect/$file";
                }
            }
        }
    }
    main $ARGV[0] if $ARGV[0];
    =============cut-end==========
    On your second problem, the easiest way to recover a corrupted partition is to list out the sections in that partition and delete those sections that seem like odd ones,
    eg:
    $ls ./cache
    s4.00 s4.01 s4.02 s4.03 s4.04 s4.05 s4.06 s4.07 s4.08 s4.09 s4.10 s4.11 s4.12 s4.13 s4.14 s4.15 s0.00
    Here s0.00 is the odd one out, so remove the s0.00 section. Also keep an eye on the relative sizes of the sections: if the section to be removed is larger than the rest of the sections combined, you might not want to remove it.
    WARNING: anything you do, do on a copy

  • Oracle Text Context index keeps growing. Optimize seems not to be working

    Hi,
    In my application I needed to search through many varchar columns from different tables.
    So I created a materialized view in which I concatenate those columns; since they exceed 4000 characters I merged them by concatenating the columns as TO_CLOB(column1) || TO_CLOB(column2) ... || TO_CLOB(columnN).
    The query is complex, so the view is refreshed with a complete refresh on demand. We refresh it every 2 minutes.
    The CONTEXT index is created with the sync on commit parameter.
    The index is therefore synchronized every two minutes.
    But when we run the optimize index it does not defragment the index, so it keeps growing.
    Any ideas?
    Thanks, and sorry for my poor English.
    Edited by: detryo on 14-mar-2011 11:06

    What are you using to determine that the index is fragmented? Can you post a reproducible test case? Please see my test of what you described below, showing that the optimization does defragment the index.
    SCOTT@orcl_11gR2> -- table:
    SCOTT@orcl_11gR2> create table test_tab
      2    (col1  varchar2 (10),
      3       col2  varchar2 (10))
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- materialized view:
    SCOTT@orcl_11gR2> create materialized view test_mv3
      2  as
      3  select to_clob (col1) || to_clob (col2) clob_col
      4  from   test_tab
      5  /
    Materialized view created.
    SCOTT@orcl_11gR2> -- index with sync(on commit):
    SCOTT@orcl_11gR2> create index test_idx
      2  on test_mv3 (clob_col)
      3  indextype is ctxsys.context
      4  parameters ('sync (on commit)')
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- inserts, commits, refreshes:
    SCOTT@orcl_11gR2> insert into test_tab values ('a', 'b')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> insert into test_tab values ('c a', 'b d')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> commit
      2  /
    Commit complete.
    SCOTT@orcl_11gR2> exec dbms_mview.refresh ('TEST_MV3')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- query works:
    SCOTT@orcl_11gR2> select * from test_mv3
      2  where  contains (clob_col, 'ab') > 0
      3  /
    CLOB_COL
    ab
    c ab d
    2 rows selected.
    SCOTT@orcl_11gR2> -- fragmented index:
    SCOTT@orcl_11gR2> column token_text format a15
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        1          1           1
    AB                        2          3           2
    C                         3          3           1
    3 rows selected.
    SCOTT@orcl_11gR2> -- optimization:
    SCOTT@orcl_11gR2> exec ctx_ddl.optimize_index ('TEST_IDX', 'REBUILD')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- defragmented index after optimization:
    SCOTT@orcl_11gR2> select token_text, token_first, token_last, token_count
      2  from   dr$test_idx$i
      3  /
    TOKEN_TEXT      TOKEN_FIRST TOKEN_LAST TOKEN_COUNT
    AB                        2          3           2
    C                         3          3           1
    2 rows selected.
    SCOTT@orcl_11gR2>
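    If you want harder numbers than eyeballing the DR$...$I table, CTX_REPORT can produce a fragmentation report for the index. A minimal sketch (INDEX_STATS is a procedure that fills a CLOB, and it can take a while on a large index):
    set serveroutput on
    declare
      x clob := null;
    begin
      ctx_report.index_stats('TEST_IDX', x);
      dbms_output.put_line(substr(x, 1, 4000));
      dbms_lob.freetemporary(x);
    end;
    /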

  • Amount being backed up by Time Machine keeps growing during full backup

    After some problems with Time Machine backup, I removed a bunch of stuff from my internal hard drive, reformatted the external HD used for backups, and started a new full backup.  Time Machine has been running for about 24 hours and gives me a status of Backing up: xx GB of yy GB.  Both xx and yy keep growing (apace), so I don't believe xx will ever equal yy.
    Too cryptic?  Here are real numbers.  My internal disk being backed up contains 469.39 GB.  The backup status now says Backing up: 685.78 GB of 754.42 GB.  Both numbers just keep growing.  What's going on here?  I'm happy to let Time Machine run overnight (again) but, really, this can't be normal.

    Start up in Recovery mode, launch Disk Utility, select the startup volume ("Macintosh HD," unless you gave it a different name), and run Repair Disk (not Repair Permissions.) If any problems are found, repeat. Then restart as usual.
    If you don't already have a current backup, you must back up your data before you take the above step. You may be able to back up, even if the system isn't fully functional. Ask if you need guidance.
    Directory corruption in a MacOS journaled volume is always the result of a drive malfunction. It's not caused by power failures, system crashes, or anything else. You might choose to tolerate such a malfunction once in the life of a drive. If it's repeated, the drive must be replaced, or there is some other hardware fault that needs to be corrected. Ignoring repeated directory errors will result in data loss.

  • Extending the SCCM Database to have a custom table

    We have almost completed the Windows 7 migration process in our organisation using SCCM; however, a new requirement has now been put forward. Our organisation has a few large branch offices, but the rest, which forms the majority of the estate, is made up of around 400 subnets (i.e. tiny buildings with 3-4 machines at most at each site). The requirement is that each machine at these 400-odd sites should have an environment variable/registry value set to reflect the name of that building.
    Setting an environment variable sounds like the way forward. But if it is to be processed as part of SCCM inventory (for later reporting), should the DB be extended to accommodate this extra information, and how?
    The next challenge is how to configure this environment variable on all the rolled-out Windows 7 machines. Any ideas? The machines still to be built can be accommodated by changing the build process to include a custom application that selects the building name and sets a TS variable which can be used later in the task sequence (thanks to OSD++, Jason Sandys). But what about those already in the estate?
    Thanks in advance.
    Best regards

    Thanks Jörgen. Will look into it.
    The Win32_Environment class is already enabled in the client policy, so presumably it will pick up the new environment variable during the subsequent inventory scan? Do I need to do anything extra on the inventory side of things?
    Thinking in another direction, if I had a list (CSV) of subnet to "building name" mappings, would that come in handy in achieving the goal? This is where I was wondering whether the SCCM database can be extended so that we have a mapping from the subnet table to this extra information.
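    For reference, once Win32_Environment is being inventoried, the variable normally lands in the hardware inventory views and can be reported on directly, without extending the database yourself. A rough sketch, assuming the class surfaces as the v_GS_ENVIRONMENT view and the variable is named BUILDINGNAME (both names are assumptions to adapt to your site):
    select R.Netbios_Name0,
           E.VariableValue0 as BuildingName
    from   dbo.v_R_System R
    join   dbo.v_GS_ENVIRONMENT E on E.ResourceID = R.ResourceID
    where  E.Name0 = 'BUILDINGNAME'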
