SQL 2008 R2 - Mirror sys.master_files data file size not updating

Hi,
I have an odd issue where the primary server is SQL 2008 R2 SP1 and the secondary is SQL 2008 R2 SP2, with databases mirroring between them. We recently moved the log file on the secondary to a different drive. The physical file sizes match and grow in step on both servers, and the mirroring monitor shows mirroring is successful and everything is up to date. However, when we check sys.master_files on the mirror, the recorded size of the data file has not updated since we moved the log file. We tried restarting SQL Server, but no joy.
Anyone have any ideas?
Thanks
Rob

Hi,
I would say wait for some time and it should get reflected; I have seen sys.master_files being updated late. For example, when you delete a large amount of data, the rows are gone immediately but the space they occupied is released slowly by the internal ghost cleanup operation, and only after that operation completes is the sys.master_files catalog updated. I suspect something similar is happening here, but this is pure speculation. I will stick to this thread; please watch it for a day and then update again.
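In the meantime, one way to keep an eye on what the mirror's catalog reports is a query along these lines (a minimal sketch; the database name is a placeholder, and sys.master_files stores size in 8 KB pages):

SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       type_desc,
       physical_name,
       size * 8 / 1024      AS size_mb      -- size is stored in 8 KB pages
FROM sys.master_files
WHERE DB_NAME(database_id) = N'YourMirroredDb'
ORDER BY type_desc, name;

Running the same query on the principal and comparing size_mb with the files' actual size on disk should show whether only the mirror's catalog entry is stale.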

Similar Messages

  • Estimated File Size not updating

    I have a project in vers. 5.5 that I am exporting.
    I am saving as a Windows Media file and am going from 1080 to 720. File size is critical, but the Estimated File Size in the encoder seems to get stuck. For instance, if I halve the frame rate I see no difference in file size. I would think there would be a significant reduction in estimated file size if half the frames are being tossed. Am I not understanding something about how this works?

    There are only two things that significantly affect file size - duration and bitrate.  Changing other specs may affect quality, but those two determine file size.
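    As a rough worked example (numbers assumed for illustration only): a 10-minute clip encoded at an average of 5 Mbit/s works out to about 5,000,000 bits/s × 600 s ÷ 8 ≈ 375 MB, whatever the frame rate or frame size; halving the frame rate only changes how those bits are spent.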

  • SQL*Loader-510: Physical record in data file (clob_table.ldr) is long

    If I generate a loader / insert script from Raptor, it does not work for CLOB columns.
    I am getting error:
    SQL*Loader-510: Physical record in data file (clob_table.ldr) is longer than the maximum (1048576)
    What's the solution?
    Regards,

    Hi,
    Has the file somehow been changed by copying it between Windows and Unix? Or was the file transfer done as binary rather than ASCII? The most common cause of this problem is that the end-of-line carriage return characters have been changed so they are no longer \r\n; could this have happened? Can you open the file in a good editor, or run an od command in Unix, to see what is actually present?
    Regards,
    Harry
    http://dbaharrison.blogspot.co.uk/
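    Separately from the line-ending question, if the CLOB values genuinely are longer than the maximum physical record, a common workaround is to keep each CLOB in its own file and reference it with a LOBFILE clause rather than embedding the value in the .ldr record. A minimal sketch (table, column, and file names here are invented for illustration, not taken from the generated script):
    LOAD DATA
    INFILE 'clob_table.ldr'
    APPEND INTO TABLE clob_table
    FIELDS TERMINATED BY ','
    (
      id             CHAR(10),
      clob_filename  FILLER CHAR(240),                       -- per-row path to the CLOB's file
      clob_col       LOBFILE(clob_filename) TERMINATED BY EOF
    )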

  • Sql loader maximum data file size..?

    Hi - I wrote a SQL*Loader script that runs through a shell script and imports data into a table from a CSV file. The CSV file is around 700 MB. I am using Oracle 10g on Sun Solaris 5.
    My question is: is there any maximum data file size? The following is the code from my shell script.
    SQLLDR=
    DB_USER=
    DB_PASS=
    DB_SID=
    controlFile=
    dataFile=
    logFileName=
    badFile=
    ${SQLLDR} userid=$DB_USER"/"$DB_PASS"@"$DB_SID \
              control=$controlFile \
              data=$dataFile \
              log=$logFileName \
              bad=$badFile \
              direct=true \
              silent=all \
              errors=5000
    Here is my control file code:
    LOAD DATA
    APPEND
    INTO TABLE KEY_HISTORY_TBL
    WHEN OLD_KEY <> ''
    AND NEW_KEY <> ''
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    (
            OLD_KEY  "LTRIM(RTRIM(:OLD_KEY))",
            NEW_KEY  "LTRIM(RTRIM(:NEW_KEY))",
            SYS_DATE "SYSTIMESTAMP",
            STATUS   CONSTANT 'C'
    )
    Thanks,
    -Soma

    Hello Soma.
    How many records exist in your 700 MB CSV file? How many do you expect to process in 10 minutes? You may want to consider performing a set of simple unit tests with 1) 1 record, 2) 1,000 records, 3) 100 MB filesize, etc. to #1 validate that your shell script and control file syntax function as expected (including the writing of log files, etc.), and #2 gauge how long the processing will take for the full file.
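    A rough way to carve those test extracts out of the full CSV before pointing dataFile= at them might be (a sketch only; the file names are placeholders):
    #!/bin/sh
    FULL=key_history_full.csv                # hypothetical name for the 700 MB file
    head -n 1    "$FULL" > test_1row.csv     # single record
    head -n 1000 "$FULL" > test_1k.csv       # 1,000 records
    # time each sqlldr run against the extracts before attempting the full load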
    Hope this helps,
    Luke

  • Negative data file size

    RDBMS: Oracle 10g R2
    When I execute the statement below to determine the size of the data files, the size reported for the data file DATA8B.ORA is negative. Why? Shortly before, I dropped a table with 114,000,000 rows in this tablespace.
    FILE_NAME                                           FILE_SIZE      USED   PCT_USED      FREE
    G:\DIL\DATA5D.ORA                                        4096   3840.06      93.75    255.94
    Total tablespace DATA5--------------------------->      16384  14728.24      89.9    1655.76
    I:\DIL\DATA6A.ORA                                        4096   3520.06      85.94    575.94
    I:\DIL\DATA6B.ORA                                        4096   3456.06      84.38    639.94
    I:\DIL\DATA6C.ORA                                        4096   3520.06      85.94    575.94
    I:\DIL\DATA6D.ORA                                        4096   3520.06      85.94    575.94
    Total tablespace DATA6--------------------------->      16384  14016.24      85.53   2367.76
    G:\DIL\DATA7A.ORA                                        4096   3664.06      89.45    431.94
    G:\DIL\DATA7B.ORA                                        4096   3720.06      90.82    375.94
    G:\DIL\DATA7C.ORA                                        4096   3656.06      89.26    439.94
    G:\DIL\DATA7D.ORA                                        4096   3728.06      91.02    367.94
    G:\DIL\DATA7E.ORA                                        4096   3728.06      91.02    367.94
    Total tablespace DATA7--------------------------->      20480  18496.3       90.3    1983.7
    G:\DIL\DATA8A.ORA                                        3500   2880.06      82.29    619.94
    G:\DIL\DATA8B.ORA                                        4000  -2879.69     -71.99   6879.69
    Total tablespace DATA8--------------------------->       7500      0.37       5.14   7499.63

    the query is:
    select substr(decode(grouping(b.file_name),
                         1,
                         decode(grouping(b.tablespace_name),
                                1,
                                rpad('TOTAL:', 48, '=') || '>>',
                                rpad('Total tablespace ' || b.tablespace_name, 49, '-') || '>'),
                         b.file_name),
                  1,
                  50) file_name,
           sum(round(kbytes_alloc / 1024, 2)) file_size,
           sum(round((kbytes_alloc - nvl(kbytes_free, 0)) / 1024, 2)) used,
           decode(grouping(b.file_name),
                  1,
                  decode(grouping(b.tablespace_name),
                         1,
                         sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbs, 2)),
                         sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100 / b.nbtbsfile, 2))),
                  sum(round(((kbytes_alloc - nvl(kbytes_free, 0)) / kbytes_alloc) * 100, 2))) pct_used,
           sum(round(nvl(kbytes_free, 0) / 1024, 2)) free
      from (select sum(bytes) / 1024 kbytes_free,
                   max(bytes) / 1024 largest,
                   tablespace_name,
                   file_id
              from sys.dba_free_space
             group by tablespace_name, file_id) a,
           (select sum(bytes) / 1024 kbytes_alloc,
                   tablespace_name,
                   file_id,
                   file_name,
                   count(*) over (partition by tablespace_name) nbtbsfile,
                   count(distinct tablespace_name) over () nbtbs
              from sys.dba_data_files
             group by tablespace_name, file_id, file_name) b
     where a.tablespace_name(+) = b.tablespace_name
       and a.file_id(+) = b.file_id
     group by rollup(b.tablespace_name, file_name);
    The same negative data file size shows up in Database Control as well...

  • S1000 Data file size limit is reached in statement

    I am new to Java and was given the task to troubleshoot a Java application that was written a few years ago and is no longer supported. The Java application creates database files in the user's directory: diwdb.properties, diwdb.data, diwdb.lproperties, diwdb.script. The purpose of the application is to open a zip file and insert the files into a table in the database.
    The values that are populated in the diwdb.properties file are as follows:
    #HSQL Database Engine
    #Wed Jan 30 08:55:05 GMT 2013
    hsqldb.script_format=0
    runtime.gc_interval=0
    sql.enforce_strict_size=false
    hsqldb.cache_size_scale=8
    readonly=false
    hsqldb.nio_data_file=true
    hsqldb.cache_scale=14
    version=1.8.0
    hsqldb.default_table_type=memory
    hsqldb.cache_file_scale=1
    hsqldb.log_size=200
    modified=yes
    hsqldb.cache_version=1.7.0
    hsqldb.original_version=1.8.0
    hsqldb.compatible_version=1.8.0
    Once the database file gets to 2 GB it brings up the error message 'S1000 Data file size limit is reached in statement (Insert into <tablename>......
    From searching on the internet it appeared that the parameter hsqldb.cache_file_scale needed to be increased, and 8 was a suggested value.
    I have the distribution files (.jar and .jnlp) that are used to run the application, and I have a source directory that was found that contains Java files, but I do not see any properties files to set any parameters. I was able to load both directories into NetBeans but really don't know if the files can be rebuilt for distribution, as I'm not clear on what I'm doing and NetBeans shows errors in some of the directories.
    I have also tried adding parameters to the startup URL: http://uknt117.uk.infores.com/DIW/DIW.jnlp?hsqldb.large_data=true?hsqldb.cache_file_scale=8 but that does not affect the application.
    I have been struggling with this for quite some time. Would greatly appreciate any assistance to help resolve this.
    Thanks!

    Thanks! But where would I run the sql statement. When anyone launches the application it creates the database files in their user directory. How would I connect to the database after that to execute the statement?
    I see the create table statements in the files I have pulled into NetBeans in both the source folder and the distribution folder. Could I add the statement there before the table is created in the jar file in the distribution folder and then re-compile it for distribution? OR would I need to add it to the file in source directory and recompile those to create a new distribution?
    Thanks!
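    One possible way to run such a statement without rebuilding the application is a small stand-alone JDBC program executed while the application is closed. This is only a sketch under several assumptions: the HSQLDB 1.8 jar is on the classpath, the default sa user with an empty password is in use, and the diwdb files sit in the user's home directory; also note that, depending on the HSQLDB version, hsqldb.cache_file_scale may only be changeable while the cached tables are empty, otherwise the property has to be edited in diwdb.properties while the database is shut down.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RaiseCacheFileScale {
        public static void main(String[] args) throws Exception {
            Class.forName("org.hsqldb.jdbcDriver");
            // Points at the diwdb.* files in the user's home directory (assumption).
            String url = "jdbc:hsqldb:file:" + System.getProperty("user.home") + "/diwdb";
            Connection con = DriverManager.getConnection(url, "sa", "");
            Statement st = con.createStatement();
            st.execute("SET PROPERTY \"hsqldb.cache_file_scale\" 8");
            st.execute("SHUTDOWN");   // checkpoint so the new setting is persisted
            st.close();
            con.close();
        }
    }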

  • MDT 0xc0000098 The windows boot configuration data file does not contain a valid OS entry

    Hello,
    I installed WDS + MDT for OS deployment and everything was working fine.
    Now I need our branch office to be able to deploy OSes as well.
    I installed a new Windows 2008 R2 server and installed the WDS role on it.
    When reading about this on the internet I saw that, on the MDT server at our main office, I need to create a link under "Advanced Configuration" - "Linked Deployment Shares" to the new WDS server (I used this: http://www.balm.se/?p=115).
    When doing so I entered the UNC path to the new server and then, on the created link, I right-clicked and chose "Replicate now".
    After doing so, I tried booting a laptop just to see if there was any impact, and now I am getting this error:
    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
    1. Insert your Windows installation disc and restart your computer
    2. Choose your language, and then click "Next"
    3. Click "Repair your computer"
    If you do not have this disc, contact your system administrator or computer manufacturer for assistance.
    File: \Boot\BCD
    Status: 0xc0000098
    Info: The Windows Boot Configuration Data file does not contain a valid OS entry.
    I know that usually when I get this message it means a previous MDT installation was started and did not finish correctly, leaving leftovers on the HD, so I erase the partition, start over, and everything works fine.
    This time it is not working. I have tried several computers, even new ones, and nothing.
    Does anyone know the cause?
    I also recall reading somewhere (I don't remember where) that doing the steps I did alters some INI or XML file, or something like that. Is that true? Does anyone know which file I need to look at to see if something is misconfigured?
    Thanks a lot for your help

    For a branch solution I would use DFS(R) rather than linked deployment shares.
    If there is infrastructure at the remote sites, you can keep the MDT environment at your main site and then, at each other site, use an existing server or build a new one (a VM is fine). Create a deployment share on each server and copy the contents of the MDT environment from the first site into that share. Next, add the WDS role to each site's server and add the boot image to WDS. Of course, as Keith noted, ensure that you have the same versions of MDT and the WAIK/ADK on all systems.
    You can use DFS to replicate to all of the other sites, but you will still have to create a deployment share and install WDS at each site. So that each site talks to its own WDS server, you can use the DefaultGateway section in CustomSettings.ini instead -
    [Settings]
    Priority=Default, DefaultGateway
    [Default]
    SkipBDDWelcome=YES
    [DefaultGateway]
    192.168.10.1=Bergen
    192.168.20.1=Oslo
    [Bergen]
    DeployRoot=\\MDT-Bergen\DeploymentShare$
    [Oslo]
    DeployRoot=\\MDT-Oslo\DeploymentShare$

  • Jpeg attachments appearing as a .dat file and not .jpeg files

    Please can someone tell me how I can change jpeg attachments that are coming through on emails on my new iPhone 5 as a .dat file and not as a .jpeg file.
    These .dat files cannot be opened - I assume that this will happen with PDF files as well.  
    Thanks

    I am surprised that at this point no one has chimed in, seeing how many people have tried this in the past. I actually solved the whole clusters issue, but now I have discovered another problem that is making me throw in the towel on this one, as I just can't explain it.
    I am finding that for some silly reason the code will work in one VI but won't work in another. See my photos below. In the one where the code works, I created the whole thing as a subroutine inside a greater VI which I cannot publish due to proprietary code. Next I tried pulling out the functioning part of the code and putting it in a stand-alone file; see the photo where the code doesn't work. I have also run the same failing code with both 1 and 2 variables, and each time MATLAB can't open the result. I am using the same "to MAT" subVI in each of these. For some reason, when I run the code in the working picture it runs and I get a nice little MAT file that I can open in MATLAB. When I run the same exact code as a standalone I get the MATLAB error code every time.
    I have tried reviewing every possible variable I can think of and worked with them. In both of the new VIs I made, the code doesn't want to give me a valid MAT file; however, in the greater program it does! I am really stumped at this point. So unless I can find a better convert-to-MATLAB-file code I don't think this will work.
    Attachments:
    where_code works.png ‏14 KB
    where code fails.png ‏15 KB
    matlab_error_code.png ‏23 KB

  • Impact of data file size on DB performance

    Hi,
    I have a general query regarding size of data files.
    Considering DB performance, which of the two options below is better?
    1. Bigger data files but fewer of them (e.g. 2 files of 8 GB each)
    2. Smaller data files but more of them (e.g. 8 files of 2 GB each)
    I am working on a DB where I have noticed very high I/O.
    I understand there might be many reasons for this.
    However, I am checking whether DB performance can be improved by optimizing data file sizes (including the TEMP/UNDO tablespaces).
    Kindly share your experiences with determining optimal file size.
    Please let me know in case you need any DB statistics.
    Few details are as follows:
    OS: Solaris 10
    Oracle: 10gR2
    DB Size: 80G (Approx)
    Data Files: UserData - 6 (15G each), UNDO - 2 (8G each), TEMP - 2 (4G each)
    Thanks,
    Ullhas

    Ullhas wrote:
    > I have a general query regarding size of data files.
    > Considering DB performance, which of the below 2 options are better?
    Size or number really does not matter, assuming other variables are constant. More files result in more open file handles, but at your database size it matters not.
    > I am working on a DB where I have noticed very high I/O.
    > I understand there might be many reasons for this.
    > However, I am checking for the possibility to improve DB performance through optimizing data file sizes. (Including TEMP/UNDO tablespaces.)
    Remember this when tuning I/O: the fastest I/O is the one that never takes place! High I/O may very well be a symptom of unnecessary full table scans or poor execution plans. Validate this first before tuning I/O and you will be much better off.
    Regards,
    Greg Rahn
    http://structureddata.org
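    As a starting point for that validation, something like the following against V$SQL can show which statements are driving the physical reads (a sketch only; the threshold is arbitrary, and the columns are from the 10gR2 dictionary view):
    SELECT sql_id,
           executions,
           disk_reads,
           buffer_gets,
           ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
    FROM   v$sql
    WHERE  disk_reads > 100000          -- arbitrary cut-off; adjust for your workload
    ORDER  BY disk_reads DESC;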

  • Maximum Data file size in 10g,11g

    DB Versions:10g, 11g
    OS & versions: Aix 6.1, Sun OS 5.9, Solaris 10
    This is what Oracle 11g Documentation
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/limits002.htm
    says about the maximum data file size:
    "Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4 M blocks."
    I don't understand what this 2^22 thing is.
    In our AIX machine and ulimit command show
    $ ulimit -a
    time(seconds)        unlimited
    file(blocks)         unlimited  <-------------------------------------------
    data(kbytes)         unlimited
    stack(kbytes)        4194304
    memory(kbytes)       unlimited
    coredump(blocks)     unlimited
    nofiles(descriptors) unlimited
    threads(per process) unlimited
    processes(per user)  unlimited
    So this means that in AIX both the OS and Oracle can create a data file of any size. Right?
    What about 10g, 11g DBs running on Sun OS 5.9 and Solaris 10 ? Is there any Limit on the data file size?

    > How do I determine the maximum number of blocks for an OS?
    df -g would give you the block size. The OS block size is 512 bytes on AIX.
    > Let's say the db_block_size is 8K. What would the maximum file size for a data file in a smallfile tablespace and a bigfile tablespace be?
    Smallfile (traditional) tablespaces - a smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks - 32 GB with 8K blocks.
    A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks.
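    Those figures follow directly from the block-count limits; the arithmetic can be checked with a quick query (a sketch, assuming 8K and 32K block sizes):
    SELECT POWER(2, 22) * 8192  / POWER(1024, 3) AS smallfile_8k_gb,   -- ~32 GB per datafile
           POWER(2, 32) * 8192  / POWER(1024, 4) AS bigfile_8k_tb,     -- ~32 TB
           POWER(2, 32) * 32768 / POWER(1024, 4) AS bigfile_32k_tb     -- ~128 TB
    FROM dual;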
    HTH
    -Anantha

  • SAP Data File size considerably reduced after Unicode Conversion

    Hello Experts
    I have just performed a CUUC (upgrade along with Unicode conversion) from R/3 4.7 to ECC 6.0 EHP5. The data size I had earlier was close to 463 GB (MSSQL MDF and LDF files); after the data export for Unicode conversion the size was 45 GB (about 10% of the actual data size, which I believe is normal for a heterogeneous system copy), but after the import the DATA file size is only 247 GB. Is this normal, or have I lost some data? For example, I checked tables like MSEG and the number of entries has dropped from 15,678,790 to 15,290,545.
    Could you kindly let me know if there is a way to check, from a Basis perspective, that I have not lost any data? I have followed all the procedures as per SAP standards.
    Waiting for your quick reply.
    Best Regards
    Pritish

    Hi Nicholas
    That data is compressed during the new R3load procedure is understood, but why the number of table entries has been reduced (and in some cases increased) is still a question to me. For example:
    Table    Source     Target
    STPO     415,725    412,150
    STKO     126,710    126,141
    PLAF      74,671     78,336
    MDKP     193,487    192,747
    MDPB      55,329     63,557
    Any suggestions or ideas ?
    Best Regards
    Pritish

  • Essbase cube data file size

    Hi,
    Why am I seeing different numbers for my data file size in EAS > database > edit > properties > storage versus a full database export?
    Thanks,
    KK

    Tim, in all seriousness, I am not a stalker. Honestly. You just post about things that interest me/we share some of the same skills. Alternatively, Glenn stalks me, I stalk you, it's time for you to become a sociopath too and stalk someone else.
    Okay, with that insanity out of the way, the other thing that could have a big impact on export size is whether the OP did a level-zero or input-level export as opposed to a full export. In BSO databases in particular, that can lead to a very, very large set of .PAG (and to some extent .IND) files and a rather small export file size, as the calculated values aren't getting written out.
    If the export is done through EAS I think the default is level zero.
    Regards,
    Cameron Lackpour
    Bugger, the OP wrote "full database export". Okay, never mind, I am a terrible stalker. I agree with everything Tim wrote re compression.
    In an effort to add something useful: if you use the new-ish database archive feature in MaxL, you essentially get the .PAG and .IND files combined into a single binary file. I used to be a big fan of full restructures, clears, and reloads to do defrags, but now I go with the restructure command. Despite the fact that it's single-threaded, in my limited testing (only done at one client) it was faster than the export-all, clear, reload approach. If you combine that with the archive mode you might have a better approach.
    Regards,
    Cameron Lackpour
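    For reference, the two kinds of export being contrasted above look roughly like this in MaxL (a sketch only; the application/database name and file paths are placeholders):
    /* full export: includes upper-level, calculated values */
    export database Sample.Basic all data to data_file 'full_export.txt';
    /* level-zero export: only the stored level-0 cells */
    export database Sample.Basic level0 data to data_file 'lev0_export.txt';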

  • Music files are not updating in nokia n-79.

    I added MP3 files to my Nokia N79, but the files are not updating in the music library. When I refresh manually it just stops, saying "0 files added", yet when I play the songs in the file manager they play fine. Please help me out with this ASAP.

    Try the procedure mentioned here:
    /t5/Nseries-and-S60-Smartphones/Can-t-refresh-N97-music-library/ta-p/948327

  • How to avoid db_recover "file size not a multiple of the pagesize" error?

    I am writing some backup and recovery scripts for a Berkeley DB application. I'm using the Oracle supplied db_hotbackup.exe and db_recover.exe executables (DB 4.5.20). My very first recovery unit test has resulted in the following error from db_recover:
    "file size not a multiple of the pagesize"
    What is the cause of this error, and how do I create my backup files to avoid this problem?

    This error message indicates that a file that should be made up of pages of a certain size is not a multiple of that size. For example, if a database file configured with 4KB pages was found in a 10KB file, you would get this error message.
    Can you check that however the scripts manipulate the database files, they don't add or remove any bytes from the end?
    Regards,
    Michael Cahill, Oracle.
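    One way to sanity-check the copied files against the page size (a sketch of the check suggested above; the 4096-byte page size and the backup path are assumptions; db_stat -d <file> reports the actual page size, and stat -c is the GNU coreutils form):
    #!/bin/sh
    PAGESIZE=4096                          # assumption: confirm with db_stat -d
    for f in /backups/bdb/*.db; do         # hypothetical backup location
        size=`stat -c %s "$f"`             # GNU stat; use stat -f %z on BSD
        if [ `expr $size % $PAGESIZE` -ne 0 ]; then
            echo "$f is not a multiple of the page size ($size bytes)"
        fi
    done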

  • When Firefox opens my MSN homepage is out of date and will not update

    When Firefox (v4) opens, my homepage is out of date and will not update to the current date, even when reloaded.

    I'd recommend you delete the URL to your homepage which is causing the problem via Tools | Options | General and then go to the MSN homepage again and click the button "Use Current Page".
    MSN may have changed the coding in the URL which is causing Firefox to load an outdated page with your current settings.

Maybe you are looking for

  • How to create Nested (Multi level ) tag in XML using DBMS_XMLQUERY function

    Hi, I need the following output in a CLOB column. XML format like: <?xml version="1.0" encoding="UTF-8"?> <ReceiptHeader> <Id>1234556</Id> <Type>DD</Type> <Receipts> <ReceiptDEO> <StoreId>11380001</StoreId> <EmployeeId>NOLO980</EmployeeId> <LineItems> <R

  • SAP Graphics

    Hello; I am planning to use graphics-related function modules to display data. Is it possible to display two different data sets in a single graphic? That is, I would have one x-axis and two y-axes at different ends of the graphic. Thanks in advance, Ali

  • ADF Edit Form with column spacing

    Hello, I have an ADF edit form. Right now, all fields are displayed one below the other; I want to put the fields in two rows. How can I do this? Thanks

  • 2.1: Can't type the word "MacBU"

    Since upgrading to 2.1, I can't type the word "MacBU" without deleting at least one letter. Every time I try, in any keyboard context, it ends up as "MacBI". No, I'm not just a poor touch-typist (pun not intended): even by sliding my finger from one

  • Cookie.GetValue()

    Hello all, I am creating a cookie whose value is a comma-separated string, for example string1,string2,string3, and when reading the value with getValue() I get the correct data in embedded OC4J and standalone OC4J, but not in the OC4J instance of the application server