Weird Database Size Issue

So I have a daily Exchange Environment Report that sends its output to a public folder, and I check it every day. The script is found here:
http://www.stevieg.org/2011/06/exchange-environment-report/
What concerns me is the "Database Disk Free" figure from the above report. The output is as follows:
I have 1TB LUNs on my Exchange DBs
DB01, "Database Size" is 250GB.
DB01, "Database White Space" is 2.5GB.
DB01, "Database Disk Free" is 50%.
When I run Get-MailboxDatabase -Status | select ServerName,Name,DatabaseSize in PowerShell, it shows DB01 as being 250GB.
I asked my SAN admin to take a look, and he stated that I am using 50% of the volume. That doesn't make sense: I should only be using 25% of the 1TB volume. DB02 shows the exact same discrepancy.
If I look at the physical location on the actual server, I see the .edb file is 250GB, plus a CatalogData... folder of about 13GB. None of this adds up. How can I have a 250GB .edb file with very little white space, while the report shows I'm using 50% of a 1TB volume?
I have 4 other DBs that seem to show the correct space usage.
This is Exchange 2010 SP2 RU5v2

C:\>Dir C:\Database\DB01 /s
 Volume in drive C has no label.
 Volume Serial Number is 4648-D66D
 Directory of C:\Database\DB01
01/13/2014  02:59 PM    <DIR>          CatalogData-09bf19a6-14e8-48c7-8d02-399ae
1f40762-28df5dce-5f27-44d7-9abc-091bffd5dc00
12/23/2013  01:53 PM   268,314,935,296 DB01.edb
04/21/2013  01:38 AM            20,888 DB01.edb.IRS.RAW
               2 File(s) 268,314,956,184 bytes
 Directory of C:\Database\DB01\CatalogData-09bf19a6-14e8-48c7-8d02-399ae1f40762-
28df5dce-5f27-44d7-9abc-091bffd5dc00
01/13/2014  02:59 PM    <DIR>          .
01/13/2014  02:59 PM    <DIR>          ..
01/13/2014  02:58 PM            12,288 00010001.ci
01/13/2014  02:58 PM             4,096 00010001.dir
01/13/2014  02:58 PM            65,536 00010001.wid
01/13/2014  02:58 PM            45,056 00010002.ci
01/13/2014  02:58 PM             4,096 00010002.dir
01/13/2014  02:58 PM            65,536 00010002.wid
01/13/2014  02:58 PM            49,152 00010003.ci
01/13/2014  02:58 PM             4,096 00010003.dir
01/13/2014  02:58 PM            65,536 00010003.wid
01/13/2014  02:59 PM           110,592 00010004.ci
01/13/2014  02:59 PM             4,096 00010004.dir
01/13/2014  02:59 PM            65,536 00010004.wid
01/13/2014  02:59 PM            49,152 00010005.ci
01/13/2014  02:59 PM             4,096 00010005.dir
01/13/2014  02:59 PM            65,536 00010005.wid
01/07/2014  04:44 PM    12,594,868,224 00010006.ci
01/07/2014  04:44 PM        28,651,520 00010006.dir
01/13/2014  02:56 PM            65,536 00010006.wid
01/13/2014  02:56 PM         4,784,128 00010006.wsb
01/13/2014  02:59 PM            69,632 00010007.ci
01/13/2014  02:59 PM             4,096 00010007.dir
01/13/2014  02:59 PM            65,536 00010007.wid
01/13/2014  02:59 PM            36,864 00010008.ci
01/13/2014  02:59 PM             4,096 00010008.dir
01/13/2014  02:59 PM            65,536 00010008.wid
01/13/2014  02:59 PM            28,672 00010009.ci
01/13/2014  02:59 PM             4,096 00010009.dir
01/13/2014  02:59 PM            65,536 00010009.wid
01/13/2014  02:03 PM         3,502,080 0001000F.ci
01/13/2014  02:03 PM            20,480 0001000F.dir
01/13/2014  02:52 PM            65,536 0001000F.wid
01/13/2014  01:51 PM         1,900,544 00010010.ci
01/13/2014  01:51 PM            16,384 00010010.dir
01/13/2014  02:48 PM            65,536 00010010.wid
01/13/2014  02:36 PM         4,153,344 00010011.ci
01/13/2014  02:36 PM            24,576 00010011.dir
01/13/2014  02:57 PM            65,536 00010011.wid
01/13/2014  01:47 PM         2,797,568 00010012.ci
01/13/2014  01:47 PM            20,480 00010012.dir
01/13/2014  02:49 PM            65,536 00010012.wid
01/13/2014  01:56 PM         2,560,000 00010013.ci
01/13/2014  01:56 PM            20,480 00010013.dir
01/13/2014  02:38 PM            65,536 00010013.wid
01/13/2014  02:11 PM           716,800 00010014.ci
01/13/2014  02:11 PM             4,096 00010014.dir
01/13/2014  02:19 PM            65,536 00010014.wid
01/13/2014  02:53 PM         3,461,120 00010017.ci
01/13/2014  02:53 PM            24,576 00010017.dir
01/13/2014  02:58 PM            65,536 00010017.wid
01/13/2014  01:42 PM       700,424,192 00010018.ci
01/13/2014  01:42 PM         1,839,104 00010018.dir
01/13/2014  02:59 PM         1,703,936 00010018.wid
01/13/2014  02:18 PM         2,252,800 0001001A.ci
01/13/2014  02:18 PM            16,384 0001001A.dir
01/13/2014  02:45 PM            65,536 0001001A.wid
01/09/2014  09:51 PM       713,547,776 0001001B.ci
01/09/2014  09:51 PM         1,970,176 0001001B.dir
01/13/2014  02:58 PM         1,376,256 0001001B.wid
01/13/2014  02:10 PM         1,642,496 0001001C.ci
01/13/2014  02:10 PM            12,288 0001001C.dir
01/13/2014  02:24 PM            65,536 0001001C.wid
01/13/2014  02:26 PM         5,025,792 0001001D.ci
01/13/2014  02:26 PM            32,768 0001001D.dir
01/13/2014  02:56 PM            65,536 0001001D.wid
01/13/2014  02:46 PM         8,425,472 0001001E.ci
01/13/2014  02:46 PM            36,864 0001001E.dir
01/13/2014  02:56 PM            65,536 0001001E.wid
01/13/2014  02:41 PM         2,088,960 0001001F.ci
01/13/2014  02:41 PM            16,384 0001001F.dir
01/13/2014  02:54 PM            65,536 0001001F.wid
01/13/2014  02:55 PM         1,044,480 00010020.ci
01/13/2014  02:55 PM            12,288 00010020.dir
01/13/2014  02:58 PM            65,536 00010020.wid
01/07/2014  04:26 PM       657,674,240 00010021.ci
01/07/2014  04:26 PM         1,773,568 00010021.dir
01/13/2014  02:56 PM         1,572,864 00010021.wid
01/13/2014  02:58 PM            36,864 00010023.ci
01/13/2014  02:58 PM             4,096 00010023.dir
01/13/2014  02:58 PM            65,536 00010023.wid
01/13/2014  02:58 PM           692,224 00010024.ci
01/13/2014  02:58 PM             4,096 00010024.dir
01/13/2014  02:59 PM            65,536 00010024.wid
01/07/2014  04:44 PM               240 CiAB0001.000
01/07/2014  04:44 PM            65,536 CiAB0001.001
01/07/2014  04:44 PM            65,536 CiAB0001.002
01/07/2014  04:44 PM               240 CiAB0002.000
01/07/2014  04:44 PM            65,536 CiAB0002.001
01/07/2014  04:44 PM            65,536 CiAB0002.002
01/07/2014  04:44 PM               240 CiAD0002.000
01/07/2014  04:44 PM            65,536 CiAD0002.001
01/07/2014  04:44 PM            65,536 CiAD0002.002
12/30/2013  10:33 AM               240 CiPT0000.000
12/30/2013  10:33 AM            65,536 CiPT0000.001
12/30/2013  10:33 AM            65,536 CiPT0000.002
01/13/2014  02:59 PM               240 INDEX.000
01/13/2014  02:59 PM            65,536 INDEX.001
01/13/2014  02:59 PM            65,536 INDEX.002
12/10/2012  03:06 PM                 4 SETTINGS.DIA
12/30/2013  10:31 AM               240 Used0000.000
12/30/2013  10:31 AM            65,536 Used0000.001
12/30/2013  10:31 AM            65,536 Used0000.002
             101 File(s) 14,753,547,684 bytes
     Total Files Listed:
             103 File(s) 283,068,503,868 bytes
               3 Dir(s)  554,741,121,024 bytes free
C:\>
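For what it's worth, the dir output itself can be used to quantify the gap. A quick sanity check in Python; the ~931 GiB formatted capacity for a 1TB LUN is an assumption (the exact formatted size isn't shown above), while the other figures come straight from the listing:

```python
GIB = 1024 ** 3

files_on_disk = 283_068_503_868      # "Total Files Listed" bytes from dir /s
free_space    = 554_741_121_024      # "bytes free" from dir /s
lun_capacity  = 931 * GIB            # assumed formatted size of a 1 TB LUN

visible_used = files_on_disk
implied_used = lun_capacity - free_space    # what the volume says is used
hidden_used  = implied_used - visible_used  # space no visible file accounts for

print(f"visible files : {visible_used / GIB:6.1f} GiB")
print(f"implied used  : {implied_used / GIB:6.1f} GiB")
print(f"unaccounted   : {hidden_used / GIB:6.1f} GiB")
print(f"free          : {free_space / lun_capacity:.0%} of volume")
```

Under that assumption, roughly 150 GiB of the volume is consumed by something no visible file accounts for. On Windows that gap is commonly VSS shadow copies left behind by backup software; running `vssadmin list shadows` and `vssadmin list shadowstorage` on the mailbox server would show whether that is the case here.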

Similar Messages

  • SharePoint 2010 content database size issues alldocs streams

    Hi All,
    We are planning a migration from SharePoint 2010 to SharePoint 2013. Our database team identified that the "AllDocStreams" and "audit log" tables are using almost 300 GB.
    We dropped the "audit log" table today (almost 100 GB), but the "AllDocStreams" table is still almost 147 GB.
    How do we handle this situation before migration? Do we migrate as-is, or do we need to work on the "AllDocStreams" table first? How do we handle the database size? Which way do you recommend, and what are the best practices?
    Please see the database table sizes.
    Can anyone please send me a step-by-step implementation?
    Thanks,
    kumar

    First off, touching SharePoint database tables is completely unsupported.
    http://support.microsoft.com/kb/841057
    You shouldn't be making changes within the database at all and you're putting yourself out of support by doing so.
    AllDocStreams is your data. You need to have users clean up data, or move Site Collections to new Content Databases if you feel you need to reduce the size of the table itself.
    Trevor Seward

  • Transaction Sync and Database Size

    Hello,
    We're using BDB (via the Java interface) as the persistent store for a messaging platform. In order to achieve high performance, the transactional operations are configured to not sync, i.e., TransactionConfig.setSync(false) . While we do achieve better performance, the size of the database does seem rather large. We checkpoint on a periodic basis, and each time we checkpoint, the size of the database grows, even though records (messages in our world) are being deleted. So, if I were to add, say 10000 records, delete all of them and then checkpoint, the size of the database would actually grow! In addition, the database file, while being large, is also very sparse - a 30GB file when compressed reduces in size to 0.5 GB.
    We notice that if we configure our transactional operations to sync, the size is much smaller, and stays constant, i.e., if I were to insert and subsequently delete 10000 records into a database whose file is X MB, the size of the database file after the operations would be roughly X MB.
    I understand that transaction logs are applied to the database when we checkpoint, but should I be configuring the checkpointing behaviour (via CheckpointConfig)?
    Also, I am checkpointing periodically from a separate thread. Does BDB itself spawn any threads for checkpointing?
    Our environment is as follows:
    RedHat EL 2.6.9-34.ELsmp
    Java 1.5
    BDB 4.5.20
    Thanks much in advance,
    Prashanth

    Hi Prashanth,
    If your delete load is high, your workload should benefit from setting the DB_REVSPLITOFF flag, which keeps the structure of the btree around regardless of records being deleted. The result should be fewer splits and merges, and therefore better concurrency.
    Here you can find some documentation that should help you:
    Access method tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/tune.html
    Transaction tuning: http://www.oracle.com/technology/documentation/berkeley-db/db/ref/transapp/tune.html
    If you are filling the cache with dirty pages, you can indeed call checkpoint() periodically in the application, or you can create a memp_trickle thread. See the following sections of the documentation:
    - Javadoc: Environment.trickleCacheWrite: http://www.oracle.com/technology/documentation/berkeley-db/db/java/com/sleepycat/db/Environment.html#trickleCacheWrite(int)
    Some related thread for the "database size issue", can be found here: http://forums.oracle.com/forums/thread.jspa?threadID=534371&tstart=0
    Bogdan Coman

  • Rman Backupset size is exceeding the database size

    Hi
    My RMAN backup size is exceeding the database size and filling the mountpoint, and finally, due to the space issue, the backup is failing.
    Below is the RMAN script
    run {
    backup
    incremental level 0
    tag ASCPLVL0
    database plus archivelog ;
    delete noprompt backupset completed before 'sysdate - 4/24' ;
    }
    Please look into this
    Regards
    M. Satyanvesh

    Are you using compression?
    Ans: No
    The size of an RMAN backup isn't always proportional to the size of the database.
    Ans: Yeah, I know, but it should be somewhere near the database size (let's say a 100GB or 150GB variance)
    How many archivelogs are you backing up with your database?  This is possibly a factor in the size of your db backup.
    Ans: archive log count per day is 18  and size is 34gb
    Have you got a retention policy in place and do you regularly delete obsolete backups/archivelogs?
    Ans: yes
    Are you taking this backup as part of a backup strategy? or is this just a one off for some other purpose which would seem to be the case.
    Ans: This is a production system, so it's part of a backup strategy
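To illustrate why "database plus archivelog" can balloon past the database size, here is a back-of-the-envelope sketch in Python. The 500 GB database size is purely hypothetical (the thread never states it); the 34 GB/day of archivelogs comes from the answers above:

```python
# Rough estimate of an RMAN "backup database plus archivelog" footprint.
# db_size_gb is a hypothetical figure; archivelog volume is from the thread.
db_size_gb = 500                 # assumed full (level 0) database size
archlog_gb_per_day = 34          # stated in the thread
days_of_logs_on_disk = 4         # logs accumulated since the last cleanup

backup_gb = db_size_gb + archlog_gb_per_day * days_of_logs_on_disk
print(f"estimated backupset size: {backup_gb} GB "
      f"({backup_gb - db_size_gb} GB of that is archivelogs)")
```

If archivelogs are not deleted once they have been backed up (e.g. with `backup database plus archivelog delete input`), the same logs are re-backed-up on every run and the gap between backup size and database size keeps growing.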

  • Database performance issue (8.1.7.0)

    Hi,
    We have a tablespace "payin" in our database (8.1.7.0).
    This is the main tablespace of our database; it is dictionary managed and heavily accessed by user SQL statements.
    We are now facing a database performance issue during peak time (i.e., at the month end), when a number of users run a number of large reports.
    We have also increased the SGA sufficiently, based on the RAM size.
    This tablespace is heavily accessed by the reports.
    Now my questions are:
    Is this performance issue because the tablespace is dictionary managed instead of locally managed? I ask because when I monitor the different sessions through OEM, the number of hard parses is high for the connected users, when hard parses should actually be low.
    In Oracle 8.1.7.0, can we convert a dictionary-managed tablespace to a locally managed one?
    Will doing so somewhat resolve the problem? Will it reduce the overhead on the dictionary tables and on the shared memory?
    If yes, what is the procedure to convert the tablespace from dictionary managed to locally managed?
    With Regards

    If your end users are just running reports against this tablespace, I don't think that the tablespace management (LM/DM) matters here. You should be concerned more about the TEMP tablespace (for heavy sort operations) and your shared pool size (as you have seen hard parses go up).
    As already stated, get statspack running and also try tracing user sessions with wait events. Might give you more clues.

  • Database Size goes on Increasing in SAP B1.Now the Size of my Data is 34Gb

    Hello Experts,
    One of my clients' databases keeps growing; the transaction data is now 34GB.
    These are the cases I have tested in the test database, with no results. Experts, I would greatly appreciate a solution, since I have been working on this issue for many days.
    The cases are as follows:
    1. Transaction data (MDF file) = 34GB
    2. Log file (LDF file) = 1MB
    3. History tables like AITW, AITM, and ACRD reserve 6GB compared to the other history tables. So I changed the history log in SAP General Settings to 1 and ran the add-on to update the item master. The records have been reduced from 100,000 to 25,000, but the size occupied and the free space remain the same.
    4. After deleting the records via the add-on, I took a backup and restored it, but the size is the same.
    5. I have shrunk the database and its files, with no result.
    Is there any solution to this problem? If the same problem continues for a couple of years, my data will reach 60GB.
    Thanks,
    Kumar

    Hi,
    Please check the following links:
    Rapid increase in database size
    SAP Database Size
    Rapid DataBase size Increase's nearly 30Gb for SAP B1 8.81 PL5

  • Database size would exceed your licensed limit of 10240 MB per database in ms sql 2008 r2 standard editon

    Hello there,
    Please provide a solution for this, urgently if possible:
    CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database.
    I have installed MS SQL 2008 R2 Standard Edition.
    I get this error while restoring an 11GB SQL backup file.
    Yogi

    Hi,
    Please post the output of the query provided by Olaf. I guess you have connected to an Express edition (2008 R2/2012/2014) instance; since an Express database is limited to 10 GB, you got this message. Please double-check.

  • Should We consider Temp Data Files While Estimating The Database Size

    Hi,
    The Database Size is sum of physical files like
    Control file
    redo log file
    datafiles
    temp files
    so I want to know why we are considering the temp files, because they are temporary. At one stage of the database, the temp size could be more, and at another stage it could be less.
    So why consider the temp files?
    Please share your views on it..
    Thanks
    Umesh

    So, in essence the size of your datafiles is the size of your tablespaces?
    No. The size of the tablespace is the sum of the sizes of the datafiles in the tablespace -- i.e. the datafiles determine the tablespace size, not the other way round.
    (Although when you CREATE or ALTER TABLESPACE, you specify the sizes of the datafiles that you want to belong to the tablespace).
    the temporary tablespace has space allocated to it regardless of whether there are temporary tables in that tablespace or not.
    Two points here:
    1. On most OSs the temporary tablespace tempfile is created as a "sparse" file. So, if you issue a CREATE TEMPORARY TABLESPACE TEMP TEMPFILE 'xyz.dbf' SIZE 1000M; and then did an "ls -l" at the OS level, "xyz.dbf' would appear to be only a few tens of KBs in size. The OS "grows" the file to 1000M as necessary.
    When talking to your OS administrator ensure that you get 1000M (or the AUTOEXTEND MAXSIZE !!) space allocated even though he might "see" only a few 10s of KBs used on the first day.
    2. The temporary tablespace does not have objects (other than "global temporary tables" that overflow from memory to disk). It is really temporary space for joins, sorts, order bys etc.
    So, your datafile size is not affected regardless of your temporary tables coming and going.
    Yes, your datafile sizes and tempfile sizes are independent. Yet, when "sizing" disk space for the database you must include the tempfile size. However, when reporting to IT management with a statement "our database size is ...", you might want to break it up into components like Data Dictionary, Tables, Indexes, Temporary Space, Redo Logs and Archive Logs. You could also differentiate between OS-allocated space (sizes of datafiles), Oracle-allocated space (sizes of segments) and actual used space (which you'd have to compute!).
    Hemant K Chitale
    Edited by: Hemant K Chitale on Feb 17, 2011 10:42 PM
    Added (Although ....) paragraph to first point.
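Hemant's point about sparse tempfiles is easy to demonstrate outside Oracle. A minimal Python sketch (assuming a filesystem that supports sparse files, as most Linux filesystems do; the file name is just illustrative):

```python
import os
import tempfile

# Create a "10 MB" file by writing one byte at offset 10 MB - 1;
# on sparse-capable filesystems the OS allocates almost nothing.
path = os.path.join(tempfile.mkdtemp(), "xyz.dbf")
with open(path, "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
apparent  = st.st_size          # what "ls -l" reports
allocated = st.st_blocks * 512  # what the file actually occupies on disk

print(f"apparent : {apparent} bytes")
print(f"allocated: {allocated} bytes")
```

The apparent size is the full 10 MB, while the allocated size is only a block or two: the same effect Hemant describes for a freshly created 1000M tempfile.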

  • How can we reduce/compress database size in Express edition

    Hi,
    For a client we use SQL Server Express edition, which has a limitation of 10GB on database size.
    Please help me: how can we reduce/compress the DB size?
    Regards,
    Manish

    The data or page compression feature is not available in Express edition, so you cannot use it to help you.
    You might be lucky and reclaim space from a shrink operation, but eventually the data file will grow again. So your best bet is using a licensed edition.

  • Mismatch between the Content Database size and the total of each site collection' storage used.

    Hi All,
    Environment:  SharePoint 2010 with SP2.
    Issue: One of the content databases in our farm shows 200 GB as used. There are 25 site collections in the DB, and the total storage used by all site collections in that content DB is not more than 40 GB (we used the "enumsites" command and summed each site collection's storage used).
    What actions/troubleshooting were done?
    Ran a script which finds the actual size of each site collection and how much space it uses on disk, but didn't find a major difference in the report.
    Checked "Deleted from end user Recycle Bin" in all the site collections, and no major storage was noticed.
    Planning to detach the problematic content DB and re-attach it, and will check whether that has any major effect.
    Why does the content DB show 200 GB as used when the total storage used by all site collections is just 40 GB?
    Appreciate suggestions from any one.
    Best Regards,
    Pavan Kumar Sapara.

    Hi,
    Thanks for your reply.
    As there is only 20 MB of unallocated space in the above-mentioned DB, the SQL DB team informed us that they cannot perform a DB shrink at this moment.
    So we are thinking of offloading all the site collections to another, new DB and then dropping the problematic database. In this way we can overcome the issue.
    Answer for your queries.
    Are the mismatched sizes causing an issue? Are you short on disk space for DB storage or SQL backups?
    No, there is no issue with the mismatched sizes, and we are not short on disk space. We are just worried about why it occupies that much space (200 GB) when the total storage used by all site collections in that DB is 40 GB.
    Best Regards,
    Pavan Kumar Sapara.

  • SQL azure database size not dropping down after deleting all table

    Dear all,
    I have a simple database on Azure from which I have deleted all table data. The size of the database still shows 5MB of data, and I am charged for that. I have heard that this may happen from clustered indexes getting fragmented.
    I have run a query I found on the internet against all my table indexes to show the percentage of fragmentation, and they all report 0%.
    DBA work is not really my job, but what could it be, and how can I reduce that size?
    On-premise I would use COMPACT DB, but that is not available in Azure, like some other DB actions.
    Thanks for the tips,
    regards

    User-created objects/data are not the only things stored in your database; you have system objects and metadata, as Mike mentions above.
    Are you trying to avoid being charged if you're not storing data? Looking at the pricing table, you'll still be charged the $4.995 for the 0-100MB database size range.

  • Urgent help needed; Database shutdown issues.

    Hi all,
    I am trying to shut down my SAP database and am facing the issues below. Can someone please suggest how I can resolve this issue and restart the database?
    SQL> shutdown immediate
    ORA-24324: service handle not initialized
    ORA-24323: value not allowed
    ORA-01089: immediate shutdown in progress - no operations are permitted
    SQL> shutdown abort
    ORA-01031: insufficient privileges
    Thanks and regards,
    Iqbal

    Hi,
    check SAP Note 700548 - FAQ: Oracle authorizations
    also check Note 834917 - Oracle Database 10g: New database role SAPCONN
    regards,
    kaushal

  • Tables are deleted but database size does not change in sql server 2008r2

    Hi All,
    20GB of tables were deleted from my database, but the database size does not change and the disk usage shows the same size.

    Hi,
    I ran the Disk Usage by Top Tables report and identified a couple of tables with unwanted data from the last 5 years. I deleted the data for the first 3 years and then ran the Disk Usage by Top Tables report again. When I compared the reports from before and after the data deletion, I noticed certain facts which do not match what I know or have learned from experts like you. The following are the points where I am looking for clarification:
    1. Reserved (KB) has been reduced. I was expecting Reserved (KB) to remain the same after the data deletion.
    2. The Data (KB) and Indexes (KB) fields have been reduced as expected, and the Unused (KB) field has increased as expected. I was expecting the total space gained in Data (KB) and Indexes (KB) to equal the space gained in Unused (KB), but that is not the case. When I subtracted the difference in Reserved (KB) (before vs. after the deletion) from the total space gained by the deletion, the result equals the gain in the Unused (KB) field.
    I am not a SQL expert and am not questioning this, but trying to understand whether we really gain space by deleting data from tables. I am also keen to get the concepts right, but my testing by deleting some records confused me.
    Looking forward to all your expert advice.
    Thanks,
    Vennayat
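One way to read Vennayat's observation is as an identity between the report columns: Reserved = Data + Indexes + Unused, so space freed from Data and Indexes splits between a larger Unused figure and a smaller Reserved figure. A toy Python check with entirely made-up numbers (not taken from the actual report):

```python
# Illustrative (made-up) before/after figures, in KB.
before = {"data": 18_000_000, "indexes": 2_000_000, "unused": 500_000}
after  = {"data":  8_000_000, "indexes": 1_200_000, "unused": 6_300_000}

reserved_before = sum(before.values())
reserved_after  = sum(after.values())

data_idx_gain = (before["data"] - after["data"]) + (before["indexes"] - after["indexes"])
unused_gain   = after["unused"] - before["unused"]
reserved_drop = reserved_before - reserved_after

# The split Vennayat saw: space freed from Data+Indexes partly stays
# reserved for the table (Unused) and partly deallocates whole extents
# (the drop in Reserved).
assert data_idx_gain == unused_gain + reserved_drop
print(f"freed {data_idx_gain} KB: {unused_gain} KB stayed reserved (Unused), "
      f"{reserved_drop} KB was deallocated")
```

Either way, the freed space stays inside the data file as database free space; the file on disk only shrinks if you run DBCC SHRINKFILE afterwards, which is why the disk usage looked unchanged.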

  • What is the best practice on mailbox database size in exchange 2013

    Hi, 
    Does anybody have any links to good sites that give some pros and cons when it comes to mailbox database sizes in Exchange 2013? I've tried to Google it but haven't found any good answers. I would like to know whether I really need more than 5 mailbox databases in my Exchange environment.

    Hi,
    As far as I know, 2TB is the recommended maximum database size for Exchange 2013 databases.
    Terence Yu
    TechNet Community Support

  • Paper Size issues with CreatePDF Desktop Printer

    Are there any known paper size issues with PDFs created using Acrobat.com's CreatePDF Desktop Printer?
    I've performed limited testing with a trial subscription, in preparation for a rollout to several clients.
    Standard paper size in this country is A4, not Letter.  The desktop printer was created manually on a Windows XP system following the instructions in document cpsid_86984.  MS Word was then used to print a Word document to the virtual printer.  Paper Size in Word's Page Setup was correctly set to A4.  However the resultant PDF file was Letter size, causing the top of each page to be truncated.
    I then looked at the Properties of the printer, and found that it was using an "HP Color LaserJet PS" driver (self-chosen by the printer install procedure).  Its Paper Size was also set to A4.  Word does override some printer driver settings, but in this case both the application and the printer were set to A4, so there should have been no issue.
    On a hunch, I then changed the CreatePDF printer driver to a Xerox Phaser, as suggested in the above Adobe document for other versions of Windows.  (Couldn't find the recommended "Xerox Phaser 6120 PS", so chose the 1235 PS model instead.)  After confirming that it too was set for A4, I repeated the test using the same Word document.  This time the result was fine.
    While I seem to have solved the issue on this occasion, I have not been able to do sufficient testing with a 5-PDF trial, and wish to avoid similar problems with the future live users, all of which use Word and A4 paper.  Any information or recommendations would be appreciated.  Also, is there any information available on the service's sensitivity to different printer drivers used with the CreatePDF's printer definition?  And can we assume that the alternative "Upload and Convert" procedure correctly selects output paper size from the settings of an uploaded document?
    PS - The newly-revised doc cpsid_86984 still seems to need further revising.  Vista and Windows 7 instructions have now been split.  I tried the new Vista instructions on a Vista SP2 PC and found that step 6 appears to be out of place - there was no provision to enter Adobe ID and password at this stage.  It appears that, as with XP and Win7, one must configure the printer after it is installed (and not just if changing the ID or password, as stated in the document).

    Thank you, Rebecca.
    The plot thickens a little, given that it was the same unaltered Word document that first created a letter-size PDF, but correctly created an A4-size PDF after the driver was changed from the HP Color Laser PS to a Xerox Phaser.  I thought that the answer may lie in your comment that "it'll get complicated if there is a particular driver selected in the process of manually installing the PDF desktop printer".  But that HP driver was not (consciously) selected - it became part of the printer definition when the manual install instructions were followed.
    However I haven't yet had a chance to try a different XP system, and given that you haven't been able to reproduce the issue (thank you for trying), I will assume for the time being that it might have been a spurious problem that won't recur.  I'll take your point about using the installer, though when the opportunity arises I might try to satisfy my cursed curiosity by experimenting further with the manual install.  If I come up with anything of interest, I'll post again.
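As background to the truncation described above, the page geometry alone explains why A4 content on a Letter page loses its top. These are the standard A4/Letter dimensions, nothing specific to the CreatePDF driver:

```python
MM_PER_INCH = 25.4

# Standard page sizes, width x height in millimetres.
a4     = (210.0, 297.0)
letter = (8.5 * MM_PER_INCH, 11.0 * MM_PER_INCH)   # 215.9 x 279.4 mm

lost_height = a4[1] - letter[1]   # A4 content printed on a Letter page
print(f"Letter is {lost_height:.1f} mm shorter than A4, "
      f"so about {lost_height:.0f} mm of each A4 page gets cut off")
```

Letter is slightly wider but about 17.6 mm shorter than A4, which matches the symptom of the top of each page being truncated.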
