DB size is growing gradually

My database size is growing gradually. What could be the reason, and how can I fix it? The OS is Windows XP.
The D:\oracle\product\10.2.0\admin\orcl\bdump folder has orcl_mmon_8640.trc trace files being written at a rapid pace. What should I do now?
Here is some information:
SQL> select * from v$sgainfo;
NAME                                  BYTES RES
Fixed SGA Size                      1250428 No
Redo Buffers                        7135232 No
Buffer Cache Size                 415236096 Yes
Shared Pool Size                  180355072 Yes
Large Pool Size                     4194304 Yes
Java Pool Size                      4194304 Yes
Streams Pool Size                         0 Yes
Granule Size                        4194304 No
Maximum SGA Size                  612368384 No
Startup overhead in Shared Pool    37748736 No
Free SGA Memory Available                 0
11 rows selected.
SQL> show parameter sga;
NAME                                 TYPE        VALUE
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 584M
sga_target                           big integer 584M
SQL>

If you are asking about the memory usage you see using operating system utilities, it could be a memory leak. It would be helpful if you posted more details about which patch level of the OS, and the exact patch level of Oracle. The latter can be seen with:
SYS@TPRD> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for HPUX: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Notice how much easier it is to read when we use the tag before and after the information?
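If what is actually growing is the database's footprint on disk (datafiles and trace files) rather than memory, a rough first step is to see where the space is going. The sketch below is not from the original thread; it assumes a DBA-privileged session, and the 20-row cutoff is arbitrary.
-- Allocated size per datafile
select tablespace_name, file_name, bytes/1024/1024 as size_mb
from   dba_data_files
order  by bytes desc;
-- Largest segments, to see which objects are consuming the space
select *
from  (select owner, segment_name, segment_type, bytes/1024/1024 as size_mb
       from   dba_segments
       order  by bytes desc)
where  rownum <= 20;
-- Where background trace files (such as the MMON traces mentioned above) are
-- written, and whether their size is capped
show parameter background_dump_dest
show parameter max_dump_file_size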

Similar Messages

  • Index size keeps growing while table size unchanged

    Hi Guys,
    I've got some simple, standard b-tree indexes that keep acquiring new extents (e.g. 4 MB per week) while the base table size has stayed unchanged for years.
    The base tables are working tables with DML activity and roughly the same number of records daily.
    I've analysed the schema in the test environment.
    Those indexes do not meet the usual criteria for rebuild, namely:
    - deleted entries represent 20% or more of the current entries
    - the index depth is more than 4 levels
    May I know what causes the index size to keep growing, and will the index size be reduced after a rebuild?
    I'd be grateful if someone could give me some advice.
    Thanks a lot.
    Best regards,
    Timmy

    Please read the documentation. COALESCE is available in 9.2.
    Here is a demo for coalesce in 10G.
    YAS@10G>truncate table t;
    Table truncated.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                         65536
    TIND                      65536
    YAS@10G>insert into t select level from dual connect by level<=10000;
    10000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     196608
    We have 10,000 rows now. Let's delete half of them and insert another 5,000 rows with higher keys.
    YAS@10G>delete from t where mod(id,2)=0;
    5000 rows deleted.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>insert into t select level+10000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    Table size is the same but the index size got bigger.
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................               6
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              29
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    We have 29 full blocks. Let's coalesce.
    YAS@10G>alter index tind coalesce;
    Index altered.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        196608
    TIND                     327680
    YAS@10G>exec show_space('TIND',user,'INDEX');
    Unformatted Blocks .....................               0
    FS1 Blocks (0-25)  .....................               0
    FS2 Blocks (25-50) .....................              13
    FS3 Blocks (50-75) .....................               0
    FS4 Blocks (75-100).....................               0
    Full Blocks        .....................              22
    Total Blocks............................              40
    Total Bytes.............................         327,680
    Total MBytes............................               0
    Unused Blocks...........................               0
    Unused Bytes............................               0
    Last Used Ext FileId....................               4
    Last Used Ext BlockId...................          37,001
    Last Used Block.........................               8
    PL/SQL procedure successfully completed.
    The index size is still the same but now we have 22 full and 13 empty blocks.
    Insert another 5000 rows with higher key values.
    YAS@10G>insert into t select level+15000 from dual connect by level<=5000;
    5000 rows created.
    YAS@10G>commit;
    Commit complete.
    YAS@10G>select segment_name,bytes from user_segments where segment_name in ('T','TIND');
    SEGMENT_NAME              BYTES
    T                        262144
    TIND                     327680
    Now the index did not get bigger because it could use the free blocks for the new rows.
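    As a side note, here is a rough sketch (not part of the original demo, and only illustrative, using the demo index TIND) of how the rebuild criteria mentioned in the question, deleted-entry percentage and depth, can be measured. Be aware that VALIDATE STRUCTURE locks the underlying table for DML while it runs.
    -- Populate INDEX_STATS for the index (one row, valid only in this session)
    analyze index tind validate structure;
    -- Depth and deleted-entry percentage, the two rebuild criteria from the question
    select height,
           lf_rows,
           del_lf_rows,
           round(del_lf_rows / nullif(lf_rows, 0) * 100, 2) as pct_deleted
    from   index_stats;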

  • Why has my Adobe PCD cache.db file size started growing suddenly

    I am running CS4 Design Premium on an iMac which had OS X 10.6.2 until the 29th and now has 10.6.3.
    On 19th March my Adobe PCD cache.db file was just over 100 MB. Since then it has grown steadily until it is now over 19 GB in size.
    Can anyone explain to me why it should suddenly have started growing? During this time I have really only run Acrobat Pro; however, I have had a lot of browser (Safari and FF) hangs/crashes due to Flash plugin issues.
    I understand the cache.db also holds license information, so is it safe to delete this file? If so, will I need to re-enter my license information?
    Any help would be appreciated. Thanks
    Phill

    I have the same problem.
    My pcd.db file is now 56 GB
    What is going on !?!?!?
    Sikem

  • Syslog database file size is growing

    Hi ,
    I have a CiscoWorks server (LMS 2.6) which had an issue with the Syslog Severity Level Summary report: it would hang whenever we ran a job, and the report job always failed. I have also observed that the SyslogFirst.db, SyslogSecond.db and SyslogThird.db database files had grown to 90 GB each, due to which RME was very slow.
    I did an RME database reinitialization, and after that the Syslog Severity Level Summary report started working properly. The file sizes of SyslogFirst.db, SyslogSecond.db and SyslogThird.db were also reduced to almost 10 MB. But when I checked today, the SyslogThird.db file had grown to 4 GB again.
    I need help finding out what is causing these files (SyslogThird.db) to grow so fast. Is there any option in CiscoWorks I need to look at to stop these files from growing so fast? Please help me with this issue.
    Thanks & Regds,
    Lalit

    Hi Joseph,
    Thanks for your reply. SyslogThird.db is not growing now, but my Severity-Wise Summary report has stopped again. If I check the status in RME Jobs, it says the Severity-Wise Summary report failed. I checked the SyslogThird.db file size and found it was 20 GB. Is it failing because of the 20 GB file size?
    Please give your valuable inputs. Thanks once again. After the RME reinitialization it was only 1 GB and the report was being generated.
    Thanks & Regds,
    Lalit

  • Proxy 4 - Cache size keeps growing

    I may have a wrong cache setting somewhere, but I can't find it. I am running Proxy 4.0.2 (for windows).
    Under Cache settings, I have "Cache Size" set to 800MB. Under "Cache Capacity" I have it set to 1GB (500 MB-2GB).
    The problem is my physical cache size on the hard drive keeps growing and growing and is starting to fill the partition on the hard drive. At last count, the "cache" directory on the hard drive which holds the cache files is now using 5.7GB of space and still growing.
    Am I misunderstanding something? I thought the max physical size would be a lot lower and stop at a given size. But the cache directory on the hard drive is now close to 6GB and still growing day by day. When is it going to stop growing, or how do I stop it and put a cap on the physical size it can grow to on the hard drive?
    Thanks

    Until 4.03 is out, you can use this script.
    Warning: experimental; run this on a copy of the cache first to make sure that it works as you want it to.
    The first argument is the size in MB that you want to remove.
    I assume your cache dir is "./cache"; if it is not, change the variable $cachedir to the correct value.
    ==============cut-here==========
    #!/bin/perl
    # Usage: perl <script> <size-in-MB-to-remove>   (run it against a copy of the cache first)
    use strict;
    use File::stat;
    my $cachedir = "./cache";
    my $gc_size; # bytes still to remove
    my $verbose = 0;
    # Delete one cache file and stop once enough bytes have been freed.
    sub gc_file {
        my $file = shift;
        my $sb = stat($file);
        $gc_size -= $sb->size;
        unlink $file;
        print "$gc_size more after $file\n" if $verbose;
        exit 0 if $gc_size < 0;
    }
    # Walk the cache sections (sN.NN) and their sub-directories, deleting cache
    # files until the requested number of bytes has been removed.
    sub main {
        my $size = shift;
        $gc_size = $size * 1024 * 1024; # argument is in MB
        opendir(DIR, $cachedir) || die "can't opendir $cachedir: $!";
        my @sects = grep {/^s[0-9]\.[0-9]{2}$/} readdir(DIR);
        closedir DIR;
        foreach my $sect (@sects) {
            chomp $sect;
            opendir (CDIR, "$cachedir/$sect") || die "can't opendir $cachedir/$sect: $!";
            my @ssects = grep {/^[A-F0-9]{2}$/} readdir(CDIR);
            closedir CDIR;
            foreach my $ssect (@ssects) {
                chomp $ssect;
                opendir (SCDIR, "$cachedir/$sect/$ssect") || die "can't opendir $cachedir/$sect/$ssect: $!";
                my @files = grep {/^[A-Z0-9]{16}$/} readdir(SCDIR);
                closedir SCDIR;
                foreach my $file (@files) {
                    gc_file "$cachedir/$sect/$ssect/$file";
                }
            }
        }
    }
    main $ARGV[0] if $ARGV[0];
    =============cut-end==========
    On your second problem, the easiest way to recover a corrupted partition is to list out the sections in that partition, and delete those sections that seem like odd ones
    eg:
    $ls ./cache
    s4.00 s4.01 s4.02 s4.03 s4.04 s4.05 s4.06 s4.07 s4.08 s4.09 s4.10 s4.11 s4.12 s4.13 s4.14 s4.15 s0.00
    Here the s0.00 is the odd one out, so remove the s0.00 section. Also keep an eye on the relative sizes of the sections: if the section to be removed is larger than the rest of the sections combined, you might not want to remove it.
    WARNING: anything you do, do on a copy

  • Syslog file size is growing

    Hi ,
    I have a CiscoWorks server (LMS 2.6) in which the syslog.log file is growing very rapidly. Earlier it used to grow by around 500 MB to 1 GB per day; for the last week it has been growing by around 6 to 7 GB per day. I am rotating the syslog file with the logrot script. I would like to know whether there is any issue causing the syslog file to grow by 6 to 7 GB per day.
    Please help me in resolving this issue.
    Thanks in advance.
    Thanks & Regds,
    Lalit

    One approach would be to run a Severity Level Summary or 24-hr Syslog Analysis report in RME, either of which would give you a rough idea of what the most chatty message is. Then zero in from there.
    Or you could awk the syslog_info file directly from CLI and tally the hosts/message types to find the offender(s).

  • File Size Growing too much with every Save

    I have designed a form that uses grids with drop-down lists and also some Image Fields. When I am using the form and I save it, the file size grows excessively with each save. If I add even one character to a sentence, the file size may grow by as much as 500 KB or more. Saving a few times causes the file to become far too large to send by email. Any ideas what I need to try and fix, or is this just normal?

    Nope,  I have it unselected and it still grows by leaps and bounds.  Any other ideas?  Is there anyone I can send a form to who  can work on it for a fee?

  • File sizes growing by magic?!?!?!?!?!?!

    I create DVDs for a wedding business. My file sizes are "growing" and I don't know why. This is the first time I've noticed it. I've designed and burned over 100 wedding DVDs using this method. I use a template in DVD Studio Pro and I drag and drop my assets onto the appropriate button.
    The project material is 102 minutes. Some double-layered video.
    This particular project says that a DVD Best Quality 150 min compression (using a Compressor template) is 3.92 GB. I put it in the DVD template and it jumps to 4.4 GB.
    The audio size (AIFF 16bit-2200Mhtz) is 68 MB. I put it in the template and my 4.4 GB from above jumps to 5.7 GB. 68 MB to 1.3 GB??
    I have no idea what is wrong. I've spent four days compressing in various ways using Compressor. The description above is the fifth and last attempt.

    What you should do is open the project, close the project, then reopen it. The size calculator is not always accurate and can "jump" when assets change.
    Also, do not use AIFFs; make AC3s and use them instead. That saves room and improves performance (prevents issues).
    Also note that if you are using items in motion menus, that also increases the size each time you extend the length of time (or add audio if there was none).
    You should usually have no issues getting 102 minutes of video on a DVD-5 (unless it is very high motion and you need higher encoding rates).
    Build to the hard drive to get a better sense of the real size if the above does not help (opening/closing/reopening should get you back on track, though).

  • BIP-Weblogic out file growing in size

    Hi,
    I am noticing that BIP writes its standard output to the WebLogic node .out file. This includes the query executed by the report, causing the file size to grow. I am looking for ways to limit what BIP writes to the out file. Any thoughts?

    I'm guessing that your system is running OpenSolaris, not Solaris. You should direct your question to [email protected] Better yet, search the archives on opensolaris.org, where you might already find the answer. Sorry, I don't know the solution offhand, other than it has to do with storing the OpenSolaris pkg bits after downloading for offline installation.
    -- Alan

  • WebLogic Server 10.0.1 stops all requests when memory drops and grows again

    We are running WebLogic Server 10.0.1 on Windows Server 2003 with a custom application. Our heap size is set to 1024 MB min and max. Usually it runs normally, but on some occasions we find the WebLogic process memory drops to 80K and then grows gradually (as seen in the task window). During this period, the server stops all requests until the process memory reaches about 1000 MB, as we can see on the server console. It then starts processing again and accepts requests. After the requests are cleared, the server resumes normal operation. Is it a full GC, or Windows Server memory allocation when its memory is low? I cannot figure out this issue.
    Any advice would be much appreciated.
    Keith

    First of all, when you go to the monitor tab you see all the threads from the self-tuning thread pool. This includes all the threads in the server, not only the ones allocated to your work manager.
    This is how I think it's working:
    1) All your threads are hung. You have 300 hogging threads, which reaches the max threads constraint on your work manager, so WebLogic is not going to allow any more requests from that work set to take up threads. Capacity counts both executing and queued requests: you have 150 pending requests and 300 hogging, so a total of 450, which is your capacity.
    2) It's not 631 threads, it's 631 requests; WebLogic must be reporting all the requests in the queue at the time you monitored it.
    3) The reason for one cluster to go down would be that the servers in that cluster were waiting for some resource. As you have different applications in the two clusters, there must be some resource that cluster1 uses which was not available. You can see what those threads were doing by taking thread dumps when this happens.

  • Time Machine backup grows too large during backup process

    I have been using Time Machine without a problem for several months, backing up my imac - 500GB drive with 350g used. Recently TM failed because the backups had finally filled the external drive - 500GB USB. Since I did not need the older backups, I reformatted the external drive to start from scratch. Now TM tries to do an initial full backup but the size keeps growing as it is backing up, eventually becoming too large for the external drive and TM fails. It will report, say, 200G to back up, then it reaches that point and the "Backing up XXXGB of XXXGB" just keeps getting larger. I have tried excluding more than 100GB of files to get the backup set very small, but it still grows during the backup process. I have deleted plist and cache files as some discussions have suggested, but the same issue occurs each time. What is going on???

    Michael Birtel wrote:
    Here is the log for the last failure. As you see, it indicates there is enough room: 345 GB needed, 464 GB available, but then it fails. I can watch the backup progress; it reaches 345 GB and then keeps growing till it gives an out-of-disk-space error. I don't know what "Event store UUIDs don't match for volume: Macintosh HD" implies; maybe this is a clue?
    No. It's sort of a warning, indicating that TM isn't sure what's changed on your internal HD since the previous backup, usually as a result of an abnormal shutdown. But since you just erased your TM disk, it's perfectly normal.
    Starting standard backup
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    Ownership is disabled on the backup destination volume. Enabling.
    2009-07-08 19:37:53.659 FindSystemFiles[254:713] Querying receipt database for system packages
    2009-07-08 19:37:55.582 FindSystemFiles[254:713] Using system path cache.
    Event store UUIDs don't match for volume: Macintosh HD
    Backup content size: 309.5 GB excluded items size: 22.3 GB for volume Macintosh HD
    No pre-backup thinning needed: 345.01 GB requested (including padding), 464.53 GB available
    This is a completely normal start to a backup. Just after that last message is when the actual copying begins. Apparently whatever's happening, no messages are being sent to the log, so this may not be an easy one to figure out.
    First, let's use Disk Utility to confirm that the disk really is set up properly.
    First, select the second line for your internal HD (usually named "Macintosh HD"). Towards the bottom, the Format should be +Mac OS Extended (Journaled),+ although it might be +Mac OS Extended (Case-sensitive, Journaled).+
    Next, select the line for your TM partition (indented, with the name). Towards the bottom, the Format must be the same as your internal HD (above). If it isn't, you must erase the partition (not necessarily the whole drive) and reformat it with Disk Utility.
    Sometimes when TM formats a drive for you automatically, it sets it to +Mac OS Extended (Case-sensitive, Journaled).+ Do not use this unless your internal HD is also case-sensitive. All drives being backed-up, and your TM volume, should be the same. TM may do backups this way, but you could be in for major problems trying to restore to a mis-matched drive.
    Last, select the top line of the TM drive (with the make and size). Towards the bottom, the *Partition Map Scheme* should be GUID (preferred) or +Apple Partition Map+ for an Intel Mac. It must be +Apple Partition Map+ for a PPC Mac.
    If any of this is incorrect, that's likely the source of the problem. See item #5 of the Frequently Asked Questions post at the top of this forum for instructions, then try again.
    If it's all correct, perhaps there's something else in your logs.
    Use the Console app (in your Applications/Utilities folder).
    When it starts, click +Show Log List+ in the toolbar, then navigate in the sidebar that opens up to your system.log and select it. Navigate to the +Starting standard backup+ message that you noted above, then see what follows that might indicate some sort of error, failure, termination, exit, etc. (many of the messages there are info for developers, etc.). If in doubt post (a reasonable amount of) the log here.

  • Need help - To restrict huge temp file which grows to around 3 GB in OBIEE 11g

    Hi Team,
    I am working on OBIEE 11.1.1.5 for a client-specific BI application. We have an issue with massive space consumption in the Linux environment where OBIEE 11g is installed whenever we try to run certain detail-level drill-down reports. While investigating, we found that whenever a user runs a drill-down report, a temp file named nQS_xxxx_x_xxxxxx.TMP is created and keeps growing in size under the folder structure given below:
    *<OBIEE_HOME>/instances/instance1/tmp/OracleBIPresentationServicesComponent/coreapplication_obips1/obis_temp/*
    The temp file grows as large as around 3 GB and is erased automatically when the drill-down report output is displayed in the UI. Hence, when multiple users simultaneously try to access these sorts of drill-down reports, the environment runs out of space.
    Regarding the drill down reports:
    * The drill-down report has around 55 columns, is configured to display only 25 rows on screen, and allows the user to download the whole data set as Excel output.
    * The number of rows fetched by the query ranges from 1,000 to over 100k. The temp file size grows with the rows fetched, i.e., if the query fetches around 4,000 rows, a temp file of around 60 MB is created and erased when the report output is generated on screen (similarly, for around 100k rows, the temp file grows up to 3 GB before it is deleted automatically).
    * The report output has only one table view alongside the Title & Filters views. (No pivot table view is used to generate this report.)
    * The cache settings for the BI Server & BI Presentation Services cache are not configured or not enabled.
    My questions:
    * Is there any way to control or configure this temp file generation in OBIEE 11g?
    * Why does the growing temp file get deleted automatically immediately after the report output is generated on screen? Are there any default server-specific settings governing this behaviour?
    * From certain OBIEE 10g article references, I learnt that for large pivot-table-based reports the temp file generation is quite normal because of the huge in-memory calculations involved. However, we have used only a table view in the output, and it still creates huge temp files. Is this behaviour normal in OBIEE 11g? If not, can anyone please suggest any specific settings to consider to avoid generating these huge files, or at least to generate a compressed temp file?
    * Is any other workaround available for generating a report of this type without generating temp files in the environment?
    Thanks & Regards,
    Guhan

    Hello Guhan,
    The temp files are used to prepare the final result set for OBI Presentation Server processing, so as long as your data set is big, the tmp files will also be big; you can only avoid this by reducing your data set, for example by filtering your report.
    You can also control the size of your temp files by reducing the work done by the BI Server. By this I mean that if you are using any functions, for example sorting, that could be handled by your database, just push them down to the DB.
    Once the report is finished, the BI Server automatically removes the tmp files because they are no longer needed. You can see each one as a file used for internal calculations; once it's done, the server gets rid of it.
    Hope this helps
    Adil

  • Reducing File Size After Applying Digital Signature

    I have a dynamic form created in LiveCycle Designer ES 8.2, reader-extended in Acrobat Pro 9.4, and being filled in and signed in Reader 9.4. The form has multiple (10+) digital signature fields. In some cases, images (JPEG, TIFF) are inserted into the form before it is circulated for signatures. Every time a signature is applied, the file size grows based on the size of the images that have been attached, as it seems to embed a signed version of the form each time a signature is applied. After a few signatures are applied, the file size can grow to be quite large. Is there a way to prevent it from doing this every time a signature is applied so that the file can be kept to a more manageable size?

    Go into the importing tab, and set the options to Mp3 at a lower bitrate, then go to your song and choose Convert to Mp3 - it will then create another copy at a lower bitrate - leaving the high quality original - you can then burn the smaller one to your CD - it shouldn't take long to convert.

  • Huge file size after importing

    I have a shell (6 slides) used to jump out to Windows Media files. I imported only two slides from another project, with text objects only, and now the file size has jumped from 977 KB to over 10 MB.
    How on earth is this possible? They are basically PowerPoint slides. NO video. NO audio.
    Plus, when I look at the project size in Storyboard view within Captivate, it says the project is 1,234 KB. But when I look at it in Windows Explorer it definitely says 10,333 KB. Why are the file sizes inconsistent?
    Thoughts, suggestions and good guesses are appreciated!
    Thanks!
    Jim

    Jim,
    I've never paid much attention over the years to the
    Storyboard view Information panel, so I'm not entirely sure what
    that "Size" value is referring to. I just opened one of my existing
    projects and see "684.0KB," which is a lot closer to the published
    SWF size (569KB) than the project's source file (4,172KB).
    When you say you see "10,333K" in Explorer, I assume you're
    talking about the project source file. Since you said that you
    don't have audio or video, then you must have only a short sequence
    of slides containing background images imported from PowerPoint.
    The dimensions of the PowerPoint presentation (the resolution of
    the images) and their color depth will impact the size of the
    images when imported into Captivate.
    Could it be that the PowerPoint project contained some rather
    large images to begin with?
    That said, do you know already that Captivate 2 has a very
    significant bug that causes the CP file size to grow dramatically
    as you import/edit/re-edit your project? You didn't mention how
    extensively you may have already edited your project, so I thought
    I'd bring that up.
    Another thing to consider would be whether your project's
    library has lots of unused objects in it -- maybe images that came
    in with the PowerPoint project but that you're not really using in
    your slides. Cleaning out the library and saving the project will
    net you a little savings.

  • Large File Sizes in FrameMaker 9.x

    We have upgraded from 7.1 to 9.0p255. Opening and saving files in 9.x (without making any changes) increases all file sizes by 9 MB each. Has anyone else had this issue?
    Thanks.

    I found an MS KB article that discusses filesize increases with Word when using embedded GIF images; I have no idea whether a similar situation applies to FM or not, but I suspect it might.
    http://support.microsoft.com/kb/224663
    >> When you save a Microsoft Word document that contains an EMF, PNG, GIF,  or JPEG graphic as a different file format (for example, Word 6.0/95 (*.doc) or Rich Text Format (*.rtf)), the file size of the document may dramatically increase.
    >> For example, a Microsoft Word 2000 document that contains a JPEG graphic  that is saved as a Word 2000 document may have a file size of 45,568  bytes (44.5KB). However, when you save this file as Word 6.0/95 (*.doc) or as Rich Text Format (*.rtf), the file size may grow to 1,289,728 bytes (1.22MB).
    >> This functionality is by design in Microsoft Word. If an EMF, a PNG, a  GIF, or a JPEG graphic is inserted into a Word document, when the  document is saved, two copies of the graphic are saved in the document.  Graphics are saved in the applicable EMF, PNG, GIF, or JPEG format and  are also converted to WMF (Windows Metafile) format.
    If you haven't yet done so, you might try examining an FM sample file with wonderful freebie MIFBrowse by Graham Wideman. Even though it does not mention current FM versions, it works perfectly.
    MIFBrowser
    http://www.wideman-one.com/gw/tech/framemaker/mifbrowse.htm
