Index Size Increase (Rebuild with Parallel)

All,
I'm not sure whether this is an Exadata problem, but we saw it happen on our Exadata machine (maybe the larger extents are a factor). However, I thought this forum might have an idea, especially if it's a generic RDBMS issue rather than an Exadata one.
I am running my Production EDW on an Exadata V2 quarter rack: 11.2.0.2 BP7 RDBMS and GI, and 11.2.2.3.2 ESS, on OEL 5.5.
We have started to implement HCC compression due to space pressure. Last Thursday/Friday, we attempted to compress our largest table:
• 20bn rows, 3.7TB in size, DEGREE 2, INSTANCES 1, 2555 partitions
• 8 bitmap partitioned indexes, each with DEGREE 20, INSTANCES 1. Total size AFTER the rebuild: 1.3TB (significantly greater than before!!)
• ‘Archive High’ HCC compression applied to the oldest 10% of partitions (255).
• Remaining partitions were left uncompressed.
Because our biggest tables use bitmap indexes, and HCC doesn't get along with bitmaps, using HCC meant we had to:
• Disable all bitmap indexes
• Compress the oldest 10% of partitions with archive high HCC compression
• Rebuild the indexes for these partitions (255 total, 8 bitmap indexes each)
• Gather statistics
• Rebuild the indexes for the rest of the partitions (the remaining 90%). This took BY FAR the longest time.
During the last step, the index rebuilds took an excessive length of time: about 3 minutes to rebuild all 8 bitmap indexes on a single partition. This obviously had a big impact on the end users, as this happens to be the table that most feeds hit (of course...!). We also ran into the nightly batch, which went about as well as you can imagine.
On Monday, we noticed that the tablespace for the indexes had ballooned by about 800GB, and we figured out that the culprit was mainly this table's indexes. Some index partitions had grown by up to 20x, matching the degree of parallelism used to rebuild them. Further research shows we have seen this on other tables too, with the multiplier tracking the DOP (an index with DEGREE 8 might grow by 8x, and so on).
We had the same issue in our Development environment as well. When we rebuilt the same indexes in Development (and Production) with no parallelism (DEGREE 1), we were able to rebuild 6 entire partitions (8 indexes each) in under a minute: an 18x performance improvement. I suspect the underlying table having DEGREE 2 allowed the rebuilds to use Smart Scan, but still.
Obviously, a 20x increase in index size was not meant to be part of the HCC compression results. We have saved 3.1TB in total with HCC, but we have increased the size of the indexes on this one table alone by 1TB! However, we also saw this on index partitions which had NOT been compressed with HCC, which seems to rule out HCC as a factor (probably?).
We have seen this on another table which is partitioned with bitmap indexes. However, we only saw this increase in those indexes which had DEGREE >1. There were two other indexes on that table which were serial and saw NO increase in space usage.
I don’t know whether we see this because of…
• bitmap indexes
• bitmap partitioned indexes
• indexes on partitioned tables
• tables which use SmartScan
• HCC
• Exadata
• 11.2
• Larry Ellison hates me and wants to make my life miserable by introducing random errors just to annoy me
• All/any/none of the above
This sounds to me like a bug in parallel index rebuilds. Has anyone else seen this happen?
Any ideas, advice, jokes, cash appreciated...
Mark

The thing that makes me doubt it's related to the default partition size is that if we rebuild the indexes serially, the size drops by orders of magnitude. I do think it's somewhere in that ballpark, though...
I'm not sure this isn't related to Exadata: the default extent size is 4MB (and we haven't changed that). For eight bitmap indexes, that means the initially allocated extents (we don't defer segment creation) for just one partition total 32MB.
Yes, sorry, by 'ballooning by Monday morning' I was using some poetic license. We actually lost a disk on Thursday morning and replaced it on Thursday evening, which masked the amount of usable space until the disk group had rebalanced. The ballooning has been definitively shown to be related to the degree of parallelism.
Our plan is to create the indexes serially; after all, parallelism doesn't appear to speed up their rebuilds during the batch (which is why the degrees were set so high originally). First, though, I have to persuade certain power users to accept the reduction in object-level parallelism (not a best practice anyway) and get them to use optimizer hints in their queries instead. I've been fighting this fight for a while, because users can inadvertently consume ridiculous numbers of parallel slaves and compromise the stability of the system.
I will check the extents; previously, I had just been querying segment usage by partition. Maybe the extents will help pinpoint the cause...
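In case it helps anyone reproduce this, here's roughly the check I have in mind (a sketch; the owner, index, and partition names are made up). My working theory is that each PX slave builds its own sub-segment and trims to its own extent boundary, so the extent count grows with the DOP:

```sql
-- Hypothetical owner/index/partition names. Rebuild one index partition
-- in parallel, count the extents that were allocated, then rebuild it
-- serially and compare.

ALTER INDEX edw.sales_fact_bix1 REBUILD PARTITION p2011_01 PARALLEL 20;

SELECT segment_name,
       partition_name,
       COUNT(*)             AS extents,
       SUM(bytes)/1024/1024 AS size_mb
FROM   dba_extents
WHERE  owner          = 'EDW'
AND    segment_name   = 'SALES_FACT_BIX1'
AND    partition_name = 'P2011_01'
GROUP  BY segment_name, partition_name;

-- Serial rebuild of the same partition; re-running the query above
-- should show far fewer extents and a much smaller size.
ALTER INDEX edw.sales_fact_bix1 REBUILD PARTITION p2011_01 NOPARALLEL;
```

If the extent count after the parallel rebuild is roughly DOP times the serial count, that would line up with what we're seeing in the segment sizes.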

Similar Messages

  • Index size increased after import

    hi, I mentioned the index creation problem already: I was trying to create the index from a script after importing the table. So I dropped the table, recreated the table and index from the script without data, and then started importing the data at table level with indexes=n from the production database.
    The sizes of the 2 indexes in production are 750MB and 1200MB. In the test DB, both indexes came out around double that, at 1200MB and 1700MB. I used the same script in both databases, and the export was a full database export taken with compress=y. Why did the index size increase? I created the indexes with initial and next extent sizes of 800MB and 100MB respectively. Could that be the reason?
    with regards
    ramya

    I gave initial 1000 and next 100 for the index; it is around 1.1GB in production, but here in test it became around 1.7GB. Even though PCTINCREASE is 50, it should come to around 1.3GB maximum. Will it cause any performance problems?
    wiht regards
    ramya
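    A quick way to compare where the space went in the two databases (a sketch; MY_INDEX is a placeholder name) is to look at the stored storage parameters and the extents actually allocated:

    ```sql
    -- Placeholder index name; run this in both production and test.
    -- Compare the storage parameters the index was created with...
    SELECT index_name, initial_extent, next_extent, pct_increase
    FROM   user_indexes
    WHERE  index_name = 'MY_INDEX';

    -- ...against the extents actually allocated to the segment.
    SELECT COUNT(*)             AS extents,
           SUM(bytes)/1024/1024 AS size_mb
    FROM   user_extents
    WHERE  segment_name = 'MY_INDEX';
    ```

    If the parameters match in both databases but the extent counts differ, the difference is in how the extents were allocated rather than in the index definition.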

  • Index size increases than table size

    Hi All,
    Let me know the possible reasons why an index can be larger than its table, and why in some cases it is smaller. ASAP, please.
    Thanks in advance
    sherief

    hi,
    The size of an index depends on how inserts and deletes occur.
    With sequential (monotonically increasing) keys, when records are deleted randomly the space will not be reused, because all new inserts go into the leading leaf block.
    When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of dead space.
    This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant, with a constant low percentage of free space. In that case it is hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how it is used.
    If numerous full index (or range) scans are being done, the index should be rebuilt to reduce the number of leaf blocks read. This should be done before it significantly affects the performance of the system.
    If single-key index accesses are being done, the index only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
    Here is an example of how an index can become larger than its table:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1703936 208 13
    INDEX 2097152 256 16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 4063232 496 31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 2752512 336 21
    SQL>

  • Photoshop Liquify brush size increase/decrease with alt+ctrl drag doesn't work anymore...

    Hi :-)
    I have installed the last Update to Photoshop CS6 13.0.4...
    Now I can't increase/decrease the brush size in the Liquify dialog anymore (before I installed the update, it worked fine with alt+ctrl and drag)...
    If I press alt+ctrl, the "Cancel" button is shown as "Revert", but the brush resizing doesn't work... :-/
    I use a Mac on OS X 10.6.8...
    But it seems it has to do with the system... because at home I use 10.8.3 and there everything works fine with the 13.0.4 update!!!


  • Folio article file size increases massively with more buttons

    Hello,
    we are about to finish up a DPS project and now have some serious problems with the folio sizes.
    We did all the usual things to make folios smaller (cropped every image to the trimmed size, pdf articles, vektor overlays etc.)
    So we made some tests and discovered a really strange behavior:
    We have one single article with a fullscreen MSO containing 7 picture states, and seven buttons (with the same pictures as thumbnails) that lead to the appropriate states.
    First we tried experimenting with deleting images from the MSO, but this had no effect on the file size. But when we deleted the seven buttons, the file size changed from 10MB to 52MB!
    The size of all the assets (images and ID file) for this article is only 12MB.
    Is this a known bug? Is there a way around it? It's kind of crazy when an uploaded folio is 4 times the size of all its assets?!
    Thanks in advance,
    nils

    Hi Bob,
    sorry for my late reply.
    I have done some additional testing and came to the conclusion that no button is responsible for the big file sizes.
    It was the overlay setting after all (sorry for pointing in the wrong direction). When I switched the slideshow from "Raster" to "Vector", the file size decreased considerably. (I thought I had checked this, but I had actually only done it in the vertical articles.)
    However, I did not know that the export format has such an impact on file size. This one article decreased from 52MB to 17MB, and it contained only one slideshow with 7 (retina) images.
    Changing all the slideshows in our 50-article folio (about 35 of them) decreased the folio size from 576MB to 243MB!
    Thanks,
    nils

  • XML file size increases GREATLY with ![CDATA[

    I have several .xml files that provide info for a website of
    mine and was quite surprised to discover, while I was uploading
    them, that some of them were around 60-70kb, while others were 4kb
    or less. Knowing that the amount of content in all of them doesn't
    vary that much, I noticed that the biggest file sizes corresponded
    to the files using <![CDATA[ ]]>.
    I included a sample file code below. It's in Portuguese, but
    I think it's still obvious that this file could not be more than a
    few kb, and Dreamweaver (CS3) saves it with a whopping 62kb!
    I tried saving the exact same file in Text Edit and it
    resulted in a normal file size.
    Has anyone encountered similar situations? I'm guessing this
    is some sort of bug, that Dreamweaver puts some extra content -
    somewhere - because of the CDATA. Is there any sort of reason or
    fix for this?
    Thanks in advance.

    Ok... embarrassing moment. Just realized that DW CS3 is not guilty. In Leopard, in the file's Get Info panel, I changed the preferred application for opening the file, and the file's size changed depending on the application. Reverting back to DW CS3, it still showed the 60-70kb size, but deleting the file and saving a new one with the same name solved the problem.
    Sorry guys.

  • File sizes increase in StarOffice 8

    I've got a letter I send out about once a month, just changing dates in the contents for each edition. It has a linked .jpg file as background.
    In StarOffice 7 the .sxw file size is 8 Kb.
    In StarOffice 8 the size has leapt to 44 Kb, whether I save it in .sxw or .odt format.
    Why?

    I find that file sizes increase both with text documents and with spreadsheets, sometimes by a great deal. One of the reasons I switched from 5.2 to 7 was the much smaller file sizes, since I keep my weekly back-ups on floppies (and am a stingy sort of fellow generally). But I had to run 7 and 5.2 together, since 7 does not handle Sanskrit/Hindi founts (the SO "help" team just asked a whole lot of questions and then disappeared). So now I have (on Windows) 5.2, 7 and 8. I have still not checked if 8 handles these founts. If it does, I can remove 5.2. For the rest, 8 does not seem to offer any advantage over 7 -- not counting the data base, which I have still to look into. File size, in my view, goes against it.

  • Getting same index size despite different table size

    Hello,
    this question arose from a different thread, but touches a different problem, which is why I have decided to post it as a separate thread.
    I have several tables of 3D points.
    The points roughly describe the same area but at different densities, which means the tables are of different sizes. The smallest contains around 3 million entries and the largest around 37 million.
    I applied an index with
    CREATE INDEX <index name>
    ON <table name>(<column name>)
    INDEXTYPE is MDSYS.SPATIAL_INDEX
    PARAMETERS('sdo_indx_dims=3');
    My problem is that I am trying to see how much space the index occupies for each table.
    I used the following syntax to get the answer to this:
    SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb
    FROM user_segments us, user_sdo_index_metadata usim
    WHERE usim.SDO_INDEX_NAME = <spatial index name>
    AND us.segment_name = usim.SDO_INDEX_TABLE;
    (thanks Reggie for supplying the sql)
    Now, the curious thing is that in all cases, I get the answer
    SEGMENT_NAME SEGMENT_SIZE_MB
    LIDAR_POINTS109_IDX .0625
    (obviously with a different segment name in each case).
    I tried to see what an estimated index size would be with
    SDO_TUNE.ESTIMATE_RTREE_INDEX_SIZE
    And I get estimates ranging from 230MB in the case of 3 million records up to 2.9GB in the case of 37 million records.
    Does anyone have an idea why I am not getting a different actual index size for the different tables?
    Any help is greatly appreciated!!!
    Cheers,
    F.

    It looks like your indexes didn't actually create properly. Spatial indexes are a bit different to 'normal' indexes in this regard. A BTree index will either create or not. However, when creating a spatial index, something may fail, but the index structure will remain and it will appear to be valid according to the data dictionary.
    Consider the following example in which the SRID has a problem:
    SQL> CREATE TABLE INDEX_TEST (
      2  ID NUMBER PRIMARY KEY,
      3  GEOMETRY SDO_GEOMETRY);
    Table created.
    SQL>
    SQL> INSERT INTO INDEX_TEST (ID, GEOMETRY) VALUES (1,
      2  SDO_GEOMETRY(2001, 99999, SDO_POINT_TYPE(569278.141, 836920.735, NULL), NULL, NULL));
    1 row created.
    SQL> INSERT INTO user_sdo_geom_metadata VALUES ('INDEX_TEST','GEOMETRY',
      2     MDSYS.SDO_DIM_ARRAY(
      3     MDSYS.SDO_DIM_ELEMENT('X',0, 1000, 0.0005),
      4     MDSYS.SDO_DIM_ELEMENT('Y',0, 1000, 0.0005)
      5  ), 88888);
    1 row created.
    SQL>
    SQL> CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13249: SRID 88888 does not exist in MDSYS.CS_SRS table
    ORA-29400: data cartridge error
    Error - OCI_NODATA
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
    SQL> SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb,
      2  usim.sdo_index_status
      3  FROM user_segments us, user_sdo_index_metadata usim
      4  WHERE usim.SDO_INDEX_NAME = 'INDEX_TEST_SPIND'
      5  AND us.segment_name = usim.SDO_INDEX_TABLE;
    SEGMENT_NAME                     SEGMENT_SIZE_MB SDO_INDEX_STATUS
    INDEX_TEST_SPIND                           .0625 VALID
    1 row selected.
    SQL>
    When you ran the CREATE INDEX statement, did it say "Index created." afterwards, or did you get an error?
    Did you run the CREATE INDEX statement in SQL*Plus yourself, or was it run by some software?
    I suggest you drop the indexes and try creating them again, watching out for any errors. Chances are it's an SRID issue.

  • SSIS 2008 Rebuild Index Task increasing Log Size

    I am testing the SSIS 2008 Rebuild Index Task on a single database (db1). I shrank db1's log file to its initial size. I also checked the "Sort results in tempdb" box on the SSIS package.
    However, when I run the package, db1's log file grows to about 55 times its original size.
    When I run a rebuild of all the indexes on db1 with (SORT_IN_TEMPDB = ON), there is only a slight increase in the log file size (it does not even double the initial size).
    Is this an SSIS bug? Is the check box not actually sorting in tempdb?

    Arthur, can you please move this thread to the Database Engine forum? IMO this is what changed in 2008: index rebuild is fully logged from 2008 onwards, whereas in 2005 it was minimally logged. See the link below for more information:
    http://support.microsoft.com/kb/2407439/en-gb
    Now, about when sort in tempdb is used: the intermediate sort results used to build the index are stored in tempdb. When you rebuild without SORT_IN_TEMPDB (I guess your data and log files are on the same drive), the index build uses disk space on the drive where it resides, so it might look to you as if the log has grown. Is that so, am I correct?
    What query did you use to measure the log file size? Are you absolutely sure it was the log file that increased?
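    One way to verify which file actually grew (a sketch; db1 and dbo.SomeTable are placeholder names) is to snapshot the log and file sizes before and after the rebuild:

    ```sql
    -- Placeholder names. Snapshot log usage, rebuild, then compare.
    DBCC SQLPERF(LOGSPACE);              -- log size and percent used, per database

    USE db1;
    ALTER INDEX ALL ON dbo.SomeTable
        REBUILD WITH (SORT_IN_TEMPDB = ON);

    DBCC SQLPERF(LOGSPACE);              -- compare db1's row with the first snapshot

    SELECT type_desc, SUM(size) * 8 / 1024 AS size_mb
    FROM   sys.database_files            -- data vs. log file sizes for the current DB
    GROUP  BY type_desc;
    ```

    If the ROWS row grew rather than the LOG row, the growth came from the data files, not the transaction log.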

  • Error when creating index with parallel option on very large table

    I am getting a
    "7:15:52 AM ORA-00600: internal error code, arguments: [kxfqupp_bad_cvl], [7940], [6], [0], [], [], [], []"
    error when creating an index with parallel option. Which is strange because this has not been a problem until now. We just hit 60 million rows in a 45 column table, and I wonder if we've hit a bug.
    Version 10.2.0.4
    O/S Linux
    As a test I removed the parallel option and several of the indexes were created with no problem, but many still threw the same error... Strange. Do I need a patch update of some kind?

    This is most certainly a bug.
    From metalink it looks like bug 4695511 - fixed in 10.2.0.4.1
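    For the indexes that do create successfully without the parallel option, you can still get parallel query against them afterwards by altering the degree rather than building in parallel (a sketch; the table and index names are made up):

    ```sql
    -- Hypothetical names. NOPARALLEL at build time sidesteps the parallel
    -- CREATE INDEX code path; the ALTER afterwards only updates the
    -- dictionary DEGREE, so later queries can still use the index in parallel.
    CREATE INDEX big_tab_ix1 ON big_tab (col1) NOPARALLEL;
    ALTER INDEX big_tab_ix1 PARALLEL 8;
    ```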

  • Alter index rebuild with online option

    I created a spatial index with the following statement:
    SQL> create index A3_IX1_A on A67(GEOMETRIE) indextype is mdsys.spatial_index parameters ('sdo_non_leaf_tbl=TRUE; sdo_dml_batch_size =
    2 1000');
    I want to rebuild the index with the next statement:
    SQL> alter index ttnl.A3_IX1_A rebuild online parameters ('index_status=cleanup;sdo_dml_batch_size=1000');
    the result is the next error message:
    alter index ttnl.A3_IX1_A rebuild online parameters ('index_status=cleanup;sdo_dml_batch_size=1000')
    ERROR at line 1:
    ORA-29871: invalid alter option for a domain index
    Can someone help me?

    Hi,
    This is only supported in the next release (11.x).
    In releases up to 10.2, this functionality is not supported for spatial indexes.
    siva

  • Difference between Create Index without and with Parallel Clause

    Hi all.
    I want to know the difference between Create Index with Parallel clause and Create Index without Parallel clause.
    Any documentation.
    Thanks,
    Hassan

    Sure?
    dimitri@VDB> create table t parallel 3 as select * from all_objects;
    Table created.
    dimitri@VDB> set autotrace traceonly
    dimitri@VDB> select * from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 3050126167
    | Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT     |          | 40601 |  5075K|    50   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR      |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)| :TQ10000 | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR |          | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   4 |     TABLE ACCESS FULL| T        | 40601 |  5075K|    50   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    --------------------------------------------------------------------------------------------------------------
    Looks like PQ to me.
    alter table t noparallel;
    Table altered.
    dimitri@VDB> select * from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 1601196873
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      | 40601 |  5075K|   135   (1)| 00:00:02 |
    |   1 |  TABLE ACCESS FULL| T    | 40601 |  5075K|   135   (1)| 00:00:02 |
    --------------------------------------------------------------------------
    Same if we use an Index:
    dimitri@VDB> create index t_idx on t(object_name) parallel 3;
    Index created.
    dimitri@VDB> select object_name from t;
    40934 rows selected.
    Execution Plan
    Plan hash value: 4278805225
    | Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT        |          | 40601 |   674K|    50   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10000 | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    PX BLOCK ITERATOR    |          | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   4 |     INDEX FAST FULL SCAN| T_IDX    | 40601 |   674K|    50   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    -----------------------------------------------------------------------------------------------------------------
    Everything done on a single CPU (also only one core in the CPU) AMD Box.
    Dim

  • How do I increase stack size beyond 65532 with help of ulimit command

    How do I increase the stack size beyond 65532 with the help of the ulimit command?

    Are you already using "64-bit Kernel and extensions" as noted in this pane?

  • Why are frame sizes increasing after editing with Nik plug-ins?

    I’m using the Nik Collection of plug-ins with Aperture, and have discovered that the frame size of the resulting edits always ends up larger than the original, sometimes much larger.
    The original image starts as 4592 x 3448 but after editing, the longest side of the resulting image is typically over 5000, 6000 or even 7000 pixels.
    I recently worked on a cropped image that started at 4138 x 2759 and ended up at 7361 x 4908, after editing with a combination of Dfine, Viveza, Color Efex + Silver Efex. If I export a version during the editing process, and then carry out further editing, the frame size increases again.
    I’m exporting from Aperture using the TIFF - Original Size (16 bit) setting. 
    Can anyone tell me what’s going on here?  I’m concerned that the quality of the images is being compromised by the enlargement. 
    I'm using the latest set of Nik plug-ins, with Aperture 3.2.4 on a MBP running 10.6.8

    Keli
    In order to open Aperture in 32-bit mode by default you can right-click on the application icon, select "Get Info" and check the box marked "Open In 32-bit Mode" (or you can continue to run it by default in 64-bit mode and just get forced to re-open the app anytime you need to use plug-ins that can only run in 32-bit mode).
    hope this helps!
    Raf

  • File size increases dramatically after digital signature with Acrobat X

    I signed a PDF file using Acrobat X.
    File size of unsigned document: 200 k
    File size of signed document: 3200 k (more than ten times bigger)
    Any idea?
    BR
    Harald

    Finally I found the solution.
    The file size increase is a result of the embedded information used for long-term signature validation.
    If you do not need this feature, you can turn it off:
    Preferences > Security > Advanced Preferences > Creation tab
    disable "Include Signature’s Revocation"
    Help page: http://help.adobe.com/en_US/acrobat/X/pro/using/WS934c23d7cc8877da1172e0811fde233c98-8000.html
    BR
    Harald
