PDF size exceeds 50 KB

Hi all,
We are developing a smartform where the requirement is to convert the smartform output into PDF.
The present size of the PDF is 700 KB and we want to reduce it to 50 KB. We have tried changing the logo and the font, but that only reduced the size to 600 KB.
Any suggestions would be appreciated.
Thanks
Nitin Sachdeva

Hi
Just check if this program is helpful: RSTXPDFT4.
Regards,
Vishwa.

Similar Messages

  • PDF size exceeds email attachment limit

    Hi, I have some PDF files which exceed the size limit for an email attachment. Is there a way I can add them directly via iTunes to read on my iPad 2 via Adobe Reader, and if so, how? Any help is much appreciated...

    You can use Dropbox.

  • Reduce smartform generated PDF size

    Hi,
    How can I reduce the size of a smartform-generated PDF? I have tried removing a few tables, windows, the logo (10 KB), etc., but the size didn't vary much. I am using the new font 'CorpoS' in my smartstyle and the PT language (which has special characters).
    The current size of my PDF is 286 KB and I want it to be 20-50 KB.
    Please give me suggestions on how to reduce the size.
    Please treat this as urgent.
    Thanks in advance.
    Regards,
    Manjeera.

    Hello,
    please check SAP Note 843480.
    Kind regards
    Marcin

  • PDF size will increase dramatically with every submit.

    I have a PDF form designed using Adobe LiveCycle Designer ES2.
    It has a submit button which submits the form to the server (IIS and ASP.NET) using this JavaScript command:
    event.target.submitForm( {cURL: "http://server/ASPNETWebPage.ASPX", aPackets:["datasets","pdf"], cSubmitAs: "XDP"});
    On the server, in ASP.NET, I use the following code to extract the submitted "chunk" element and convert it from Base64 to a binary PDF file:
                ' Write the decoded PDF to disk.
                Dim fs As New System.IO.FileStream(mFormFileNameFolder, IO.FileMode.Create)
                Dim bw As New System.IO.BinaryWriter(fs)
                ' Get the chunk element from the submitted XML.
                Dim srChunk As New StringReader(mXML.GetElementsByTagName("chunk")(0).InnerXml)
                Do While True
                    Dim theChunkLine As String = srChunk.ReadLine()
                    ' Stop at the end of the chunk (empty line or end of stream).
                    If String.IsNullOrEmpty(theChunkLine) Then Exit Do
                    ' Decode each Base64 line and append the raw bytes to the file.
                    Dim buffer() As Byte = Convert.FromBase64String(theChunkLine)
                    bw.Write(buffer)
                Loop
                bw.Close()
                fs.Close()
    The above code is working fine, and the PDF is generated successfully.
    I have one problem.
    With every submit, the generated PDF size increases dramatically. I reported this to Adobe Support, and they confirmed that this is by design: with every submit, the previous PDF state is saved and the new state is added. That is why I get a huge PDF file.
    I was told that the only way to solve this problem is to submit the form as PDF ONLY, and after I save the PDF file on the file system, I then must use the Adobe service/process "exportData" to extract the XML data from the PDF.
    This is a really big change for me. I was hoping that there is a way to identify the latest PDF state from the chunk element.
    Any help will be greatly appreciated.
    Tarek.

    Thanks a lot C. Myers,
    Your explanation helped me understand what is happening.
    I have been following the same method for the past 4 years, and I was hit by this problem (OutOfMemoryException) only when some users started using images larger than 500 KB. Then I decided to report this problem.
    I was able to rewrite the code to convert from Base64 to binary using buffering (see the sketch at the end of this post):
    http://forums.asp.net/t/1662571.aspx/1?URGENT+Exception+OutOfMemoryException+thrown+when+when+converting+to+String+
    So far I am not getting OutOfMemoryExceptions, but the PDF size continues to grow with every submit. However, if all the images are smaller than 50 KB, the increase is not significant.
    Please allow me to ask this question:
    Is there a way to change the above code so that I can extract only the last version of the submitted PDF from the data stream's "chunk" element?
    Sooner or later, someone will notice that such PDF sizes are not logical. Even when the PDF does not have images, I have noticed some PDF sizes in the past (for a Staff Profile Data Collection form) of something like 15 MB! I was not able to figure out why, but now I understand: the user must have submitted the form for saving many times.
    Things are OK for now, but I will post back if this problem fires back.
    Tarek.
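
    For reference, here is a minimal Java sketch of the buffering idea mentioned above: decode the Base64 text as a stream with a fixed-size buffer instead of materializing one huge string. The class and parameter names are illustrative, not from the linked thread:

        // Illustrative only: stream-decode Base64 text into a PDF file so
        // memory use stays flat regardless of the embedded image sizes.
        import java.io.*;
        import java.util.Base64;

        public class ChunkToPdf {
            public static void decodeToFile(InputStream base64Text, File pdfOut) throws IOException {
                // The MIME decoder tolerates the line breaks that XDP inserts in the chunk element.
                try (InputStream decoded = Base64.getMimeDecoder().wrap(base64Text);
                     OutputStream out = new BufferedOutputStream(new FileOutputStream(pdfOut))) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = decoded.read(buf)) > 0) {
                        out.write(buf, 0, n); // fixed-size buffer, no giant String
                    }
                }
            }
        }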

  • Why is my PDF size from my InDesign export still large?

    I'm doing a class project that is a four-page, seven-image document with very little text. The project must be submitted as a PDF under 300 KB. However, when I export the document with the smallest-file-size preset and the compression resolution down to 72 dpi, I get a PDF that is 612 KB.
    All my classmates are able to get theirs under 300 KB, so I'm confused as to why mine is so large. I'm using 5 JPGs and 2 GIFs, and each takes up a 5in by 5in square in the document. They are relatively small pictures (avg. 250 KB) and I've even re-saved them all to be under 100 KB, but the PDF size only drops to around 500 KB. I transferred between InDesign CC on Mac and InDesign CS5 on Windows and wonder if that is a concern.

    Just wanted to share the results, in case someone comes across this forum in the future.
    In danegonzalez's four-page InDesign layout, seven black-and-white images were used. Three pages contained two 5" by 5" images accompanied by seven lines of text under each image, and the last page had one image-and-description set of similar size. An audit of the PDF in Acrobat showed that the images made up the majority of the file size.
    Since the images had quite a bit of detail, I figured that changing their file type would be the best way to create a more compact PDF while keeping the integrity of the images. So I cropped the images to size and saved them as 24-bit PNG files. Even though the PNG files were larger than the originals, they gave nice PDF results with InDesign's "Smallest File Size" default preset.
    I'm glad we were able to resolve this issue.

  • Essbase Error: Set is too large to be processed. Set size exceeds 2^64 tuples

    Hi,
    We are using OBIEE 11.1.1.6 with Essbase 9.3.3 as a data source. When I try to run a report in OBIEE, I get the error below:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 96002] Essbase Error: Internal error: Set is too large to be processed. Set size exceeds 2^64 tuples (HY000)
    But if I run the same query in the Excel add-in, I get just 20 records, so I am wondering why I get this error in OBIEE. Has anyone encountered the same issue?
    Thanks in advance,

    Well, if you want to export it, I think you have to do it manually.
    The workaround is to open your Aperture library by right-clicking it and choosing Show Contents...
    Then go into your project, right-click, Show Contents...
    In here there are subfolders named for the dates the pictures were added to those projects. If you open the subfolder and search for your picture's name, it should be in that main folder.
    You can just copy it out as you would any normal file to any other location.
    Voilà, you have manually exported your file.
    There is a very similar post that has been closed, but again, you can't export the original file that you are working on - FYI http://discussions.apple.com/thread.jspa?threadID=2075419

  • PDF Size is huge when inserting an image on every page

    Hello,
    I've got a little problem when inserting an image in a Crystal Reports report and then exporting it to PDF.
    We're using Crystal Reports XI, and we've designed the report with a sub-report that gets an image from a database field and puts this image on every page of the report (it's just a logo for every page).
    This image is a JPEG file of 32 KB.
    The exported report is about 1000 pages, and we're dealing with an issue because of that. Sometimes the report won't generate, giving an OutOfMemory exception (although we've set MinHeap/MaxHeap to 512M).
    We also find this curious:
    - A generated PDF report with the image on every page totals 100 MB (we would expect something like 32 KB * 1000 pages = 32000 KB / 1024 ≈ 31.25 MB?). Depending on the number of pages, this can give an OutOfMemoryError.
    - A generated PDF report without the image on every page totals 1.8 MB. No OutOfMemoryError problem here.
    Do you know of any way to optimize this, reducing the PDF size and avoiding the memory errors?
    Thank you very much.
    Eduardo Andrade (GEDI, S.A.)

    Hello Ted Ueda,
    Thank you for your answer, I was suspecting that...
    Since this is an obvious problem, do you know of a more efficient way of including an image in the report? For instance, include the image only on the first page, and on the rest of the pages just include a link to that image?
    I've searched in here: https://wiki.sdn.sap.com/wiki/x/JwBmBQ , but did not find anything close to this...
    Thank you.
    Best Regards,
    Eduardo Andrade
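
    As an aside: the structure Eduardo is asking about - store the logo once and have every page reference it - is exactly how shared image XObjects work in PDF. Here is a minimal Apache PDFBox 2.x sketch (not Crystal Reports; purely illustrative of the target structure, with made-up names and sizes):

        // Embed one image object and draw it on every page; the PDF stores
        // the JPEG bytes once, so file size barely grows with page count.
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.pdmodel.PDPage;
        import org.apache.pdfbox.pdmodel.PDPageContentStream;
        import org.apache.pdfbox.pdmodel.common.PDRectangle;
        import org.apache.pdfbox.pdmodel.graphics.image.PDImageXObject;

        public class SharedLogo {
            public static void main(String[] args) throws Exception {
                try (PDDocument doc = new PDDocument()) {
                    PDImageXObject logo = PDImageXObject.createFromFile("logo.jpg", doc);
                    for (int i = 0; i < 1000; i++) {
                        PDPage page = new PDPage(PDRectangle.A4);
                        doc.addPage(page);
                        try (PDPageContentStream cs = new PDPageContentStream(doc, page)) {
                            cs.drawImage(logo, 50, 760, 120, 40); // same XObject on each page
                        }
                    }
                    doc.save("out.pdf");
                }
            }
        }

    A PDF built this way stays close to the size of a single copy of the image no matter how many pages reference it; the 100 MB result suggests the export embeds a fresh copy per page.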

  • How can I reduce a photo book PDF size for online printing

    Hello aperture,
    I have been creating a large customer photo book (33 x 28 cm) with Aperture.
    I'm working with RAW data and the photo book has around 150 pages.
    My online printing office accepts only PDF/X-3:2002 files, and I can produce these with Adobe Acrobat X Pro.
    I print the book in Aperture and save it as an Adobe PDF, and then I can save that as PDF/X-3:2002. It works perfectly.
    My problem is that the online printing office accepts only 2 GB, and my PDF, which I've printed with Aperture, is around 7 GB.
    What can I do in Aperture to reduce the PDF size and get a smaller one?
    My Aperture export setting is 300 dpi.
    Thanks for your help,
    Best
    Andrea

    Actually, in LR4 and Camera Raw 7 in Photoshop CS, you CAN create a downsampled DNG by using the lossy compression method of converting to DNG. In the LR4 Export dialog, select DNG; then you have the ability to specify the long dimension or the size in megapixels. Note, a lossy DNG isn't un-demosaiced; it has been demosaiced but is still stored in linear gamma, so it's like a partially baked file, not a true unbaked raw file.
    Note, in Camera Raw 7, when you set up your save dialog, it's a bit different. You select Lossy DNG and then have a dropdown for size presets...
    A couple of things about the lossy DNG: you SHOULD name it so as to distinguish the downsampled DNG from your original DNG... you don't want to get into the situation where you overwrite your original. Also note that some things won't work totally as expected. For example, sharpening and noise reduction will work, but on a downsampled file. While the numbers will correlate, the actual effects will be a bit different because it's a downsampled file and more prone to over-sharpening. It all works, and the controls will work fine if you work at 100% zoom, but don't expect the high resolution your original file had.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, it was found this was due to data growth in a table that was stored in the KEEP pool.
    After the data growth, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data brought in as needed from the table.
    But I ran some tests and found that this is not the case: if the table size exceeds db_keep_cache_size, the KEEP pool seems not to be used at all.
    Is my inference correct here?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    Setup:
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.

    DB_KEEP_CACHE_SIZE=4M:
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed

    DB_KEEP_CACHE_SIZE=10M:
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed

    DB_KEEP_CACHE_SIZE=20M:
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
               1  rows processed

    Only with 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if db_keep_cache_size < table size, there is no caching for that table at all?
    Or am I missing something?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992

    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
    Rgds,
    Gokul
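
    A quick sanity check on the numbers (assuming the default 8 KB block size, which the transcript does not show): the table has roughly 1,940-1,977 blocks, and the pool sizes work out to

        4M  = 4096 KB / 8 KB  = 512 buffers   < ~1977 table blocks
        12M = 12288 KB / 8 KB = 1536 buffers  < ~1977 table blocks
        20M = 20480 KB / 8 KB = 2560 buffers  > ~1977 table blocks

    With 512 or 1,536 buffers, by the time a full scan reaches the end of the table it has already flushed the beginning, so the next scan does physical reads again. Only the 20M pool can hold all of the table's blocks at once, which matches the transcript showing 0 physical reads only at 20M.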

  • One of the BizTalk Server processes in the affected computer is being throttled for significant periods because of high database size exceeding the threshold

    Hello Experts,
    I have a complex BizTalk 2013 farm with 20 servers and 15 hosts. In my production environment, even when there is no traffic, I am getting throttling errors from SCOM for all the production hosts.
    Error : One of the BizTalk Server processes in the affected computer is being throttled for significant
    periods because of high database size exceeding the threshold
    I checked the following things:
    1. MsgBoxDB size 748732 KB
    2. Spool table size 53 MB
    3. Tracking DB size 26724 KB
    4. Host settings -- Message Queue Size = 100, MsgCount = 50000, Spool MP = 10, Tracking MP = 10
    5. Ran Message Box Viewer and did not find any errors related to DB size. (Which counter should I focus on in MBV?)
    Note -- for the DBs I am sharing the full backup size because it does not include the log file size.
    Please suggest where I should focus.
    Is SCOM reporting correctly, given that everything seems fine in BizTalk?
    Thanks
    Yagya
    https://www.mcpvirtualbusinesscard.com/VBCServer/card.aspx?tag=YagyaDattMishra&wa=wsignin1.0

    Hi Yagyam
    I remember this error from SCOM when you use the standard SCOM BizTalk pack.
    Check the event log of the server: do you see any errors there? This could give some clue to the root cause. Whenever you get this alert from SCOM, you should have some entries in the event log relating to the alert raised by SCOM.
    Is message processing by your BizTalk hosts normal? Run Performance Monitor to find the bottleneck and check whether there really is throttling.
    As mentioned in this blog, check the state of the SQL Agent on BizTalk's SQL database server; maybe try restarting it.
    http://blogs.msdn.com/b/timdel/archive/2008/11/19/why-i-love-scom.aspx

  • Adobe Acrobat XI stops working at "Subsetting embedded fonts" while reducing PDF size

    I am on Windows 7, attempting to reduce a PDF's size with Adobe Acrobat XI Standard & Pro. The application keeps timing out at the "Subsetting embedded fonts" portion, gives "Adobe Acrobat has stopped working", and then closes. The document is 275 pages. Is there something I can do to stop this?

    Hi Ricci,
    Since when have you been facing this issue? Did you try a system restore to a date before this problem occurred?
    Does Acrobat stop working only when you open this specific PDF file, or with any PDF file that you open?
    Regards,
    Rahul

  • PDF size of a scan

    Hello there,
    I was trying to find a solution to this, but even Mr. Google doesn't know much.
    My problem is that after upgrading to 10.6 and using the built-in Scan feature that came with Snow Leopard, the PDF scans are coming out too big. For example, a passport-size scan is 3.8 MB, and a scan of an A4 document can be anything between 4 MB and 6 MB per page. Even if I lower the DPI to 150, the size is over 4 MB.
    Is there a way of lowering the size of the PDF without using additional software? We are using an HP OfficeJet 8500 Pro and an HP Photosmart 7280. Both printers have the same problem: the PDF size is too big.
    Thank you,
    J

    How are you using OS X scanning?
    I do it through the Image Capture utility, and when I do that, Image Capture integrates with the software that came with the scanner and I can see and adjust all of the scanner's options that control the quality and file size of the scan. I have a different brand though (Epson).
    I get the Apple Image Capture window with a big preview and also a floating Epson window that has all of the scanner's controls.

  • Reducing PDF size: automatic reduction of the data points used to draw lines in a 2D axis system within a report

    Creating fancy PDF files for customers and other purposes is great. However, if the experimental data include many data points (>200,000), a 2D line graph ends up as a very big PDF file, especially when many pages are needed.
    Explanation:
    When I use lines to show experimental data in 2D plots, the size of my PDF file is directly influenced by the number of data points used. The more data points are used to draw lines within the graph, the bigger the exported PDF files of the report become.
    It would be great to limit the number of points used to draw a line, as can already be done with markers, without using the curve transformation option. Plotting a line from 200 data points is usually as good as showing the same line based on 200,000 data points, but the PDF size is significantly smaller. If this had to be done via the transformation option, a long-running script would be needed for each line to reduce the number of data points shown, so plotting within the report and refreshing the data would take very long. (A sketch of the kind of point reduction meant here follows below.)
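
    Here is a minimal Java sketch of the kind of reduction meant above - min-max decimation, which keeps the two extreme points of each bucket so the drawn line keeps its visual envelope. This is purely illustrative; it is not DIAdem script or API:

        // Illustrative min-max decimation: n points in, ~2*buckets points out.
        import java.util.ArrayList;
        import java.util.List;

        public class Decimate {
            // Returns {x, y} pairs; the plotted polyline looks like the original.
            static List<double[]> minMax(double[] x, double[] y, int buckets) {
                List<double[]> out = new ArrayList<>();
                int n = x.length;
                for (int b = 0; b < buckets; b++) {
                    int from = (int) ((long) b * n / buckets);
                    int to = (int) ((long) (b + 1) * n / buckets);
                    if (from >= to) continue;
                    int iMin = from, iMax = from;
                    for (int i = from + 1; i < to; i++) {
                        if (y[i] < y[iMin]) iMin = i;
                        if (y[i] > y[iMax]) iMax = i;
                    }
                    // Emit in x order so the line keeps its direction.
                    int first = Math.min(iMin, iMax), second = Math.max(iMin, iMax);
                    out.add(new double[]{x[first], y[first]});
                    if (second != first) out.add(new double[]{x[second], y[second]});
                }
                return out;
            }
        }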

    For some time now, DIAdem has optimized the size of exported PDF files along the lines suggested here. In principle, the PDF file is exported at a very high resolution, so you can display it in a reader at a very high zoom value (e.g. 6000%) to look into the details of your data. If you have a huge dataset, this can in fact lead to a bigger file size, because data points remain displayable at the high PDF resolution. But in general, DIAdem only saves the information in a PDF file that is really necessary - at a high resolution.

  • Java.lang.OutOfMemoryError: Requested array size exceeds VM limit

    Hi!
    I have this problem and I do not know how to resolve it:
    I've an Oracle 11gR2 database in which I installed the Italian network.
    When I try to execute a shortest-path algorithm (Dijkstra or shortestPathAStar) in a Java program, I get this error:
    [ConfigManager::loadConfig, INFO] Load config from specified inputstream.
    [oracle.spatial.network.NetworkMetadataImpl, DEBUG] History metadata not found for ROUTING.ITALIA_SPAZIO
    [LODNetworkAdaptorSDO::readMaximumLinkLevel, DEBUG] Query String: SELECT MAX(LINK_LEVEL) FROM ROUTING.ITALIA_SPAZIO_LINK$ WHERE LINK_LEVEL > -1
    *****Begin: Shortest Path with Multiple Link Levels
    *****Shortest Path Using Dijkstra
    [oracle.spatial.network.lod.LabelSettingAlgorithm, DEBUG] User data categories:
    [LODNetworkAdaptorSDO::isNetworkPartitioned, DEBUG] Query String: SELECT p.PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.LINK_LEVEL = ? AND ROWNUM = 1 [1]
    [QueryUtility::prepareIDListStatement, DEBUG] Query String: SELECT NODE_ID, PARTITION_ID FROM ROUTING.ITA_SPAZIO_P_TABLE p WHERE p.NODE_ID IN ( SELECT column_value FROM table(:varray) ) AND LINK_LEVEL = ?
    [oracle.spatial.network.lod.util.QueryUtility, FINEST] ID Array: [2195814]
    [LODNetworkAdaptorSDO::readNodePartitionIds, DEBUG] Query linkLevel = 1
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    [oracle.spatial.network.lod.LabelSettingAlgorithm, WARN] Requested array size exceeds VM limit
    [NetworkIOImpl::readLogicalPartition, DEBUG] Read partition from blob table: partition 1181, level 1
    [LODNetworkAdaptorSDO::readPartitionBlobEntry, DEBUG] Query String: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    Exception in thread "main" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
    I use the sdoapi.jar, sdomn.jar and sdoutl.jar stored in the jlib directory of the Oracle installation path.
    When I perform this query: SELECT BLOB, NUM_INODES, NUM_ENODES, NUM_ILINKS, NUM_ELINKS, NUM_INLINKS, NUM_OUTLINKS, USER_DATA_INCLUDED FROM ROUTING.ITA_SPAZIO_P_BLOBS_TABLE WHERE PARTITION_ID = ? AND LINK_LEVEL = ? [1181,1]
    I get the following result:
    BLOB NUM_INODES NUM_ENODES NUM_ILINKS NUM_ELINKS NUM_INLINKS NUM_OUTLINKS USER_DATA_INCLUDED
    (BLOB) 3408 116 3733 136 130 128 N
    Then the Java code I use is:
    package it.sistematica.oracle.spatial;
    import it.sistematica.oracle.network.data.Constant;
    import java.io.InputStream;
    import java.sql.Connection;
    import oracle.spatial.network.lod.DynamicLinkLevelSelector;
    import oracle.spatial.network.lod.GeodeticCostFunction;
    import oracle.spatial.network.lod.HeuristicCostFunction;
    import oracle.spatial.network.lod.LODNetworkManager;
    import oracle.spatial.network.lod.LinkLevelSelector;
    import oracle.spatial.network.lod.LogicalSubPath;
    import oracle.spatial.network.lod.NetworkAnalyst;
    import oracle.spatial.network.lod.NetworkIO;
    import oracle.spatial.network.lod.PointOnNet;
    import oracle.spatial.network.lod.config.LODConfig;
    import oracle.spatial.network.lod.util.PrintUtility;
    import oracle.spatial.util.Logger;
    public class SpWithMultiLinkLevel {

        private static NetworkAnalyst analyst;
        private static NetworkIO networkIO;

        private static void setLogLevel(String logLevel) {
            if ("FATAL".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FATAL);
            else if ("ERROR".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
            else if ("WARN".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_WARN);
            else if ("INFO".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_INFO);
            else if ("DEBUG".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_DEBUG);
            else if ("FINEST".equalsIgnoreCase(logLevel))
                Logger.setGlobalLevel(Logger.LEVEL_FINEST);
            else // default: set to ERROR
                Logger.setGlobalLevel(Logger.LEVEL_ERROR);
        }

        public static void main(String[] args) throws Exception {
            String configXmlFile = "LODConfigs.xml";
            String logLevel = "FINEST";
            String dbUrl = Constant.PARAM_DB_URL;
            String dbUser = Constant.PARAM_DB_USER;
            String dbPassword = Constant.PARAM_DB_PASS;
            String networkName = Constant.PARAM_NETWORK_NAME;
            long startNodeId = 2195814;
            long endNodeId = 3415235;
            int linkLevel = 1;
            double costThreshold = 1550;
            int numHighLevelNeighbors = 8;
            double costMultiplier = 1.5;
            Connection conn = null;

            // get input parameters
            for (int i = 0; i < args.length; i++) {
                if (args[i].equalsIgnoreCase("-dbUrl"))
                    dbUrl = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbUser"))
                    dbUser = args[i + 1];
                else if (args[i].equalsIgnoreCase("-dbPassword"))
                    dbPassword = args[i + 1];
                else if (args[i].equalsIgnoreCase("-networkName") && args[i + 1] != null)
                    networkName = args[i + 1].toUpperCase();
                else if (args[i].equalsIgnoreCase("-linkLevel"))
                    linkLevel = Integer.parseInt(args[i + 1]);
                else if (args[i].equalsIgnoreCase("-configXmlFile"))
                    configXmlFile = args[i + 1];
                else if (args[i].equalsIgnoreCase("-logLevel"))
                    logLevel = args[i + 1];
            }

            // opening connection
            System.out.println("Connecting to ......... " + Constant.PARAM_DB_URL);
            conn = LODNetworkManager.getConnection(dbUrl, dbUser, dbPassword);
            System.out.println("Network analysis for " + networkName);
            setLogLevel(logLevel);

            // load user-specified LOD configuration (optional),
            // otherwise the default configuration will be used
            InputStream config = (new Network()).readConfig(configXmlFile);
            LODNetworkManager.getConfigManager().loadConfig(config);
            LODConfig c = LODNetworkManager.getConfigManager().getConfig(networkName);

            // get network input/output object
            networkIO = LODNetworkManager.getCachedNetworkIO(conn, networkName, networkName, null);
            // get network analyst
            analyst = LODNetworkManager.getNetworkAnalyst(networkIO);

            double[] costThresholds = {costThreshold};
            LogicalSubPath subPath = null;

            try {
                System.out.println("*****Begin: Shortest Path with Multiple Link Levels");
                System.out.println("*****Shortest Path Using Dijkstra");
                linkLevel = 1;
                costThreshold = 5000;
                subPath = analyst.shortestPathDijkstra(new PointOnNet(startNodeId), new PointOnNet(endNodeId), linkLevel, null);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest path using Dijkstra");
            } catch (Exception e) {
                e.printStackTrace();
            }

            try {
                System.out.println("*****Shortest Path using Astar");
                HeuristicCostFunction costFunction = new GeodeticCostFunction(0, -1, 0, -2);
                LinkLevelSelector lls = new DynamicLinkLevelSelector(analyst, linkLevel, costFunction, costThresholds, numHighLevelNeighbors, costMultiplier, null);
                subPath = analyst.shortestPathAStar(new PointOnNet(startNodeId), new PointOnNet(endNodeId), null, costFunction, lls);
                PrintUtility.print(System.out, subPath, true, 10000, 0);
                System.out.println("*****End: Shortest Path Using Astar");
                System.out.println("*****End: Shortest Path with Multiple Link Levels");
            } catch (Exception e) {
                e.printStackTrace();
            }

            if (conn != null) {
                try { conn.close(); } catch (Exception ignore) { }
            }
        }
    }
    At first I created a two-link-level network with these commands:
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 5000, 'LOAD_DIR', 'sdlod_part.log', 'w', 1);
    exec sdo_net.spatial_partition('ITALIA_SPAZIO', 'ITA_SPAZIO_P_TABLE', 60000, 'LOAD_DIR', 'sdlod_part.log', 'w', 2);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 1, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    exec sdo_net.generate_partition_blobs('ITALIA_SPAZIO', 2, 'ITA_SPAZIO_P_BLOBS_TABLE', true, true, 'LOAD_DIR', 'sdlod_part_blob.log', 'w', false, true);
    Then I tried with a single-level network, but I got the same error.
    Can somebody help me, please?

    I found the solution to this problem.
    In the LODConfig.xml file I had:
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11g</partitionBlobTranslator>
    but when I changed it to
    <readPartitionFromBlob>true</readPartitionFromBlob>
    <partitionBlobTranslator>oracle.spatial.network.lod.PartitionBlobTranslator11gR2</partitionBlobTranslator>
    the application ran without the above-mentioned error.

  • I want to reduce PDF size to under 5 MB for mailing purposes; the optimizer and "reduce file size" options have not helped

    I want to reduce a PDF's size to under 5 MB for mailing purposes. Does anybody have another option to reduce it? I have used the optimizer and the "reduce file size" option, but they are not helpful.

    The optimizer can reduce space, but some things can't get smaller - text, for example. Play with the settings and examine the results of Audit Space Usage.
    Or give up. Even 5 MB is far too large for a bulk mailing. Instead, put it on your web site and mail a link - done!
