Very-large-scale searching in J2EE

I'm looking to solve a very-large-scale searching problem. I am creating a site
where users can search a table with five million records, filtering and sorting
independently on ten different columns. For example, the table might be five million
customers, and the user might choose "S*" for the last name, and sort ascending
on street name.
I have read up on a number of patterns to solve this problem, but anticipate some
performance issues. I'll explain below:
1) "Page-by-Page Iterator" or "Value List Handler"
In this pattern, it appears that all records that match the search criteria are
retrieved from the database and cached on the application server. The client (JSP)
can then access small pieces of the cached results at a time. Issues with this
include:
- If the customer record is 1 KB, then wide search criteria (e.g., last name =
S*) matching a million records will cause a 1 GB transfer from the database server
to the app server, and then 1 GB being stored on the app server, cached, waiting
for the user (each user!) to ask for the next 10 or 100 records. This is an
inefficient use of network and memory resources.
- 99% of the data transferred from the database server will not be used ... most
users flip through a couple of pages and then choose a record or start a new search.
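
For illustration, the shape of this pattern is roughly the following (a sketch only; the class and method names are illustrative, not from any particular framework):

import java.util.List;

public class ValueListHandler {
    private final List results;   // the entire matching result set, cached in memory

    public ValueListHandler(List allMatchingRows) {
        this.results = allMatchingRows;   // potentially ~1 GB for wide criteria
    }

    // Hand the client one small slice at a time.
    public List getPage(int pageNumber, int pageSize) {
        int from = Math.min((pageNumber - 1) * pageSize, results.size());
        int to = Math.min(from + pageSize, results.size());
        return results.subList(from, to);
    }
}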
2) Requery the database each time and ask for a subset
I haven't seen this formalized into a pattern yet, but the basic idea is this:
If a client asks for records 1-100 first (i.e., page 1), only fetch that many
records from the db. If the user asks for the next page, requery the database
and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
The query is re-performed, causing the Oracle server to do another costly "execute"
(bad on 5M records with sorting).
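
A minimal sketch of this requery approach in JDBC (the table, columns, and page size are illustrative, not from a real schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PageFetcher {
    private static final int PAGE_SIZE = 100;

    // Re-executes the query for every page and skips ahead with absolute().
    public static void fetchPage(Connection con, String prefix, int page)
            throws SQLException {
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            // Scrollable cursor so we can jump with absolute().
            ps = con.prepareStatement(
                "SELECT * FROM customer WHERE last_name LIKE ? ORDER BY street_name",
                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            ps.setString(1, prefix.replace('*', '%'));   // "S*" -> "S%"
            rs = ps.executeQuery();
            // Position on the last row of the previous page (rows are 1-based),
            // so the first next() lands on the first row of this page.
            if (page > 1 && !rs.absolute((page - 1) * PAGE_SIZE)) {
                return;   // past the end of the results
            }
            int shown = 0;
            while (shown < PAGE_SIZE && rs.next()) {
                // ... copy columns into a transfer object for the JSP ...
                shown++;
            }
        } finally {
            if (rs != null) rs.close();
            if (ps != null) ps.close();
        }
    }
}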
To solve this, I've been trying to enhance the second strategy above by caching
the ResultSet object in a stateful session bean. Unfortunately, this causes a
"ResultSet already closed" SQLException, although I ensure that the Connection,
PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
seen this on newsgroups ... it appears that WebLogic is forcing the Connection
closed. If this is how J2EE and pooled connections work, then that's fine ...
there's nothing I can really do about it.
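
For reference, the failing attempt looks roughly like this (a sketch; the JNDI name and schema are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class SearchBean implements SessionBean {
    private Connection con;         // pooled connection from the container
    private PreparedStatement ps;
    private ResultSet rs;           // dies when the container reclaims con

    public void ejbCreate() {}

    public void beginSearch(String lastNamePrefix) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyPool");
        con = ds.getConnection();
        ps = con.prepareStatement(
            "SELECT * FROM customer WHERE last_name LIKE ? ORDER BY street_name");
        ps.setString(1, lastNamePrefix);
        rs = ps.executeQuery();      // valid now, but not across calls
    }

    public void nextRow() throws Exception {
        // Fails on a later client call: the container has already returned
        // the pooled Connection, which closes rs underneath us.
        rs.next();
    }

    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}
}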
Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
it yet, but it wouldn't be a great solution as it would be using Oracle-specific
functionality (we are trying to be db-agnostic).
More information:
- BEA WebLogic Server 8.1
- JDBC: Oracle's thin driver provided with WLS 8.1
- Platform: Sun Solaris 5.8
- Oracle 9i
Any other ideas on how I can solve this issue?

Michael McNeil wrote:
<snip>

Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a
query per page, with iterative SQL that selects the next N rows beyond what was
selected last time. E.g.:
Let variable X be the highest key value you've seen so far. Initially it would
be the lowest possible value.
select * from mytable M
where ... -- application-specific qualifications...
and M.key > X
and 100 > (select count(*) from mytable MM where MM.key > X and MM.key < M.key and ...)
In English, this says: select all the qualifying rows higher than what I last saw, but
only those that have fewer than 100 qualifying rows between the last one I saw and them
(i.e., the next 100).
When processing this query, remember the highest key value you see, and use it for the
next query.
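
In JDBC, the paging loop might look like this (a sketch; the numeric key column and the omitted application-specific qualifications are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeysetPager {
    // Fetches the next page of up to 100 rows after lastKey and returns the
    // highest key seen, which the caller feeds back in for the next page.
    public static long nextPage(Connection con, long lastKey) throws SQLException {
        String sql =
            "SELECT * FROM mytable M " +
            "WHERE M.key > ? " +                              // beyond the last page
            "AND 100 > (SELECT COUNT(*) FROM mytable MM " +   // ...but only the next 100
            "           WHERE MM.key > ? AND MM.key < M.key) " +
            "ORDER BY M.key";
        PreparedStatement ps = con.prepareStatement(sql);
        long highest = lastKey;
        try {
            ps.setLong(1, lastKey);
            ps.setLong(2, lastKey);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                highest = rs.getLong("key");   // remember the highest key seen
                // ... render the row ...
            }
            rs.close();
        } finally {
            ps.close();
        }
        return highest;   // pass in as lastKey on the next call
    }
}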
Joe

Similar Messages

  • Anonymous trace is generating at a very large scale

    Hi,
    we're using Oracle EBS, but anonymous traces are generating with names like "fint_ora_26608_ANONYMOUS.trc". I don't know why they're getting generated. Kindly suggest why traces with names like *ANONYMOUS.trc are getting generated....
    Regds
    Rahul

    It is quite possible that tracing is enabled at the database level (in the init.ora file) or at the app level (using various profile options). Have these traces always been generated? If not, when did they start and what changed? You could possibly use OAM to check the history of all profile options that were changed.
    HTH
    Srini

  • Large Scale Digital Printing Guidelines

    Hi,
    I'm trying to get a better handle on the principles and options for creating the best large and very large scale prints from digital files.  I'm more than well versed in the basics of Photoshop and color management but there remain some issues I've never dealt with.
    It would be very helpful if you could give me some advice about this issue that I've divided into four levels.  In some cases I've stated principles as I understand them.  Feel free to better inform me.  In other cases I've posed direct questions and I'd really appreciate professional advice about these issues, or references to places where I can learn more.
    Thanks a lot,
    Chris
    Level one – Start with the maximum number of pixels possible.
    Principle: Your goal is to produce a print without interpolation at no less than 240 dpi.  This means that you need as many pixels as the capture device can produce at its maximum optical resolution.
    Level two – Appropriate Interpolation within Photoshop
    Use the Photoshop Image Size box with the appropriate interpolation setting (Bicubic Smoother) to increase the image size up to the maximum size of your ink jet printer.
    What is the absolute minimum resolution that is acceptable when printing up to 44”?
    What about the idea of increasing your print size in 10% increments? Does this make a real difference?
    Level three - Resizing with vector-based applications like Genuine Fractals?
    In your experience, do these work as advertised and do you recommend them for preparing files to print larger than the Epson 9900?
    Level four – Giant Digital Printing Methods
    What are the options for creating extremely large digital prints?
    Are there web sites or other resources you can point me to to learn more about this?
    How do you prepare files for very large-scale digital output?

    While what you say may be true, it is not always the case. I would view a 'painting' as more than a 'poster' in terms of output resolution, at least in the first stages of development. Definitely get the info from your printer and then plan to use your hardware/software setup to give you the most creative flexibility. In other words - work as big as you can (within reason, see previous statement) to give yourself the most creative freedom. Things like subtle gradations and fine details will benefit from more pixels, and can, with the right printer, be transferred to hard copy at higher resolutions (a photo-quality ink jet will take advantage of 600 ppi) if that's what you're going for.
    Additionally, it's much easier to downscale than to wish you had a bigger image after 100 hours of labor...

  • Very large HEAPDUMP files are generated when executing BI Web reports NW7.0

    Dear Gurus,
    I'm facing a new problem.
    When few users are working in Portal to execute BI Web reports and queries, the system stops and big files are generated in directory: /usr/sap/BWQ/DVEBMGS42/j2ee/cluster/server0
    I'm using AIX 5.3. The files are these:
    2354248 Sep 29 12:31 Snap0001.20080929.153102.766064.trc
    1028628480 Sep 29 12:32 heapdump.20080929.153102.766064.phd
    0 Sep 29 12:32 javacore.20080929.153102.766064.txt
    I was searching for any solution in SAP help and notes. I've read a lot of notes:
    SAP Note 1030279 - Reports with very large result sets-BI Java
    SAP Note 1053495 - Settings to get a heapdump with IBM JVM on AIX
    SAP Note 1008619 - java.lang.OutOfMemoryError in BEx Web Applications
    SAP Note 1127156 - Safety belt: Result set is too large
    SAP Note 723909 - Java VM settings for J2EE
    SAP Note 1150242 - Improving performance/memory in the BEX Analyzer
    SAP Note 950602 - Performance problems when you start a query in Java Web
    SAP Note 1021517 - NW 2004s BI Web memory optimization for large analysis item
    SAP Note 1044330 - Java parameterization for BI systems
    SAP Note 1025307 - Composite note for NW2004s performance: Reporting
    But still not having found an answer to solve this problem.
    In note 1030279 it is written:
    "We will provide an optimization of the memory requirement in the next Support Package Stack. With this optimization, you can display a report as "stateless", so that the system can then immediately release the memory that is required to set up the result set."
    I'm using Support Stack 15 for ABAP and Java, but I don't have more information about this problem or the stateless function in another note. And I don't know how I can use this STATELESS function in BI.
    Anybody have any idea to solve this problem?
    Thanks a lot,
    Carlos

    Hi,
    Heap dumps are generated when there is an imbalance in Java VM parameterization.
    Also please remove the parameter "-XX:+HeapDumpOnOutOfMemoryError" in Config tool, so that heap dumps will not be generated and fill up the disk space.
    My advice is to send the heap dumps to SAP for recommendations. Meanwhile check the SAP notes for Java VM recommendations.
    Regards
    Thilip Kumar
    Edited by: Thilip Kumar on Sep 30, 2008 5:58 PM

  • When printing from an online PDF, the page prints in extra large scale. How do I fix this?

    when printing from an online PDF, the page prints in extra large scale. How do I fix this?

    This can happen when Firefox has misread the paper size from the information supplied by Windows. Clearing it can involve finding some obscure settings, but here goes:
    (1) In a new tab, type or paste '''about:config''' in the address bar and press Enter. Click the button promising to be careful.
    (2) In the search box above the list, type or paste '''print''' and pause while the list is filtered
    (3) For each setting for the problem printer, right-click and Reset it. The fastest way is to right-click with the mouse and then press the r key on the keyboard with your other hand.
    Note: In a couple other threads involving Brother printers, the preference '''printer_printer_name.print_paper_data''' was set to 256 and when the user edited it to 1 that fixed the paper size problem. If you see a 256 there, you can edit the value by double-clicking it or using right-click > Modify.

  • Can iCloud be used to synchronize a very large Aperture library across machines effectively?

    Just purchased a new 27" iMac (3.5 GHz i7 with 8 GB and 3 TB fusion drive) for my home office to provide support.  Use a 15" MBPro (Retina) 90% of the time.  Have a number of Aperture libraries/files varying from 10 to 70 GB that are rapidly growing.  Have copied them to the iMac using a Thunderbolt cable starting the MBP in target mode. 
    While this works I can see problems keeping the files in sync.  Thought briefly of putting the files in DropBox but when I tried that with a small test file the load time was unacceptable so I can imagine it really wouldn't be practical when the files get north of 100 GB.  What about iCloud?  Doesn't appear a way to do this but wonder if that's an option.
    What are the rest of you doing when you need access to very large files across multiple machines?
    David Voran

    Hi David,
    dvoran wrote:
    Don't you have similar issues when the libraries exceed several thousand images? If not what's your secret to image management.
    No, I don't.
    It's an open secret: database maintenance requires steady application of naming conventions, tagging, and backing-up.  With the digitization of records, losing records by mis-filing is no longer possible.  But proper, consistent labeling is all the more important, because every database functions as its own index -- and is only as useful as the index is uniform and holds content that is meaningful.
    I use one, single, personal Library.  It is my master index of every digital photo I've recorded.
    I import every shoot into its own Project.
    I name my Projects with a verbal identifier, a date, and a location.
    I apply a metadata pre-set to all the files I import.  This metadata includes my contact inf. and my copyright.
    I re-name all the files I import.  The file name includes the date, the Project's verbal identifier and location, and the original file name given by the camera that recorded the data.
    I assign a location to all the Images in each Project (easy, since "Project" = shoot; I just use the "Assign Location" button on the Project Inf. dialog).
    I _always_ apply a keyword specifying the genre of the picture.  The genres I use are "Still-life; Portrait; Family; Friends; People; Rural; Urban; Birds; Insects; Flowers; Flora (not Flowers); Fauna; Test Shots; and Misc."  I give myself ready access to these by assigning them to a Keyword Button Set, which shows in the Control Bar.
    That's the core part.  Should be "do-able".  (Search the forum for my naming conventions, if interested.)  Of course, there is much more, but the above should allow you to find most of your Images (you have assigned when, where, why, and what genre to every Image). The additional steps include using Color Labels, Project Descriptions, keywords, and a meaningful Folder structure.  NB: set up your Library to help YOU.  For example, I don't sell stock images, and so I have no need for anyone else's keyword list.  I created my own, and use the keywords that I think I will think of when I am searching for an Image.
    One thing I found very helpful was separating my "input and storage" structure from my "output" structure.  All digicam files get put in Projects by shoot, and stay there.  I use Folders and Albums to group my outputs.  This works for me because my outputs come from many inputs (my inputs and outputs have a many-to-many relationship).  What works for you will depend on what you do with the picture data you record with your cameras.  (Note that "Project" is a misleading term for the core storage group in Aperture.  In my system they are shoots, and all my Images are stored by shoot.  For each output project I have (small "p"), I create a Folder in Aperture, and put Albums, populated with the Images I need, in the Folder.  When these projects are done, I move the whole Folder into another Folder, called "Completed".)
    Sorry to be windy.  I don't have time right now for concision.
    HTH,
    --Kirby.

  • To count number of occurrences of a char in a very large string

    Hi All,
    I'd like to count the number of occurrences of a char in a very large string.
    for example
    char ch - 'c'
    string str - "practical example is always needed"
    c occurred 2 times in the above string.
    Thanks,
    J.Kathir

    > string str - "practical example is always needed"
    Try something like this:
            String str = "practical example is always needed";
            char search = 'c';
            int occurrence = 0;
            for (int i = 0; i < str.length(); i++) {
                // Use a method from the String class to get a
                // char from a specific location in the String.
                // See: http://java.sun.com/j2se/1.5.0/docs/api/java/lang/String.html
                if (str.charAt(i) == search) {
                    occurrence++;
                }
            }
            System.out.println("Occurrence of char " + search + " in \"" + str + "\" is: " + occurrence);

  • My screen got very large - so much so that I couldn't access some icons - I turned it off and back on and now it's so big I can't even put my code in to unlock it

    My screen got very large - all the icons are huge where I can't see them all.  I turned it off and back on and now it's so large I can't even get to all the numbers to unlock it... help

    This question was asked and answered about a minute ago.
    This is asked and answered very often.  The forum search bar is on the right side of this page.
    It is also covered in the manual - zoom feature.
    Double tap with three fingers.
    iPhone User Guide (For iOS 4.2 and 4.3 Software)

  • LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)

    We are experiencing this message when compiling an ABAP Web Dynpro application: "LOAD UNIT OF COMPONENT IS VERY LARGE (GENERATION LIMIT)"
    When checking the generation limits in the context menu, I have determined our size of generated load in bytes is too big.
    The documentation of recommendations is to restructure the program. I am not clear what this means and how this would reduce the Generation Load in bytes. Any ideas would be appreciated.

    > How should we reorganize the application and at the same time ensure smooth and user-friendly handling?
    We only want to use one Explorer window.
    Using multiple components doesn't mean that the user will notice any difference.  Component usages can be embedded within one another.  Using the ALV for instance is a perfect example of a component usage.
    >- Even the SAP reference application "LORD_MAINTAIN_COMP" (37 views) is way too big, according to the recommendation. Is there a better example from SAP?
    I wouldn't consider LORD_MAINTAIN_COMP a reference application.  It was one of the very first WDAs shipped by SAP before we learned some of these lessons ourselves.  Have a look at the guidelines for Floorplan Manager if you are on 7.01. The FPM provides a very good (and well used by SAP) framework for building large scale WDA applications.
    >- How could a complex transaction be built and at the same time stay in the green limit area (< 500k)?
    As described the usage of multiple components avoids the generation limit and is recommended for large scale applications.
    >- What at all is the problem in loading 2 Megabytes of data into memory? Could you please describe the technical background in more detail?
    It has nothing to do with loading 2 MB into memory.  It has to do with the generation load size in the VM for the generated class that represents your WDA Component.  The ABAP compiler and VM have limits (like all VMs and compilers) on total load size and the maximum size for operations and leaps. Generated code can be extremely verbose.  Under normal conditions, these load limits are almost never reached in human-created classes.
    In 7.02 we backported the 7.20 ABAP compiler - which, in addition to being rewritten to support multipass compilation, also increases some of the load limits.  However the general recommendation about componentization still stands.  Componentization of your WDA application improves maintainability and reusability over time.  My personal rule is that if you are getting between 10-12 views in your Component, it is time to think about breaking out into multiple components.
    >- Is there a maximum load size, which would lead to an error (reject of generation)?
    Yes there is.  However the workbench throws warnings well in advance.  At some point it won't even let you add more views to a component. However if you continue to add content to the existing views, you can reach a point where generation fails.

  • Editing very large images

    I have several very large images that require minor editing. The tasks that I require are only basic rotation, cropping, and maybe minor color adjustments.
    The problem is that the images range from 20780 x 15000 px down to 11150 x 2600 px.
    With images this size, when I try to perform any operation on the entire image, like a 3 deg rotation, Fireworks simply times out. Maybe it is because it can't generate the preview image at that size, maybe it simply can't calculate the image at that size. If it is just the preview image, is there a way to lower the quality (say, JPEG quality of 5%) of the preview image only, but when I export the image, I can keep it at 100%?
    Thank you,
    -Adam
    (Yes, I did search, but the forums search seemed mildly useless and engine results consistently returned nothing)

    Fireworks is not designed to work with images of this size. It's a screen graphics application. Photoshop would be a better option here, or other software designed for working with high resolution files.
    Jim Babbage

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: If the image is the same and only the text is different why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4Ghz, 1GB Ram, 256MB Video Card) I have taken a postcard pdf (the backside) placed it in a document, then I draw a text box. Then I select a data source and put the fields I wish to print (Name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created with four postcards per page and the mailing data on each card I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB but when it spools I can only print 25 pages at a time and that still takes FOREVER. So again my question is... If the image is the same and only the text is different why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document and still the same result. I have also tried creating a merged document with just the addresses then adding the PDF on the Master page afterward but again, huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because Graphics Device Interface (GDI) does not compress raster data when the GDI processes EMF spool files and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print processor-based features, such as the following:
    N-up
    Watermark
    Booklet printing
    Driver collation
    Scale-to-fit
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Applying Oil Paint Filter to Large Scale Images

    I need to apply the effects available from the Oil Paint filter to very large, 80 MB images. The filter works exactly as I need it to on small images, but not at large scale. A comment I heard in a Lynda.com video on the Oil Paint filter mentioned that the filter does not work well on large images. However, I REALLY need this, even if I need to pay someone to write a program that can do it! Does anyone know if/how I can get the filter to work for large images and/or if there is a third-party plug-in that will provide the same results? Having this filter work on large-scale images could make or break a business idea I have, so finding a solution is extremely important to me.

    What's the problem you're having with applying it to an 80 MB image?  Is it that the effects don't scale up enough?
    Technically it can run on large images if you have the computer resources...  I've just successfully applied it to an 80 MB image, and with the sliders turned all the way up it looks pretty oil painty, though it kind of drops back into a realistic looking photo when zoomed out...
    If it's just that the sliders can't go high enough, given that it's a very abstract look you're trying to achieve, have you considered applying it at a downsized resolution, then upsampling, then maybe applying it again?  This is done that way...
    Oh, and by the way, Oil Paint has been removed from Photoshop CC 2014, so if you're planning a business based on automatically turning people's large photos into oil paintings you should assume you'll be stuck with running the older Photoshop version.
    -Noel

  • Acrobat XI makes my icons very large and the program very large. It is like everything is super big. Then it really does not work well to view files. What can I do?

    My Acrobat XI enlarges my desktop, and the program is also very large.  I cannot see the whole screen because it is so large.  When I end Acrobat everything returns to normal.  I just want to use Acrobat XI to read pdf files.

    Hi Pat, that is the problem I am having.  I have tried to change my scale for screen settings to 100%, but every time I change it, the next time I open the program, it reverts to system settings.  It is not saving the new change I made.  I have looked all over for a save command but there isn’t any.  I even went to my regedit settings and did exactly what the instructions say.  Still the same problem.  Everything is at least 200%.  Once I exit the Reader, everything reverts to normal.  What am I doing wrong?  Or what else can I do?
    Thanks.
    Paul Bardotz

  • Working w/ large scale AI files

    What is the best way to work with large-scale (wall mural) files that contain many gradients, shadows, etc.? Any manipulation takes considerable time to redraw. Files are customer-created, and we are then manipulating from there in-house. We have some fairly robust machines (Mac Pro towers - 14 GB RAM, RAID scratch disk). I would guess there is a way to work in a mode that does not render effects, allowing for faster manipulation? Any help would be great - and would considerably help reduce our time.
    First post - sorry if I did something wrong, question/title-wise.
    THX!

    In a perfect world, the customers would be creating their Illustrator artwork with the size & scale of the final image in mind. It's very difficult to get customers (who often have no formal graphic design training and are self taught at learning Illustrator & Photoshop) to think about the basic rule of designing for the output device -a graphics 101 sort of thing.
    Something like a large wall mural, especially one that is reproduced outdoors, can get by just fine with Illustrator artwork using raster effects settings of 72ppi or even less than that. Lots of users will have a 300ppi setting applied. 300ppi is fine for a printed page being viewed up close. 300ppi is sheer overkill for large format use.
    Mind you, Adobe Illustrator has a 227" X 227" maximum artboard size, so anything bigger than that will have to be designed at a reduced scale with appropriate raster effects settings applied. For example, I'll design a 14' X 48' billboard at a 1" = 1' scale. A 300ppi raster effects setting is OK in that regard. When blown up in a large format printing RIP the raster based effects have an effective 25ppi resolution, but that's good enough for a huge panel being viewed by speeding vehicles on city streets or busy highways.
    Outside of that, the render speed of vector-based artwork will depend on the complexity of the artwork. One "gotcha" I tend to watch for is objects with large numbers of anchor points (like anything above 4000 points). At a certain point the RIP might process only part of the object or completely disregard it.

  • Print comes out very large and oversize on my paper

    I have a Photosmart 6180.  When I print my stuff, suddenly everything is coming out oversized and very, very large.  One page takes about 3 sheets to print.  How can I correct this setting?  My emails will print normal size but not the stuff from the website.

    Hi Gatorjack61,
    What operating system are you using ?
    What about if you print a photo that is saved on your computer, does it look normal or oversized ?
    You can try the steps below which were suggestions from banhien and Pushkarpathania in this post.
    You can shrink before sending to printer (well also enlarge).
    Just use Print > Print Preview
    Under the Shrink To Fit drop-down, just select the right % for you.
    Or you could try Pushkarpathania's suggestion:
    •Click the arrow next to the Print button, click Print Preview, and then click Custom in the Change Print Size list. Specify how large you would like the webpage to be printed by setting a percentage in the Custom Size text box. This will enlarge the printed size of the entire webpage, but it might result in some of the webpage being cut off in the printed document. For more information, see print preview.
    1. Open Internet Explorer by clicking the Start button . In the search box, type Internet Explorer, and then, in the list of results, click Internet Explorer.
    2. Click the arrow next to the Print button , and then click Print Preview.
    3. In the Change Print Size list (Shrink To Fit), click Custom.
    4. In the Custom Size text box, specify how large you want the webpage to be printed by setting a percentage. This will enlarge the printed size of the entire webpage, but it might result in some of the webpage being cut off in the printed document.
    Hope this is helpful.
    I'm a printer tech with HP.
