PL/SQL: Optimize for speed

I've got a very simple function that I need to use to order the result of a SELECT.
It takes two parameters. One is the current code (which changes with the values of the SELECT); the other does not change.
I'm using Oracle 8.1.5 under Linux and I'd like to optimize this for speed.
Any suggestions?
Thanks a lot!
FUNCTION OrdenarNomenclator2 (CurrentCode IN VARCHAR2, CodeToFind IN VARCHAR2) RETURN NUMBER IS
BEGIN
  IF CurrentCode LIKE CodeToFind || '%' THEN
    IF CurrentCode = CodeToFind THEN
      RETURN 20;
    ELSE
      RETURN 10;
    END IF;
  ELSE
    RETURN 0;
  END IF;
END;

Ferran,
There is an overhead in calling stored functions. Try this as an expression in your select:
decode(CurrentCode, CodeToFind, 20, decode(substr(CurrentCode, 1, length(CodeToFind)), CodeToFind, 10, 0))
which seems quite fast.
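As an illustrative aside (hypothetical Python, not Oracle), the inline DECODE expression computes the same ranking the stored function does, without a per-row PL/SQL call:

```python
def rank(current_code: str, code_to_find: str) -> int:
    """Same ranking the DECODE expression computes inline:
    20 for an exact match, 10 for a prefix match, 0 otherwise."""
    if current_code == code_to_find:
        return 20
    if current_code[:len(code_to_find)] == code_to_find:
        return 10
    return 0

codes = ["AB", "ABC", "XY"]
# Order values by descending rank, as the ORDER BY clause would.
ordered = sorted(codes, key=lambda c: -rank(c, "AB"))
```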

Similar Messages

  • Optimize for Speed = LV runtime error in file CCGArrSupport2.c (LV2009 / MCB2400)

    Hello,
    I have a MCB2400 and LV2009 and the normal Code-Generation works fine. But if I try the Run-Time Option: "Optimize for Speed" I always get this Error:
    "LV runtime error in file ...\CCGArrSupport2.c at line 2621: 6 3"
    I already tried a workaround, which helped somebody for blackfin: http://forums.ni.com/ni/board/message?board.id=420​&thread.id=811
    But it didn't help in my case.
    bye
    amin 
    Solved!
    Go to Solution.

    Hi,
    you are right. In the example it was just a problem with the output.
    Here is one small part of the project that produces the runtime error on its own (only when the optimize option is activated).
    The black node is a Sub-VI.
    bye & thanks
    amin
    Message Edited by aminat on 10-23-2009 09:02 AM
    Attachments:
    PUB.zip ‏15 KB

  • How to Optimize SCXI 1600 for speed with Thermocouples

    I'm working on a data acquisition system for my engineering firm and I'm trying to find a way to use our new thermocouple system as fast as possible.
    The requirements for the DAQ process are:
    Read 32 voltage channels from a PCI-6071E card
    Read 32 thermocouple channels from a SCXI-1600 with an 1102C accessory
    Complete the entire operation in under 5ms (this is so other parts of the program can respond to the incoming data quickly and trigger safety protocols if necessary)
    Using LabVIEW 7.1 and MAX 4.4, I've got the voltage channels working to my satisfaction (with Traditional DAQ VIs), and the rep rates I measure are around 1 ms (I put the DAQ code in a loop, read the millisecond timer on every iteration, and calculate the average time between loop executions). I have been trying to get similar performance from the thermocouple channels using DAQ Assistant and DAQmx. Some of the problems I've encountered are:
    Very slow rep rates with 1-sample and N-sample acquisition modes (300-500ms)
    Good rep rates when I switch to continuous mode, but then I get buffer overflow error -200279.
    When I attempted to correct that error by setting the DAQmx buffer to overwrite unread data and only read the most recent sample, the calculated sample rate went to 20ms.  It was around 8ms when I left the error unhandled and continued acquisition.
    At this point I'm out of ideas and am just looking for something to try and optimize the DAQ process for speed, as much as is possible.
    Thank you for any help.

    I would be interested in checking out your code to see if there is anything I can recommend changing. However, I do have a few general ideas for improving your performance. These recommendations are purely guesses at what could be slowing your program down, because I am not sure how exactly you have everything set up.
    -Are you setting up the task and closing the task each time you read from your DAQ card? The way around this is to have only the DAQmx Read VI inside the while loop, so no time is spent opening and closing the task on each iteration.
    -Try using a Producer/Consumer architecture. This architecture uses queues and separates acquisition from post-processing. Here is a link on how to set up this architecture and some information on when to use it:
    Application Design Patterns: Producer/Consumer
    http://zone.ni.com/devzone/cda/tut/p/id/3023 
    Message Edited by Jordan F on 02-06-2009 04:35 PM
    Regards,
    Jordan F
    National Instruments

  • PL/SQL forall operator speeds 30x faster for table inserts

    Hi!
    I found this quote on a website. According to that site, "Loading an Oracle table from a PL/SQL array involves expensive context switches, and the PL/SQL FORALL operator speed is amazing." But I understood the opposite - that a plain SQL insert is always faster than this. There may be exceptions, but not always. Please go through the link --
    [url http://www.dba-oracle.com/oracle_news/news_plsql_forall_performance_insert.htm]PL/SQL FORALL operator speed is amazing In Oracle .
    Anyone who has knowledge regarding this - please share your views or opinions here to clear my doubt. I'll be waiting for the reply. Thanks in advance.
    Regards.
    Satyaki De.

    Hi Satyaki,
    The forall bulk processing is always faster than the row by row processing as shown in this test.
    It is faster because you have much less executions of the insert statement and therefore less context switches.
    I must mention that bulk fetching 100,000 records at a time isn't always a great idea for production code, because you'll use a large amount of PGA memory.
    I guess you are confused with the "insert into products select * from products" variant. This pure SQL will beat the forall variant.
    From slow to fast:
    1. row by row dynamic SQL without using bind variables (execute immediate insert ...)
    2. row by row dynamic SQL with the use of bind variables (execute immediate insert ... using ...)
    3. row by row static SQL (for r in c loop insert ... values ... end loop;)
    4. bulk processing (forall ... insert ...)
    5. single SQL (insert ... select)
    Hope this helps.
    Regards,
    Rob.
    Message was edited by:
    Rob van Wijk
    Way too late ...
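    To illustrate why fewer statement executions win (outside Oracle), here is a minimal sketch using Python's sqlite3, with a made-up table: executemany binds the whole array in one call, roughly analogous to FORALL, versus a row-by-row loop that issues one statement execution per row.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
    rows = [(i, f"item{i}") for i in range(1000)]

    # Row-by-row: one execute call (one statement round-trip) per row,
    # analogous to the PL/SQL loop with a single-row INSERT.
    for r in rows:
        conn.execute("INSERT INTO products VALUES (?, ?)", r)

    # Bulk: one call binds the whole array, analogous to FORALL.
    conn.executemany("INSERT INTO products VALUES (?, ?)", rows)

    count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
    ```

    Timing the two halves on a larger row count shows the same shape of difference the thread describes, though the absolute numbers depend entirely on the database.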

  • SQL queries for finding values in 4 different tables

    I need to have certain queries to find specific data in this table, this is just an example table, but I will use the same ideas for my actual website and database.
    customers (customerID: integer, fName: string, lName: string)
    items (itemID: integer, description: string, price: float)
    orders (orderID: integer, itemID: integer, aID: integer, customerID: integer, date: date)
    addresses (aID: integer, housenum: integer, streetName: string, town:string, state: string, zip:integer)
    Values I need to find are
     List the town, first name, and last name of any customer who has shipped an item to the same town as another customer.
    Return the average amount of money each customer spent in March of 2013. (Note that the answer will be a single number
    List the first and last names of all customers who have had the same item shipped to at least two different addresses.
    List the top two states that have generated the most total revenue and the revenue they generated
    I did try a few different queries, for #3 I tried 
    SELECT customers.fName,
    customers.lName,
    COUNT(orders.itemID) AS `total items with diff address >= 2`
    FROM customers
    JOIN (SELECT customerID,itemID,
    COUNT(DISTINCT aID) AS diff_address
    FROM orders
    GROUP BY orders.itemID
              HAVING diff_address >= 2
             ) AS orders
          ON orders.customerID = customers.customerID 
    but I only got 1 result, and I do not think that's correct.
    Thanks for the help and I appreciate you taking the time to help me

    Why not post the sample data + desired result? Always state what version you are using.
    SELECT lname, A.aID, COUNT(*) cnt
    FROM customers C JOIN orders O ON C.customerID = O.customerID
    JOIN addresses A ON A.aID = O.aID
    GROUP BY lname, A.aID
    Sorry  cannot test it right now...
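    Since the reply above is untested, here is a hedged illustration of the grouping question #3 needs (Python's sqlite3 with made-up sample data, not the poster's actual rows): group by customer and item, then keep groups shipped to at least two distinct addresses.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customers (customerID INTEGER, fName TEXT, lName TEXT);
    CREATE TABLE orders (orderID INTEGER, itemID INTEGER, aID INTEGER, customerID INTEGER);
    INSERT INTO customers VALUES (1,'Ann','Lee'),(2,'Bob','Kim');
    -- Ann ships item 7 to two different addresses; Bob does not.
    INSERT INTO orders VALUES (1,7,100,1),(2,7,200,1),(3,8,300,2);
    """)
    rows = conn.execute("""
        SELECT c.fName, c.lName
        FROM customers c
        JOIN orders o ON o.customerID = c.customerID
        GROUP BY c.customerID, o.itemID
        HAVING COUNT(DISTINCT o.aID) >= 2
    """).fetchall()
    ```

    The key point is the HAVING clause on COUNT(DISTINCT aID): without it, every customer comes back, which may explain the single unexpected result the poster saw.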
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Batch fetch optimization for lazy collections

    Hi,
    I feel like this question must have been asked by somebody already but couldn't find any answers in the forum.
    We're trying to evaluate migrating from Hibernate to Kodo JDO. One feature we use extensively in Hibernate is the "batch fetch optimization for lazy collections".
    For instance, there's a class User with a collection "Set<String> permissions". In the DB, there are tables USER(id, name, etc.) and USER_PERMISSIONS(user_id, permission) with a one-to-many relationship between them.
    Suppose the code is the following:
    query = ....;
    List<User> users = (List<User>) query.execute();
    for (User user : users) {
        println("User: " + user.getName());
        println("Permissions: " + user.getPermissions());
    }
    I've set EagerFetchMode to parallel. For the field "permissions" I had to specify default-fetch-group="true" because I wanted this to happen lazily. When I look through the logs, I see that the permissions SQL query is executed for each user.
    With Hibernate we were able to set up the mapping so that permissions are not fetched at first, but when they are requested for the first user (user.getPermissions()) they are automatically selected for several users in one query using a SQL IN clause (similar to the parallel mode).
    Is it possible to recreate the same in Kodo JDO? (JPA?) If it is, this would greatly simplify migration. Please note that we can't use explicit fetch groups for this, because we don't know ahead of time which collection will be navigated (in real life the User class has many relationships).
    thanks in advance,
    Dimitry

    Kodo doesn't have a direct analog for that behavior. The typical way to solve that problem is to keep the field out of the default fetch group, and then explicitly include the desired field in the current fetch group at runtime. In pure JDO, you can do this by creating a separate fetch group that includes the relationship field, and designating that the query should use that fetch group:
    <pre>
    query = ...;
    query.getFetchPlan().addGroup("relationshipGroup");
    List<User> users = (List<User>) query.execute();
    </pre>
    Kodo has JDO extensions that allow you to do this a bit more easily:
    <pre>
    query = ...;
    ((kodo.jdo.KodoFetchPlan) query.getFetchPlan()).addField("com.example.User.permissions");
    List<User> users = (List<User>) query.execute();
    </pre>
    Finally, you can do this with Kodo's JPA extensions like so:
    <pre>
    import org.apache.openjpa.persistence.OpenJPAQuery;
    query = (OpenJPAQuery) ...;
    query.getFetchPlan().addField("com.example.User.permissions");
    List<User> users = (List<User>) query.getResultList();
    </pre>
    Note that in all cases, you could also make this change to the current PersistenceManager / EntityManager's fetch plan instead, to make the change happen for the duration of the use of that manager. In that environment, the change would need to happen before the query was created.
    (And no, I have no idea why the edit box has a 'pre' button but does not seem to do anything with the resulting tags.)
    -Patrick

  • Optimize for ad hoc workloads.

    Hello all,
       I was referring to the BOL documentation on the 'optimize for ad hoc workloads' option: when this is enabled, the first run of an ad hoc query gets a 'compiled plan stub' (the plan is not cached), and on the second run it gets a 'compiled plan' (the plan is cached).
    I am wondering if there is any option to tell SQL Server to make it a 'compiled plan' at the 5th run (instead of on the second run).
    Thanks.
    Hope it Helps!!

    If you feel that this is important, submit a request on
    http://connect.microsoft.com/SqlServer/Feedback.
    You should make an effort to clearly spell out why you think this would be a benefit.
    I guess that one reason they did it the way they did is that this is simpler. For instance, they don't need to maintain a counter for the shell query.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Equivalent OPTIMIZE FOR UNKNOWN in Oracle

    Hi,
    Does Oracle have something like "OPTIMIZE FOR UNKNOWN" from SQL Server?
    Here is link which explain mechanism:
    http://blogs.msdn.com/mssqlisv/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx
    Best.

    Yes. Oracle has an even better mechanism in versions >= 11g, called [Adaptive Cursor Sharing|http://optimizermagic.blogspot.com/2007/12/why-are-there-more-cursors-in-11g-for.html].

  • HTMLDB 1.5  SQL Optimization

    Hi All
    I'm using HTMLDB 1.5, and SQL optimization hints vanish from all regions when my app is migrated from development to production, e.g. /*+ hint INDEX */.
    I tested re-importing the app in the dev environment and have the same issue.
    Is this an HTML DB bug or am I doing something wrong?
    Thanks
    Kezie

    Kezie - Actually that particular bug was fixed in 1.5.1. If you can apply the 1.5.1 patch, the application installation page will not strip out hints. For SQL*Plus import/install, you must connect as FLOWS_010500 (DBA can change password) or connect as any schema assigned to the workspace into which your application will be installed. The workspace ID and app ID must be identical to those from the source HTML DB instance for this to work.
    Scott

  • Image Moment - Optimizing Code for Speed

    Hello 
    I want to find the moment of inertia of a 2D array. The array is converted from an image using "IMAQ ImageToArray".
    The algorithm I'm using is described here:
    Wikipedia - Image Moments
    I need to calculate this formula with different values for i and j:
    i.j = 0.0 - 0.1 - 1.0 - 1.1 - 0.2 - 2.0
    I programmed the code shown in the attached VI, but I need to optimize it for speed.
    The 2d array can be any size with a maximum of 2048 x 2048 with values varying between 0 and 4095.
    My question:
    How can I make this code faster?  
    Thank you and kudos will be given! 
    The Enrichment Center is required to remind you that you will be baked, and then there will be cake.
    Solved!
    Go to Solution.
    Attachments:
    Image Moment.vi ‏19 KB

    Hello falkpl,
    " If you are looking at moments, the IMAQ particle analysis will do moments on particles all in imaq to avoide the slower image to array.
    As for optomizing your code a few observations
    1. why are you using doubles- your image is 12 (actually 16bit in imaq)
    2. do nor calculate intedex on each itteration pre calculate these and cache.
    3. when possible do calculation on arrays at a time, ie multiple 2 arrays instead of doing it in a loop. "
    Thank you for your reply, sir.  As stated before, the "IMAQ Particle Analysis" only calculates moments on non-weighted (e.g. binary) particles.
    The formula above includes the weight of each pixel.
    1. The doubles are because the image is first filtered. This filter needs to convert the image to the DBL Type.
    2. Could you please elaborate this, sir? I do not understand what you mean.
    3. Effectively done in the solution. Thank you.
    The Enrichment Center is required to remind you that you will be baked, and then there will be cake.
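    For reference, the raw moments M_ij from the Wikipedia formula can be sketched in plain Python (a naive nested loop over a tiny made-up image; this is a hypothetical helper, not the poster's VI). Vectorizing this over whole arrays, per point 3 above, is the usual speed fix.

    ```python
    def raw_moment(image, i, j):
        """Raw image moment M_ij = sum over pixels of x**i * y**j * intensity.
        `image` is a list of rows; x indexes columns, y indexes rows."""
        total = 0.0
        for y, row in enumerate(image):
            for x, value in enumerate(row):
                total += (x ** i) * (y ** j) * value
        return total

    img = [[0, 1],
           [2, 3]]
    m00 = raw_moment(img, 0, 0)  # total intensity
    m10 = raw_moment(img, 1, 0)  # x-weighted sum; centroid x = m10 / m00
    m01 = raw_moment(img, 0, 1)  # y-weighted sum; centroid y = m01 / m00
    ```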

  • Create Test data using T-SQl script for each row

    Hi team,
    I am looking for a SQL code snippet which reads data from the table below
    UserId username contact
     1      Anil    111
     2      Sunil   222
    and inserts data into the table below with some test data, appending sequence numbers 1,2,3 to only City and Email. They are different tables
    and do not have any referential integrity.
    The number of records inserted per user is configurable, for example count = 3
    Username  City  Email
    Anil      city1 email1
    Anil      city2 email2
    Anil      city3 email3
    Sunil      city1 email1
    Sunil      city2 email2
    Sunil      city3 email3

    DECLARE @cnt INT=3
    DECLARE @Users TABLE(UserId INT, UserName VARCHAR(99),Contact INT)
    INSERT INTO @Users VALUES
    (1,'Anil',111),
    (2,'Sunil',222)
    SELECT UserName,
           'city' + CAST(num AS varchar(10)) AS City,
           'email' + CAST(num AS varchar(10)) AS Email
    FROM @Users
    CROSS APPLY
    (SELECT TOP(@cnt) number + 1 AS num
     FROM master..spt_values
     WHERE type = 'P') AS Der
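    As an illustrative cross-check of the cross-join idea (Python's sqlite3 rather than SQL Server, with a hand-built numbers table standing in for master..spt_values):

    ```python
    import sqlite3

    cnt = 3  # number of rows to generate per user (configurable)
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE users (userId INTEGER, userName TEXT, contact INTEGER);
    INSERT INTO users VALUES (1,'Anil',111),(2,'Sunil',222);
    """)
    # A small numbers table plays the role of master..spt_values.
    conn.execute("CREATE TABLE nums (num INTEGER)")
    conn.executemany("INSERT INTO nums VALUES (?)",
                     [(n,) for n in range(1, cnt + 1)])
    rows = conn.execute("""
        SELECT u.userName, 'city' || n.num, 'email' || n.num
        FROM users u CROSS JOIN nums n
        ORDER BY u.userId, n.num
    """).fetchall()
    ```

    Every user is paired with every number 1..cnt, giving the requested city1..city3 / email1..email3 rows per user.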

  • Error while executing a sql query for select

    Hi all,
    ORA-01652: unable to extend temp segment by 128 in tablespace PSTEMP - I'm getting this error while executing the SQL query below to select data.

    I have 44 GB of temp space, and while executing the query below my temp space fills up. Experts, please let us know how the issue can be resolved.
    1. I don't want to increase the temp space.
    2. I need to tune the query; please provide your recommendations.
    insert /*+APPEND*/ into CST_DSA.HIERARCHY_MISMATCHES
    (REPORT_NUM,REPORT_TYPE,REPORT_DESC,GAP,CARRIED_ITEMS,CARRIED_ITEM_TYPE,NO_OF_ROUTE_OF_CARRIED_ITEM,CARRIED_ITEM_ROUTE_NO,CARRIER_ITEMS,CARRIER_ITEM_TYPE,CARRIED_ITEM_PROTECTION_TYPE,SOURCE_SYSTEM)
    select
    REPORTNUMBER,REPORTTYPE,REPORTDESCRIPTION ,NULL,
    carried_items,carried_item_type,no_of_route_of_carried_item,carried_item_route_no,carrier_items,
    carrier_item_type,carried_item_protection_type,'PACS'
    from
    (select distinct
    c.REPORTNUMBER,c.REPORTTYPE,c.REPORTDESCRIPTION ,NULL,
    a.carried_items,a.carried_item_type,a.no_of_route_of_carried_item,a.carried_item_route_no,a.carrier_items,
    a.carrier_item_type,a.carried_item_protection_type,'PACS'
    from CST_ASIR.HIERARCHY_asir a,CST_DSA.M_PB_CIRCUIT_ROUTING b ,CST_DSA.REPORT_METADATA c
    where a.carrier_item_type in('Connection') and a.carried_item_type in('Service')
    AND a.carrier_items=b.mux
    and c.REPORTNUMBER=(case
    when a.carrier_item_type in ('ServicePackage','Service','Connection') then 10
    else 20
    end)
    and a.carrier_items not in (select carried_items from CST_ASIR.HIERARCHY_asir where carried_item_type in('Connection') ))A
    where not exists
    (select *
    from CST_DSA.HIERARCHY_MISMATCHES B where
    A.REPORTNUMBER=B.REPORT_NUM and
    A.REPORTTYPE=B.REPORT_TYPE and
    A.REPORTDESCRIPTION=B.REPORT_DESC and
    A.CARRIED_ITEMS=B.CARRIED_ITEMS and
    A.CARRIED_ITEM_TYPE=B.CARRIED_ITEM_TYPE and
    A.NO_OF_ROUTE_OF_CARRIED_ITEM=B.NO_OF_ROUTE_OF_CARRIED_ITEM and
    A.CARRIED_ITEM_ROUTE_NO=B.CARRIED_ITEM_ROUTE_NO and
    A.CARRIER_ITEMS=B.CARRIER_ITEMS and
    A.CARRIER_ITEM_TYPE=B.CARRIER_ITEM_TYPE and
    A.CARRIED_ITEM_PROTECTION_TYPE=B.CARRIED_ITEM_PROTECTION_TYPE
    AND B.SOURCE_SYSTEM='PACS')
    Explain Plan
    ==========
    Plan
    INSERT STATEMENT ALL_ROWSCost: 129 Bytes: 1,103 Cardinality: 1                                                        
         20 LOAD AS SELECT CST_DSA.HIERARCHY_MISMATCHES                                                   
              19 PX COORDINATOR                                              
                   18 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10002 :Q1002Cost: 129 Bytes: 1,103 Cardinality: 1                                         
                        17 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 129 Bytes: 1,103 Cardinality: 1                                    
                             15 HASH JOIN RIGHT ANTI NA PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 129 Bytes: 1,098 Cardinality: 1                               
                                  4 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 63 Bytes: 359,283 Cardinality: 15,621                          
                                       3 PX SEND BROADCAST PARALLEL_TO_PARALLEL SYS.:TQ10001 :Q1001Cost: 63 Bytes: 359,283 Cardinality: 15,621                     
                                            2 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1001Cost: 63 Bytes: 359,283 Cardinality: 15,621                
                                                 1 MAT_VIEW ACCESS FULL MAT_VIEW PARALLEL_COMBINED_WITH_PARENT CST_ASIR.HIERARCHY :Q1001Cost: 63 Bytes: 359,283 Cardinality: 15,621           
                                  14 NESTED LOOPS ANTI PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 65 Bytes: 40,256,600 Cardinality: 37,448                          
                                       11 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 65 Bytes: 6,366,160 Cardinality: 37,448                     
                                            8 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1002               
                                                 7 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1002Cost: 1 Bytes: 214 Cardinality: 2           
                                                      6 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 1 Bytes: 214 Cardinality: 2      
                                                           5 INDEX FULL SCAN INDEX CST_DSA.IDX$$_06EF0005 Cost: 1 Bytes: 214 Cardinality: 2
                                            10 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002Cost: 63 Bytes: 2,359,224 Cardinality: 37,448                
                                                 9 MAT_VIEW ACCESS FULL MAT_VIEW PARALLEL_COMBINED_WITH_PARENT CST_ASIR.HIERARCHY :Q1002Cost: 63 Bytes: 2,359,224 Cardinality: 37,448           
                                       13 TABLE ACCESS BY INDEX ROWID TABLE PARALLEL_COMBINED_WITH_PARENT CST_DSA.HIERARCHY_MISMATCHES :Q1002Cost: 0 Bytes: 905 Cardinality: 1                     
                                            12 INDEX RANGE SCAN INDEX PARALLEL_COMBINED_WITH_PARENT SYS.HIERARCHY_MISMATCHES_IDX3 :Q1002Cost: 0 Cardinality: 1                
                             16 INDEX RANGE SCAN INDEX PARALLEL_COMBINED_WITH_PARENT CST_DSA.IDX$$_06EF0001 :Q1002Cost: 1 Bytes: 5 Cardinality: 1

  • MSI ti 4200 128 MB and Problems with "Need for Speed: HighStakes"

    Does any one know of a fix for "Need for Speed: HighStakes" with the new 45.32 drivers. When trying to start this game it won't even load, the screen goes dark for a second and then falls back to the desktop. Did the same thing with 43.45, but if I install the drivers that came with the Video card ( I think they were 31.XX) then the game works fine. Haven't come across any other games that won't run except this one, some examples that work fine are UT 2003, C&C: Generals, Warcraft 3, Hot Pursuit 2.
    System:
    XP Pro OS (Service Pack 1), DirectX 9
    Athlon 1800
    512 MB RAM
    MSI nforce 2 Main board
    MSI GeForce 4 4200 ti 128 MB 8X AGP
    Thanks for any info.

    well I "bugged" the game makers (EA) and the response is that the game doesn't suport Windows XP. Strange that the older nVidia 31.xx drivers work fine with the game. Also played it on my last machine which had XP and a ATI radeon video card. OH well who ever said Direct X was supposed to be backward compatible.
    Too bad the older drivers don't work so well with the newer games.

  • How to combine multiple columns into one column and drop NULL values in SQL Server, for my example?

    My Example :
    Before:
    name    address
    jon     DFG
    has     NULL
    adil    DER
    After:
    Total
    name : jon , address : DFG
    name : has
    name : adil , address : DER

    Why not doing such reports on the client site?
    create table #t (name varchar(10),address varchar(20))
    insert into #t values ('jon','dfg'),('has',null),('adil','der')
    select n, case when right(n,1)=':' then replace(n,'address:','') else n end
    from
    (select concat('name:',name, ' address:',address) n from #t
    ) as der
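    The NULL-skipping behavior the answer relies on (CONCAT ignores NULLs, and the trailing ':' is then cleaned up) can be sketched directly in Python with a hypothetical helper, which may be easier to follow than the string surgery:

    ```python
    def total_line(name, address):
        """Build 'name : X , address : Y', dropping the address part when it
        is None (mirrors how CONCAT ignores NULLs in the T-SQL answer)."""
        line = "name : " + name
        if address is not None:
            line += " , address : " + address
        return line

    rows = [("jon", "DFG"), ("has", None), ("adil", "DER")]
    result = [total_line(n, a) for n, a in rows]
    ```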

  • Acrobat 9.3.4 (or 9.3.3.177): Save As with Optimize for Fast Web View

    When I do a Save As with Optimize For Fast Web View checked, the saving stops and an Adobe Acrobat dialog displays:
         The document could not be saved. There was a problem reading this document (111).
    If I uncheck Optimize For Fast Web View, the Save As seems to work.
    Is there a way to have Fast Web View work with Save As?
    Acrobat.exe is version 9.3.4 (or 9.3.3.177 in the properties). The Acrobat.DLL version is 9.3.4.218.

    Thanks.  I did submit a report at the site.  I hope somebody reads it as this is a big problem for us.
    Thanks again.
