SSL trade-off

I have an application where I need to protect only certain pages, such as the authentication page and the user's personal settings page. I also have tight performance requirements for this public application.
My first wish is to use SSL for the whole application, as that seems much easier to implement and is totally transparent to the protected application.
What about the additional cost from a performance point of view? Has anyone tried to estimate it for their own application? Is it high enough to start thinking about economizing?
Thanks

Edward,
I don't have the empirical evidence you're seeking. But my opinion and observation is that the worry about the overhead introduced by SSL on both the server and the client is exaggerated. Maybe in 2000 there was a concern, but I believe the:
1) Speed of very cheap and powerful computers
2) Speed of servers
3) Efficiency of these algorithms
make these concerns somewhat obsolete. Sure, if you're producing pages that are extraordinarily large, then encrypting and decrypting it all will consume cycles. But the average page should be very small (on a relative basis).
We have an instance of APEX within Oracle that is all SSL all the time. It's on a relatively small server. It runs just great.
My suggestion - compare some rough results with and without SSL. I believe you'll find the overhead to be minimal (10% or under) for most applications.
Joel
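Joel's suggestion above, comparing rough results with and without SSL, can be sketched in a few lines. This is a minimal Python sketch, not a rigorous benchmark: the URLs are placeholders for your own application, and real measurements should use many samples against a warmed-up server.

```python
import time
import urllib.request

def mean_request_ms(url, samples=20):
    """Average wall-clock time, in milliseconds, to fetch a URL."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        total += time.perf_counter() - start
    return total * 1000.0 / samples

def overhead_pct(plain_ms, tls_ms):
    """Relative cost of TLS as a percentage of the plain-HTTP time."""
    return (tls_ms - plain_ms) * 100.0 / plain_ms

# Hypothetical usage -- point these at your own application:
# plain = mean_request_ms("http://myapp.example.com/somepage")
# tls = mean_request_ms("https://myapp.example.com/somepage")
# print(f"SSL overhead: {overhead_pct(plain, tls):.1f}%")
```

If the resulting percentage is in the single digits, as Joel expects for average page sizes, running the whole application over SSL is probably the simpler choice.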

Similar Messages

  • Firm or Trade-Off Zone Indicator not set

    Hi
    I have created the scheduling agreement and run MRP; the schedule lines are generated, but the system has not set the Firm or Trade-Off Zone indicator. How do I go ahead? Is there any customizing/master data required to set this indicator?
    Please suggest.
    Regards,
    Prashant.

    This depends a bit on how you communicate with your vendors.
    Assuming EDI, the message to the vendor will contain a few date fields:
    A. ABFDE, end of production go-ahead, which is the end of the FIRM ZONE.
    B. ABMDE, end of material go-ahead, which is the end of the TRADE-OFF ZONE.
    Upon receipt, the vendor's system will recognize these dates.
    Assuming a printout, you need to work in SAPscript/Smart Forms to detail the effect of these dates on your print, either by also mentioning them in the item header (remember each item has a unique FZ/ToZ setting), or by using special markers on the date lines, based on these dates.
    Regards
    JP

  • Firm Zone and Trade off Zone in Scheduling Agreement

    Dear All
    Please explain in detail the concept of the Firm Zone and Trade-off Zone in a Scheduling Agreement and their effect on the MRP run. That is, if I run MRP for a material which has a firm zone of 30 days and a trade-off zone of 60 days, what will the result of the MRP run be? The material's MRP type is VB.
    Thanks and Regards
    Manoj

    Hi,
    Firm zone is the time frame in which you cannot change the orders (schedule lines) you have placed with a vendor in any way (neither date nor quantity changes).
    Trade-off zone is the time frame within which you can still make changes to your procurement proposals; these changes are acceptable from the vendor's side.
    These time frames are agreed with the vendor and then entered for each scheduling agreement under 'Additional data'.
    For your example of a 30-day firm zone and a 60-day trade-off zone, the check starts from the day on which MRP runs. For example, if the current day is 1st October, all procurement proposals with a delivery date within the next 30 days, i.e. up to about 31st October, are firm orders, which MRP will not change in any case (you can find such orders with * in front of them in the MD04 list). Between 31st October and about 30th November they are in the trade-off zone, in which MRP can modify them.
    MRP types (VB in your case) have no correlation with these zones.
    Amit G
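As a rough sketch of the example above (the dates, function name, and return codes are my own, for illustration only): a schedule line's zone depends only on the offset between its delivery date and today, compared against the two zone lengths.

```python
from datetime import date

def schedule_zone(delivery_date, today, firm_days, tradeoff_days):
    """Classify a schedule line: '1' = firm zone (MRP will not change it),
    '2' = trade-off zone (changes still possible), '' = open planning."""
    offset = (delivery_date - today).days
    if firm_days and offset <= firm_days:
        return "1"
    if tradeoff_days and offset <= tradeoff_days:
        return "2"
    return ""

today = date(2010, 10, 1)
print(schedule_zone(date(2010, 10, 15), today, 30, 60))  # firm zone: 14 days out
print(schedule_zone(date(2010, 11, 15), today, 30, 60))  # trade-off zone: 45 days out
print(schedule_zone(date(2011, 1, 15), today, 30, 60))   # open: 106 days out
```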

  • Some thoughts on trade-offs between usability and user-friendliness

    I use Awesome as a window manager over LXDE. Recently, a friend of mine tried to use my laptop. The experience frustrated him somewhat: he managed to start the browser through the menu shortcut, but it opened out of sight under the "web" tag. By the time he realized where it had gone (tags are clickable in Awesome), he had already managed to start several instances of it. Naturally, he tried to close them, first looking for the close-window button, which is not there since I don't use title bars, and then trying Alt+F4 to no avail (the default key command for closing a window in Awesome is Super+Shift+C). In desperation, he finally launched the terminal and used xkill to get rid of the redundant Chromium sessions.
    The whole thing got me thinking. I find my system very usable, as I guess most of us Linux nerds do, since our systems are do-it-yourself projects to such a large degree. However, it is not exactly user-friendly. At the same time, I can navigate my confused friend's system quite well, since he uses Gnome, a system that adheres more to the common desktop metaphor of e.g. Windows or OS X. However, I doubt I would be able to get around very well in some of the more minimalist *box setups so popular among Arch users (though, admittedly, it is probably easier to get the hang of a mouse-centered desktop than one that focuses on keyboard shortcuts). What I find interesting is that there is clearly a trade-off. If I were to build a system for more people than just me, I would probably go with Gnome, but I would not find it as usable as my current system.
    Which brings me to my questions. Do others here share that experience? How do you go about managing it--different default sessions for different accounts, or a compromise that is more user friendly but less usable to you personally? Or do you force people to relearn and adapt to your preferred way of doing things (which might sound worse than it is--after all, if my way makes more sense once people adjust to it, then why not)?
    Last edited by caligo (2010-03-12 09:53:01)

    lolilolicon wrote:
    mythus wrote:
    While I understand your point, I have to fundamentally disagree with your subject. Having memory issues does not make someone brainless, and I overtly object to such. It would be just as bad as me saying people who think user friendly is for brainless creatures with no memory are elitist pigs *shrugs*.
    Case in point: myself. I have memory issues. I didn't always have memory issues, having once had a very sharp memory. But hey, getting hit by semi-trucks and having your head go through a windshield does some nasty things to your brain functions. Does me now having problems remembering certain things all of a sudden make me brainless? I sure don't think so, seeing as I am still able to think, process equations and goals, as well as teach myself new things and relearn forgotten things on a daily basis. It is just that most stuff I have to write down or print out and keep in a huge binder in front of me at all times now. Having memory issues simply caused me to adjust how I do things; it did not make me into a brainless creature. In fact it was after my accident that I came to try and use Linux, and while I do have a certain need of the mouse at times, I also have my printed-out shortcuts here at my disposal.
    The lesson here, be careful of adding insults to posts when trying to make a point. Without that insult I could have easily agreed with your point.
    I'm really sorry if it came out like that, I didn't mean that. My point was not at all about memory you know. I forget things too, and it happens often, and I'd curse myself if it were really bad.
    I respect men like you. You managed to learn Linux (and it was Arch! ++) and resolve your issues; even if you do sometimes forget your shortcuts, you still have them on paper, plus backups of it. This is nothing like "brainless creatures". What I meant by that was more about the lazy people who never know what they have.
    I brought my mood to somewhere extreme, because I was feeling again the reasons why I switched to linux. It was the moment I decided I had been a lazy pig who had hated his computer but never had done anything about it.
    s/it's for the brainless creatures who've got no memory/it's for the button lovers ;P/
    Sorry mythus, I wish you all the best.
    Thank you for your apology.
    As I said earlier, I do agree with most of your points; it was just that one glaring sentence at the beginning, which took on the role of the subject of your post, that I disagreed with. I do agree that the modern-day idea of user-friendliness is for lazy people who don't wish to learn how to use a computer and simply want their computer to know what they want to do and do it without any real input from them. Having to use all ten fingers versus one or two fingers is what it all boils down to. Just imagine having to actually sit up and make full use of both of your hands to use your computer, instead of leaning back in a recliner and resting your hand on a small plastic device, barely having to flinch your wrist and move your index finger. That is really where, IMO, the "user friendly" systems of today are targeted. In reality, they aren't user friendly at all, but lazy friendly. Truly, I am still waiting for them to invent a mind-reading device, or a device that monitors your line of vision so that if you, say, look at the upper right side of a window it will close it for you, or if you open your eyes really big, it will full-screen the window for you. *sarcasm intended*
    For a system to be user friendly it has to be completely usable by its principal user with little or no complication. The user should be able to do his or her work and other computer-related activities without confusion and/or delay. Unfortunately, no two users are alike, so the method that fits one best wouldn't work for everyone. What does seem to work for the majority isn't necessarily a user-friendly system, but a lazy-friendly system that is familiar after generations of being presented the same UI.
    It is also because of the lazy-friendly requirements on an OS, so that it can be accepted by the largest number of consumers possible (after all, it is all about money), that advancements and changes to the UI are sure never to come, at least from a corporation. So you will always be faced with deciding whether you want your computer lazy-friendly and familiar for your friends/family, or user friendly for yourself.
    BTW- I do not think that the mouse is a bad tool. It is a highly useful tool. It just should not be treated as a keyboard.

  • Trade-offs of different Immutability designs

    Hi,
    I'm wondering what the trade-offs are for the following immutability designs:
    1) Object composition: a mutable class wraps an immutable class without any sort of inheritance. Example: String vs StringBuilder.
    2) Single interface with optional operations. Example: List throws UnsupportedOperationException when someone tries modifying an immutable list.
    3) Object inheritance: a mutable class extends an immutable class. Example: MutableString extends String by adding mutation methods to it. I could not find an example of this in the JDK.
    4) Are there other possible designs you're familiar with?
    Thank you,
    Gili

    Why would you need to do this? Why couldn't the mutable interface inherit from the immutable one?
    I was originally talking about classes where, if the super is immutable, you quite literally cannot make it mutable (like String).
    But to answer the direction-of-inheritance problem (mutable->immutable or vice versa): if Mutable extended Immutable, it would then fundamentally not be an Immutable, breaking the "is a" rule; you have this problem with both directions of inheritance. More reason to steer clear of it, I think.
    Deciding what modifiability to return/accept is an even trickier business, both with the inheritance route and with distinct classes. For example, imagine the .iterator() method: you would have to duplicate it into mutableIterator() and immutableIterator(), which would plague your API designs and make them a royal pain to use.
    I meant unmodifiable class, not immutable.
    The discussion works for immutables and unmodifiables, though I think that converting between (im)mutables becomes even hairier (and should be avoided), as users are led to believe the internal class will never change, whereas unmodifiables allow the internal data to change, just not via API calls.
    I should explain that my "interest" here is the best way to introduce "turning modifiability on and off" (which is only a slight deviation to the original post). In my case, I've decided it's a must have feature.
    Why would the caller ever need to know whether a Collection is mutable or not?
    Where we've experienced this issue, we've been designing an API that's not just a collection, more like heavyweight resources, where:
    - in certain cases we don't trust the user enough to give them access to the modifiable resource
    - they can only receive/read from the resource and should not alter it
    - we let them register interest in the resource before its construction is finalised, ensuring they don't interrupt the finalisation process
    - the API is about the same size as the collections API, which I think is bordering on the too-largish size (if it were a small API I'd consider distinct classes).
    The decision was made to give them the ability to discover whether the resource was unmodifiable for two reasons:
    - code could be written (annoyingly it's optional rather than a "have to", I agree) which would then be guaranteed not to fail unexpectedly later on (assuming compliance).
    - not having it meant that users were subtly encouraged to write large chunks of code within an über try/catch to handle possible failures, which has its obvious disadvantages.
    I also believe there should be some method of discovering whether the resource/collection is unmodifiable. Say you want fail-fast behaviour, for example: you have to call some modifying method and deal with the resulting exception if it failed, and (slightly worse) try to undo it if it did work, which in some cases is impossible.
    In most cases (e.g. collections being used) this isn't required, as generally a collection's modifiability will stay the same for an app's lifetime, and good testing (or the first time the error is discovered :) will root out any mistakes, which can be fixed once and for good.
    I think that Josh got it right when designing the collections framework, I can't see a better way of doing it given the size of the resulting API and ignoring the "not being able to discern modifiability" problem. Adding anything other than a tagging interface or simple checking method would have resulted in a seriously bloated API.
    I think the reason they didn't do anything about trying to discern modifiability was that, as collections don't change their modifiability (accepting composition), they probably decided, as you said, there's no benefit to adding the ability to check.
    I followed the link btw; it's interesting to get other people's takes on these issues. I constantly worry whether I'm going down the right route. I am trying to get a few of our projects made open source so I can get other people's feedback/input for that very reason.
    Wowsers, I also didn't realise I had this much to say about it, sorry.
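The examples in this thread are Java, but design 1 (composition) is quick to sketch in any language. Here is a minimal Python version, with Point standing in for String and PointBuilder for StringBuilder; both class names are invented for illustration.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    """Immutable value object; any mutation attempt raises FrozenInstanceError."""
    x: float
    y: float

class PointBuilder:
    """Mutable companion wrapping the immutable type (design 1: composition)."""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def translate(self, dx, dy):
        self.x += dx
        self.y += dy
        return self  # allow chaining, StringBuilder-style

    def build(self):
        return Point(self.x, self.y)

p = PointBuilder(1.0, 2.0).translate(3.0, 4.0).build()
print(p)  # Point(x=4.0, y=6.0)
```

Note how the builder never exposes the immutable instance until build() is called, so there is no "is a" confusion of the kind the inheritance route causes.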

  • Performance and security trade-off

    h1. Scenario
    When I run my code with security implemented, its performance decreases by 40% to 70% compared to running without it.
    My goal is to decide: do I really need this trade-off (speed vs. security)? In either case I must provide arguments.
    Any kind of advice will be appreciated. I just need a professional's view on the above scenario.
    Thank you

    Hey Aasem,
    As such, there is no rule of thumb for this trade-off concept. It all depends on the type of database and its security needs. It may happen that you have to favour the security of the data over performance, e.g. in the banking, insurance, and stock market sectors. In these sectors on-line transactions are required, but data security is more important than performance, so here you may have to compromise on performance.
    But if you take the example of a live score for a cricket match, then data security is not that important; rather, performance counts more. Here you can compromise the security of the data in favour of database performance.
    But my point is that, as a DBA, you always have to consider the security of the data before performance. And the calculation you are talking about is not fixed: it varies with the requirement and need, and there is no rule of thumb for it. Only your experience will help you find the right measurement for everything you are asking about.
    Thanks and Regards,
    MSORA

  • Firm/Trade-off Zone Indicator - Data base field.

    I'm looking for the database field that holds the Firm/Trade-off Zone indicator as it displays on the delivery scheduling agreement (Delivery Schedule for Items, from ME33L). The help for the field gives you the same information as for the firm zone and trade-off zone values in days (stored in EKPO), but this field displays differently for different schedule lines: it is a "1" indicator for the firm zone or a "2" indicator for the trade-off zone.
    I'm beginning to wonder: is this field dynamic and just determined for display at the time the transaction is run?
    In fact, the lines within the firm zone are displayed in a different way (with an * following the MRP element data) on MD04.
    But we were trying to determine where (or if) the field indicator is stored, and if so, what is setting it, as "some process" would have to be evaluating all orders as time passes to know they have now come within the firm zone, if it's not just a dynamic field.
    Ruth Jones

    Hi Ruth,
    This one is dynamic (I guess you are writing about screen field RM06E-ETSTA, visible e.g. in the ME38/ME39 t-codes).
    And I guess it is calculated here:
    Include: MM06EFET_ETT_ETSTA
    FORM ETT_ETSTA.
       CLEAR RM06E-ETSTA.
       " Firm zone: is the delivery date within ETFZ1 days of today?
       IF EKPO-ETFZ1 NE 0.
         REFE1 = EKET-EINDT - SY-DATLO - EKPO-ETFZ1.
         IF REFE1 <= 0.
           RM06E-ETSTA = '1'.  " schedule line lies in the firm zone
           EXIT.
         ENDIF.
       ENDIF.
       " Trade-off zone: same check against the ETFZ2 horizon
       IF EKPO-ETFZ2 NE 0.
         REFE1 = EKET-EINDT - SY-DATLO - EKPO-ETFZ2.
         IF REFE1 <= 0.
           RM06E-ETSTA = '2'.  " schedule line lies in the trade-off zone
         ENDIF.
       ENDIF.
    ENDFORM.
    I guess you can call this form in your report logic/query and you should get exactly what you need.
    Best Regards,
    Tomek

  • Firm zone & Trade off zone

    Hi,
    Can we maintain the firm zone, trade-off zone & creation profile anywhere other than the scheduling agreement, so that these fields get copied into the SA automatically?
    Regards
    MSR

    Dear Raghavendrams,
    1) The creation profile is the combination of JIT and FRC (forecast) schedules, which can be maintained in customizing under the sub-node "Maint. Rel. Creation Profile for Sched. Agmt. w. Rel. Docu.", located under the Scheduling Agreement node in Purchasing.
    2) JIT is the just-in-time schedule, which is necessarily a firm one and therefore should fall in the firm zone.
    On the other hand, FRC is the forecast schedule, which is a tentative one and therefore should fall in the trade-off zone.
    3) From the above it is evident that one cannot have JIT and FRC schedules maintained beforehand, since they are subject to change depending on changes in the production plan. Therefore you need to maintain these in the SA only. However, you can create different creation profiles in SPRO. For instance, "14 days JIT and 6 months FRC" means that for a material to be supplied by a vendor, you are giving a firm requirement for 14 days, and the balance 6 months' requirement is the forecast, which enables the vendor to plan their resources and in turn can reduce lead times drastically, in addition to reducing production stoppages.
    Creation profiles can differ depending on the nature of the material procured, the type of vendor you are dealing with, their flexibility, credibility, etc. For instance, in the case of a packaging material, the vendor is usually located very close to the manufacturing site to allow better space management in the warehouse. In such cases the material can be called off on a daily basis, i.e. JIT of 1 day, or even an hourly basis in some cases.
    Trust you have understood the concept.
    Regards
    Venkat

  • Pixel/compression trade off

    Picture quality appears to be a trade-off between the number of pixels you have and the amount of compression that is applied when saving a JPEG.
    Is there any guide or rule of thumb you can provide that will tell which is best for your needs? For instance, will a 10MP picture with high compression be better than an 8MP picture with low compression? What about 6MP, etc.? Is there a chart?
    Any comments will be appreciated.

    The Old Fart wrote:
    Picture quality appears to be a trade-off between the number of pixels you have and the amount of compression that is applied when saving a JPEG.
    Is there any guide or rule of thumb you can provide that will tell which is best for your needs? For instance, will a 10MP picture with high compression be better than an 8MP picture with low compression? What about 6MP, etc.? Is there a chart?
    Any comments will be appreciated.
    I'll add a few personal thoughts to the advice already given.
    - First, nearly all digital cameras use Bayer-pattern sensors: there are theoretical reasons you can downsize to about 70% without losing significant detail.
    - In many cases, you know the kind of output you want for a particular purpose: printing 4" x 6" or A4 format, displaying on high-res displays (1920 x 1080 px), or saving for the web (800 x 600 px, for instance). Then the first thing you can do, before thinking about JPEG compression, is to downsize to the adequate printing size. It's generally good enough not to output at more than 300 dpi, or to downsize to the pixel dimensions needed for display.
    - There are several downsizing methods: you might prefer bicubic or bicubic sharper. Some prefer more advanced methods (Lanczos, available in FastStone Photo Resizer).
    Now for JPEG compression:
    - The level of detail depends very much on the noise level: you are losing detail to noise anyway, so denoising will be a compromise between detail and noise, and that will be important for the JPEG file size.
    - The final size of an image depends highly on the level of detail of the scene and on the noise, as stated above. For the same pixel dimensions, your file size may vary from 50% to 100% depending on detail and colors. If your purpose is to save with heavy compression, 'save for web' is your friend. If you only want to print without visible quality loss, you can still use somewhat higher compression settings: for instance, try the difference between quality 12 and 10; you might gain 50% in size without visible quality loss.
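To make the "downsize before compressing" advice concrete, the pixel dimensions you need follow directly from the print size and dpi; everything beyond that is wasted data for the JPEG encoder to compress. A small Python sketch, pure arithmetic:

```python
def print_pixels(width_in, height_in, dpi=300):
    """Pixel dimensions needed to print at the given size (inches) and dpi."""
    return round(width_in * dpi), round(height_in * dpi)

def megapixels(width_px, height_px):
    return width_px * height_px / 1e6

w, h = print_pixels(4, 6)  # a 4" x 6" print at 300 dpi
print(w, h, f"{megapixels(w, h):.2f} MP")  # 1200 1800 2.16 MP
```

So even a 4" x 6" print at 300 dpi needs only about 2 megapixels; downsizing a 10MP image to that target before saving usually does more for file size than cranking up JPEG compression.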

  • Trade-off between the one-arm and two-arm WAE designs

    We are configuring a WAE (model 512) for a branch office and I was wondering if someone could please tell me the trade-off between the one-arm and two-arm WAE designs.
    thanks..
    greg..

    If you are using WCCP, then the WAE becomes the client within service groups 61 and 62. To accelerate both VLANs, apply "ip wccp 61 redirect in" on the client VLAN interfaces, pointing at the one WAE interface.
    If inline, you can use the two port groups, one for each client interface, or trunk everything to a single interface and configure which VLANs you would like to accelerate.
    Now, in terms of using both GE interfaces, I would have to check. A topology diagram would help.

  • Trade off between multiple small optimised aggregates - few large aggregate

    Hi SDN community
    I am asking a purely theoretical question to the sdn.sap.com community: have any projects run verifiable tests of the performance trade-off between a small number of cubes with large, unoptimised aggregates and multiple cubes with small, optimised aggregates?
    Will queries run faster across more cubes with more, smaller aggregates, as opposed to fewer cubes with larger aggregates?
    We are currently trying to improve performance, and wish to get feedback on whether to change the design to address ongoing performance issues.
    Thank you for your assistance.
    Simon

    Hi Ravi,
    Thank you for your reply.
    The reason why we need to consider smaller aggregates is this:
    - We have two value types, Budget and Forecasts, in the same cube.
    Because of this, we cannot restrict the aggregates to two smaller-sized aggregates to gain performance.
    - If we separate the data into the Forecasts cube, we have the potential to create 12 version-specific aggregates.
    - If we create fiscal-year aggregates in addition, we split the size of the aggregates further.
    Now although the roll-up times will be longer, we have very targeted small aggregates, so our reports should run faster.
    We have consistent performance problems, so I am proposing the above as the last major performance tuning that can be thought of; but will the performance gain be worth the expenditure?
    Thank you.
    Simon

  • If I wanted to change from an iPad 2 to an iPad mini, is there any way Apple would do that trade off?

    If I wanted to change from an iPad 2 to an iPad mini, is there any way Apple would do that trade off?

    Your only option is to sell the iPad and buy a new one.
    How to Sell Your Old iPad  http://tinyurl.com/85d69lk
    Other sources to sell.
    eBay Instant Sell http://instantsale.ebay.com/?search=ipad
    Sell and Recycle Used Electronics - Gazelle http://www.gazelle.com/
    For instant gratification in selling a used iPhone or iPad, Gazelle’s Gadget Trader, an iOS app, is tough to beat. In seconds it detects the device and reveals how much it is worth in good condition. Tap the Sell This Phone to Gazelle button and the deed is done.
    Sell Electronics for Cash - Next Worth  http://www.nextworth.com/
    Buy My Tronics  http://www.buymytronics.com/
    Sell Your iPad http://www.sellyourmac.com/mac-product-guides/ipad.html
    Totem http://www.hellototem.com/
     Cheers, Tom

  • Looking to buy new MacBook Pro for editing with Premiere/After Effects, but wondering about trade-off between Processor Speed and Graphics Card

    I'm a professional video editor (using Premiere and After Effects) looking to buy a new MacBook Pro and am deciding between two models. The slightly older model has a 2.8GHz i7 (3rd generation) quad-core processor with a 1GB NVIDIA GeForce GT 650M graphics card. The newer model has a 2.3GHz i7 (4th generation) quad-core processor with 2GB of NVIDIA GeForce GT 750M/Intel Iris Pro graphics.
    Which makes the most difference (processor speed vs. graphics card) with editing with Premiere and After Effects?
    Any help/guidance would be greatly appreciated.
    Thanks!
    mike

    Poikkeus wrote:
    1. Your MBP will be somewhat slower than your iMac, as reflected in the general speed; desktop Macs have more RAM and storage.
    You reckon? If he gets the 17", he would have up to 8x more RAM, 4x more GPU, and a bit faster CPU.
    2. Be aware of the advantages and disadvantages of extra RAM. Loading up the slot will make juggling multiple applications easier, like Photoshop, VLC, and Safari. However, more than 4gigs of RAM will make loading your MBP on startup twice as slow - at least a minute, probably longer. That's why a MBP user with extra RAM should sleep their machine nearly always when not in use, rather than powering off. 
    I did not know this, I just upgraded from 4gb to 8gb the other day. Have not noticed it being slower, but I don't often shut it down. It's nice to not even have to bother with ifreemem.
    3. Additional storage and RAM will maximize the basic capabilities of your MBP, but you won't be able to make a 2.3ghz machine any faster than it already is.
    SSD
    4. I still feel that your iMac will be faster than your prospective MBP. The only way to dramatically increase the speed would be the installation of a SSD drive (like the lauded OWC series). But they're not cheap.
    I don't want to rain on your parade, but want you to get a more realistic idea of your performance.
    I chose a MacBook Pro, 17" of course. I use it for gaming. Yes, an iMac is better for gaming, but it's nice to be able to move around: set up a man cave in the lounge one week, or in the bedroom the next. But you fork out a lot more dosh for that luxury. And yes, it has not as much power, as Poikkeus has said.

  • In-database mining vs database efficiency - Trade-off?

    Can anyone please throw some light on how Predictive Analytics (Automated Analytics/KXEN) deals with database load/resource sharing when both analytical and operational workloads are executed? In other words, does in-database scoring impact database efficiency? If yes, how do we deal with this? This question came up during an in-database feature discussion with a partner.

    Dear Ankit,
    In-Database Apply (IDBA) generates a SQL script that is executed directly in the database. That is to say, it may have an impact on database efficiency while you run this script.
    You can deal with that by using IDBA scoring when the database is not also being used by the users (nights, weekends, etc.).
    Another option is to use Model Manager (formerly known as Factory) to schedule IDBA scripts at a date/timeframe when you know your database is not in use or not in conflict with that execution. For example, you use Model Manager to schedule a scoring action every night; the users then have fresh scores every morning.
    Hope it answers your question, best regards,
    Gaëtan Saulnier.

  • Web service design trade-offs (TCP read mode vs headers)

    If I use HTTP headers, I can stimulate and monitor my LV RT code from a web browser while my PC-based LV code is also talking to the LV RT app.  Sometimes, however, the header is the only part that is returned by the web service.  In order to fix this occasional nuisance, the headers can be eliminated so that the Read TCP data in Immediate mode doesn't exit when only the header has been returned.  Unfortunately, the browser interface (in native, non-JavaScript mode) doesn't work any longer so hitting a simple URL request is useless.
    Has anyone found a way to get reliable comms with HTTP headers turned on without having to parse the entire return string line-by-line?

    Zack.L wrote:
    Hi Andy,
    Sorry, forgot to post the Source that's used by both AL1.1.4 and AL2.0.1.
    Source
    begin
    insert into scott.json_demo values(:title,:description);
    end;
    it's failing during the insert?
    Yes, it failed during insert using AL2.0.1.
    So the above statement produces the following error message:
    The symbol "" was ignored.
    ORA-06550: line 2, column 74:
    PLS-00103: Encountered the symbol "" when expecting one of the following:
    begin case declare end exception exit for goto if loop mod
    null pragma raise return select update while with
    <an identifier> <a double-quoted delimited-id
    This suggests to me that an unprintable character (notice how there is nothing between the double quotes - "") has worked its way into your PL/SQL handler. Note how the error is reported at column 74 on line 2, yet line 2 of the above block should only have 58 characters, so at a pure guess there is somehow extra whitespace on line 2 that is confusing the PL/SQL compiler. I suggest re-typing the PL/SQL handler manually and seeing if that cures the problem.
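One quick way to test the invisible-character theory is to scan the handler text for characters that display as nothing but still count toward the compiler's column numbers. A minimal Python sketch; the sample string below is fabricated (a zero-width space injected into the insert line), so paste your actual handler in its place.

```python
def find_unprintables(source):
    """Locate characters that are invisible in most editors but still
    occupy a column as far as a compiler is concerned."""
    hits = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if not ch.isprintable() and ch != "\t":
                hits.append((line_no, col, f"U+{ord(ch):04X}"))
    return hits

handler = "begin\ninsert into scott.json_demo\u200b values(:title,:description);\nend;"
print(find_unprintables(handler))  # [(2, 28, 'U+200B')]
```

Each hit gives the line, column, and Unicode codepoint, which you can compare directly against the column number in the PLS-00103 message.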
