Abnormal behaviour

I'm testing the following in a trigger:
IF(UPPER(:OLD.Passport_Program) != vcNewState) THEN
     :NEW.Passport_Program := vcNewState;
END IF;
...and when either :OLD.Passport_Program or vcNewState is NULL, the code within the IF block is not executed. Why?
EDIT: Passport_Program is declared as VARCHAR2(1) NULLable and vcNewState as VARCHAR2(1) in the trigger DECLARE block.

> Because a NULL value can never be equal (or not equal) to another value, even if it is also NULL. NULL values are compared using IS NULL or IS NOT NULL instead of = or !=.
> In this case you must use the NVL() function to get around this.
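For example, a minimal sketch of that workaround, assuming CHR(0) never occurs in the Passport_Program data (any sentinel character that cannot appear in real values works):
IF NVL(UPPER(:OLD.Passport_Program), CHR(0)) != NVL(vcNewState, CHR(0)) THEN
     -- Both NULLs now map to the same sentinel, so NULL vs. NULL compares
     -- equal and NULL vs. non-NULL compares unequal, as intended.
     :NEW.Passport_Program := vcNewState;
END IF;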
Thanks for that.

Similar Messages

  • _tmp directories causing abnormal behaviour

    After a release on the prod environment and a restart of the clustered WebLogic
    environment, we at times face abnormal behaviour, e.g. a particular JSP giving a
    compilation error, or characters not being displayed properly. If we hit the
    application in a new browser window the error goes away. What we found was that when
    the request is handled by a particular managed server, the response is abnormal.
    Logs of that server contain something like:
    java.io.IOException: Compiler failed executable.exec(java.lang.String[javac, -classpath,
    /opt/customer/weblogic/bea/wlserver6.
    1/config/hicom/applications/.wlnotdelete_Server-7-1/hicom34353:/opt/customer/weblogic/bea/wlserver6.1/./config/hicom/applicati
    ons/.wlnotdelete_Server-7-1/hicom34353/WEB-INF/_tmp_war_hicom_hicom/WEB-INF/lib/Quick4rt34357.jar:/opt/customer/weblogic/bea/w
    lserver6.1/./config/hicom/applications/.wlnotdelete_Server-7-1/hicom34353/WEB-INF/_tmp_war_hicom_hicom/WEB-INF/lib/Quick4util3
    4358.jar:/opt/customer/weblogic/bea/wlserver6.1/./config/hicom/applications/.wlnotdelete_Server-7-1/hicom34353/WEB-INF/_tmp_wa
    After deleting the _tmp folders from the WebLogic directories and restarting the managed
    server, the problem gets resolved. I would really appreciate it if somebody could help
    in identifying the cause of the issue, as well as a cleaner resolution other than
    deleting the _tmp folders.
    Thanks

    Firstly I would like to say that I really appreciate your help and assistance Arnav Sharma, but my problem is this.
    I have a genuine XP SP3 OS with Product Key......but I don't have an XP installation CD.
    I have in the past had boot problems and BSODs with this computer......these were mostly rectified by computer technicians / reinstallation of Windows, although the few BSODs I had recently I tried to deal with myself.
    Having a working computer is of paramount importance to me......if this were a spare computer I would be happy to go ahead and attempt a clean boot (something I've not done before)......but I am too afraid that something will go wrong and I'll end up with
    a black screen that I cannot deal with without an XP installation CD.
    I am not currently in a position to buy a new computer or summon a computer technician.
    I have not had any further unexpected Restarts / Stand Bys since I wrote this question or any further BSODs for several weeks. If this changes, of course I might have no other choice but to go ahead and attempt a clean boot.

  • I wish to do a clean-install of Lion due to abnormal behaviour.

    Currently I have Lion on my iMac. I have been having abnormal behavior of late and I wish to start fresh, as I think a piece of software is causing my troubles. I just cannot pinpoint what it is. So I wish to do a complete erase and refresh of the drive.
    I wish to do a backup first since I have applications and other files I do not want to lose.
    I have read I could use either Time Machine, Carbon Copy Cloner or SuperDuper. According to what I have read, Carbon Copy Cloner or SuperDuper is favoured because it is easier to restore files. Which method would you recommend?
    Once I have made the backup, what is the best way to do a fresh install of Lion? I guess what I want to do is wipe the disk and then install Lion. What would be the best way to do this?
    Thanks and I look forward to your reply.
    ...Bruce

    BGmail wrote:
    I have read I could use either Time Machine, Carbon Copy Cloner or SuperDuper. According to what I have read, Carbon Copy Cloner or SuperDuper is favoured because it is easier to restore files. Which method would you recommend?
    Any of these will work.
    Once I have made the backup, what is the best way to do a fresh install of Lion? I guess what I want to do is wipe the disk and then install Lion. What would be the best way to do this?
    Boot from Recovery HD (Hold ⌘R on Boot)
    Select Disk Utility
    Erase Macintosh HD
    Exit Disk Utility
    Select Reinstall OS X Lion

  • 2 Node RAC abnormal behaviour

    Platform: "HP-UX 11.23 64-bit"
    Database: "10.2.0.4 64-bit"
    RAC: 2 Node RAC setup
    Our RAC setup has been done properly and RAC is working fine with load balancing, i.e. clients are getting connections on both instances. BUT the issue I am facing with my RAC setup is High Availability testing. When I send a reboot signal to "Node-2" while "Node-1" is up, what I observe, and what clients complain about, is that they lose their connection to the database and ALSO no new connections are allowed. When I look at the alert log of "Node-1" I see the following abnormal messages reported in it:
    List of nodes:
    0 1
    Global Resource Directory frozen
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Tue Aug 9 04:02:15 2011
    LMS 2: 0 GCS shadows cancelled, 0 closed
    Tue Aug 9 04:02:15 2011
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Tue Aug 9 04:02:15 2011
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Tue Aug 9 04:02:15 2011
    LMS 1: 1908 GCS shadows traversed, 1076 replayed
    Tue Aug 9 04:02:15 2011
    LMS 2: 1911 GCS shadows traversed, 1086 replayed
    Tue Aug 9 04:02:15 2011
    LMS 0: 1899 GCS shadows traversed, 1164 replayed
    Tue Aug 9 04:02:15 2011
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Fix write in gcs resources
    Reconfiguration complete
    Tue Aug 9 04:02:16 2011
    ARCH shutting down
    ARC2: Archival stopped
    Tue Aug 9 04:02:21 2011
    Redo thread 2 internally enabled
    Tue Aug 9 04:02:35 2011
    Reconfiguration started (old inc 4, new inc 6)
    List of nodes:
    0
    Global Resource Directory frozen
    * dead instance detected - domain 0 invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Tue Aug 9 04:02:35 2011
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Tue Aug 9 04:02:35 2011
    LMS 2: 0 GCS shadows cancelled, 0 closed
    Tue Aug 9 04:02:35 2011
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Tue Aug 9 04:02:35 2011
    Instance recovery: looking for dead threads
    Tue Aug 9 04:02:35 2011
    Beginning instance recovery of 1 threads
    Tue Aug 9 04:02:35 2011
    LMS 1: 1908 GCS shadows traversed, 0 replayed
    Tue Aug 9 04:02:35 2011
    LMS 2: 1907 GCS shadows traversed, 0 replayed
    Tue Aug 9 04:02:35 2011
    LMS 0: 1899 GCS shadows traversed, 0 replayed
    Tue Aug 9 04:02:35 2011
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    Tue Aug 9 04:02:37 2011
    parallel recovery started with 11 processes
    Tue Aug 9 04:02:37 2011
    Started redo application at
    Thread 2: logseq 6, block 2, scn 1837672332
    Tue Aug 9 04:02:37 2011
    Errors in file /u01/app/oracle/product/10.2.0/db/admin/BAF/bdump/baf1_smon_10253.trc:
    ORA-00600: internal error code, arguments: [kcratr2_onepass], [], [], [], [], [], [], []
    Tue Aug 9 04:02:38 2011
    Errors in file /u01/app/oracle/product/10.2.0/db/admin/BAF/bdump/baf1_smon_10253.trc:
    ORA-00600: internal error code, arguments: [kcratr2_onepass], [], [], [], [], [], [], []
    Tue Aug 9 04:02:38 2011
    Errors in file /u01/app/oracle/product/10.2.0/db/admin/BAF/bdump/baf1_smon_10253.trc:
    ORA-00600: internal error code, arguments: [kcratr2_onepass], [], [], [], [], [], [], []
    SMON: terminating instance due to error 600
    Tue Aug 9 04:02:38 2011
    Dump system state for local instance only
    System State dumped to trace file /u01/app/oracle/product/10.2.0/db/admin/BAF/bdump/baf1_diag_10229.trc
    Tue Aug 9 04:02:38 2011
    Instance terminated by SMON, pid = 10253
    Tue Aug 9 04:04:09 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 lan3 192.168.1.0 configured from OCR for use as a cluster interconnect
    Interface type 1 lan2 172.20.21.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned off.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    timed_statistics = TRUE
    Kindly help me get rid of this issue. I am waiting for a quick and helpful response from the gurus on the forum. Thanks in advance.
    Regards,

    > if above were really 100% correct, you would not be here posting about errors!
    Definitely, but these situations could become the cause of new BUGS, couldn't they?
    > I don't know what is real & what is unnecessary obfuscation.
    What part of the thread did you not understand?
    > It is not a good idea to have a subtraction sign/dash character as part of an object/host name; i.e. "Node-1".
    "Node-1" is not the hostname; it is just for clarity. The hostname is "sdupn101" for node-1 and "sdupn102" for node-2.
    > ORA-00600/ORA-07445/ORA-03113 = Oracle bug => search on Metalink and/or call Oracle support
    Newbie is my status on this forum, but I have some etiquette about using forums and support blogs. I searched but unfortunately didn't find any matching solution.
    Anyway, I will update you once I find any solution so that you can assist someone else in the future.

  • Emulator's abnormal behaviour

    Hi,
    I have developed a small and simple game in J2ME, in which I implemented all the functionality required for a game, such as menus, music, record keeping, etc.
    Everything runs fine if we run the application once or fewer than three times on the emulator. But after the third execution, the emulator fails to start the application the next time. My application is for the Nokia 3230 and the emulator I am using is Prototype_4_0_S60_MIDP_Emulator.
    I am not able to understand this behaviour from the emulator.
    I tried to resolve this in my own way by treating it as a memory issue, as I am basically a BREW developer. I found a memory leak between the start and end of the program, meaning memory is not totally freed when the application is terminated, so less memory is available for it the next time. So I made a freeResource function just like in BREW and call it in the destroyApp() method, but the problem remains.
    Kindly post suggestions that I can try.
    Thanks

    Hi,
    I checked garbage collection and the exceptions too, but they do not show anything wrong. No exception is thrown when I start the application the 4th time either; the emulator screen just hangs.

  • Abnormal behaviour with Lion: opening Pages causes my Mac to have lethargic performance since I installed the Lion OS. Does anyone have a similar problem?

    Since I first installed the Lion OS, my Mac has been having performance slowdowns while working in Pages. It appears to affect all functional software and hardware, e.g. when clicking either the Dashboard or the Desktop button nothing happens, Pages has horribly lethargic processing, etc.
    The strange thing is that Pages does show a considerable improvement when working in the fullscreen environment.
    If anyone else is having or has had this issue, could you please give advice.
    Thank you

    You would be better asking these questions in the Mac OS X 10.7 Lion Discussions area. The Pages Discussions area is just about Pages so we won't be able to help you with general Mac OS X problems.
    However, there is a big difference opening Pages/Numbers/Keynote from Launchpad versus my previous Dock. If I use Launchpad, that software now opens decently quickly. Using the Dock is much slower.
    Have you tried removing the Dock icons for iWork and readding them?
    I also downloaded the Lion Disk Recovery Assistant - I am surprised that this was not on my Mac already, though my automatic upgrade assistant informs me that no new upgrades are available.
    You may be surprised, but you're not being singled out. The Lion Disk Recovery Assistant was not made available to anyone via Software Update. It's a manual download from Apple's Support web pages.
    If you were not aware of the Mac OS X 10.7.1 update being available you should check System Preferences > Software Update and ensure it's set to check for updates as frequently as you require.

  • How to block a visitor with abnormal behaviour (visiting a website 50 times/day)?

    I've developed a website with Adobe Muse. My website is hosted by Business Catalyst. The website displays resources, mostly links to other websites. I don't keep any information about my visitors, there is no registration. I haven't any forum or blog, just a contact form with the BC Captcha (the hardest one to read).
    A visitor from Germany, always the same town, is coming on my website periodically. It looks like every half an hour, night and day. The number of visits equals the number of page views.
    The Adobe help/support line has no answer for this. I'm not a web designer at all; I don't use HTML programming, just Muse. I'd like to make it stop because this problem is distorting my website analytics. I'm using my visitors' geographic locations to find sponsors because it's a non-profit project.
    Any idea of what is happening? How to fix it?
    Many Thanks in advance!
    Jackie

    Jackie,  I am having the same issue and agree that it obscures our site analytics.
    I only have the "WebMarketing" subscription - so I cannot see how often people visit my site (unless I just don't know how to view that). Here is a screenshot of how many different locations I have and how much it affects my site visitor count, based on the other locations which seem "legit" because they have more page views than visits.
    I hope there is a way to fix or block this.

  • Abnormal: same query gets data in SQL but does not work on front-end

    Dear,
    Version: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    We created a package in the Oracle database two months ago and it was working fine, but since this morning the select statement in the package is not working when running via the application mentioned below: it raises NO_DATA_FOUND. The surprising thing is that the same query gets data, returning one record, when we execute it in SQL*Plus.
    I don't know whether this is abnormal behaviour or something else, because the same query, without changing any single column in the where clause, works in SQL but gets no data when submitting the request through the Oracle application, raising NO_DATA_FOUND.
    --thanks

    Actually, when I run this query in SQL it returns one record with the same parameters, while when we execute this select in PL/SQL with the same parameters Oracle raises a NO_DATA_FOUND error.
    So I am confused: the same query with the same parameters returns a record in SQL, but when we call it from the front end through the package it raises a 'no data found' error.
    I hope you understand now.
    --thanks
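    For reference, NO_DATA_FOUND is raised by SELECT ... INTO inside PL/SQL whenever the query returns zero rows, whereas the same statement in SQL*Plus just prints "no rows selected". Below is a hedged sketch (some_table, some_col and p_id are placeholders, not the poster's real objects) that traps the error and logs the bind values the application actually passed, since a parameter mismatch with the SQL*Plus test is a common culprit:
    DECLARE
       p_id     NUMBER := 45;          -- placeholder bind value
       v_value  VARCHAR2(100);
    BEGIN
       SELECT some_col
         INTO v_value
         FROM some_table
        WHERE id = p_id;
    EXCEPTION
       WHEN NO_DATA_FOUND THEN
          -- Log the parameters actually used, to compare with the
          -- values tested by hand in SQL*Plus.
          DBMS_OUTPUT.PUT_LINE('No row found for id = ' || p_id);
    END;
    /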

  • How to delete duplicates in oracle 10g ( strange behaviour)

    Recently we migrated from Oracle 8i to Oracle 10g and we face this problem.
    When we try to delete duplicates using rowid (analytical functions like row_number(), or a normal delete) and commit, we still find some duplicates remaining, because of which we are not able to enable constraints, resulting in process failure.
    When we run the same delete statement the next time, it removes more duplicates than required, or sometimes removes required rows, resulting in abnormal behaviour.
    I do not understand this strange behaviour after upgrading to Oracle 10g.
    It would be great if someone who has an idea about this could throw some light on it.
    thanks

    Gasparotto,
    Thanks a lot for letting me know a new procedure to delete duplicates using the LEAD function.
    I've tried this code on a temp table and it worked; let me use the same query on the production side and test it.
    Procedure for deletion of duplicates using the LEAD analytical function:
    create table temp ( col1 number(2) , col2 number(2) , col3 number(2));
    insert into temp values ( 1,2,10);
    insert into temp values ( 1,2,20);
    insert into temp values ( 1,2,30);
    insert into temp values ( 3,2,10);
    insert into temp values ( 3,4,12);
    insert into temp values ( 3,4,45);
    commit;
    COL1 COL2 COL3
    1 2 10
    1 2 20
    1 2 30
    3 2 10
    3 4 12
    3 4 45
    select col1,col2 , col3, LEAD(rowid) OVER (PARTITION BY col1,col2 order by null) from temp;
    COL1 COL2 COL3 LEAD(ROWID)OVER(PA
    1 2 10 AAAVBjAApAAAFyGAAB
    1 2 20 AAAVBjAApAAAFyGAAC
    1 2 30
    3 2 10
    3 4 12 AAAVBjAApAAAFyGAAF
    3 4 45
    6 rows selected.
    select rowid , temp.* from temp ;
    ROWID COL1 COL2 COL3
    AAAVBjAApAAAFyGAAA 1 2 10
    AAAVBjAApAAAFyGAAB 1 2 20
    AAAVBjAApAAAFyGAAC 1 2 30
    AAAVBjAApAAAFyGAAD 3 2 10
    AAAVBjAApAAAFyGAAE 3 4 12
    AAAVBjAApAAAFyGAAF 3 4 45
    SQL> DELETE temp
    WHERE rowid IN
    ( SELECT LEAD(rowid) OVER (PARTITION BY col1, col2 ORDER BY null)
    FROM temp );
    3 rows deleted.
    SQL> select rowid , temp.* from temp ;
    ROWID COL1 COL2 COL3
    AAAVBjAApAAAFyGAAA 1 2 10
    AAAVBjAApAAAFyGAAD 3 2 10
    AAAVBjAApAAAFyGAAE 3 4 12
    Thanks for the reply
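    For comparison, the same cleanup is often written with ROW_NUMBER() instead of LEAD(); here is a hypothetical sketch against the same temp table, keeping the lowest col3 row of each (col1, col2) group and deleting the rest:
    -- Keep one row per (col1, col2) group; every row numbered 2 or
    -- higher within its group is a duplicate and is deleted.
    DELETE FROM temp
    WHERE rowid IN
          ( SELECT rid
              FROM ( SELECT rowid AS rid,
                            ROW_NUMBER() OVER
                                (PARTITION BY col1, col2 ORDER BY col3) AS rn
                       FROM temp )
             WHERE rn > 1 );
    Unlike the LEAD() version, which keeps an arbitrary row per group (ORDER BY null), the ORDER BY col3 here makes the surviving row deterministic.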

  • Clearing values from request in decode method

    I am using a custom table paginator. In its ‘decode’ method I have the following code to control whether the ‘next’ link is clicked:
    String pLink = (String)requestMap.get("pLink" + clientId);
    if ((pLink != null) && (!pLink.equals(""))) {
         if (pLink.equals("next")) {
              // handle ‘next’
         } else if (pLink.equals("previous")) {
              // handle ‘previous’
         }
    }
    But the next sequence produces some problems:
    1.     Initial page load.
    2.     Click on ‘next’ link.
    3.     Table navigates ok to next page.
    4.     Reload page (push F5).
    5.     The previous click still remains in the request, so the decode method thinks the ‘next’ link is pressed again.
    6.     Abnormal application behaviour arises.
    So, I am trying to clear the ‘next_link’ key from the request, but the following code throws an UnsupportedOperationException:
    String pLink = (String)requestMap.get("pLink" + clientId);
    if ((pLink != null) && (!pLink.equals(""))) {
         if (pLink.equals("next")) {
              // handle ‘next’
         } else if (pLink.equals("previous")) {
              // handle ‘previous’
         }
         requestMap.put("pLink" + clientId, "");
    }
    Do any of you have some ideas?

    Hey, where are you RaymondDeCampo, rLubke, BalusC ... the masters of JSF Universe?
    ;-)

  • PLEASE HELP ME WITH DM PROBLEM

    Hi Guys,
    I kindly ask for your help with regards to my DM project. As you might recall, I am working on a project related to the field of agriculture, whose objective is to find the "optimal values" of the operating conditions that affect the outcome (the amount of meat produced, i.e. the weight) of an animal production (chicken broilers in my case). To do so, I have to use historical data of previous productions as my training dataset. The length of a production cycle is typically around 44 days. For each production, a data acquisition system stores the real-time and historical data of hundreds of parameters. These parameters represent sensor measurements of all the operating conditions (current temperature, set point temperature, humidity, static pressure, etc...) and these are what I refer to as the inputs. The operating costs and the production outcome are what I refer to as outputs. The operating cost is indirectly computed from parameters like water consumption, feed consumption, heater/cooling runtimes, and lighting runtime; and the outcome of a production is defined by parameters like animal mortality and conversion factor (amount of feed in lbs to produce 1 lb of meat). So the main objective of this project is to find the set of “optimal daily values” (1 value/day) for the inputs that would minimize the operating costs and conversion ratio outputs.
    The biggest problem I am facing right now is the following: the historical data that I have in the DB are time series for each measured parameter. Some of these time series follow some kind of cyclic pattern (e.g. daily water/feed consumption …) while others follow an increasing/decreasing trend (animal weight, total heater run time, total water/feed consumption…..). My goal is to be able to come up with a model that suggests a set of curves for the optimal daily values throughout the length of the production cycle, one curve for each measured input/output parameter. This model would allow the farmer to closely monitor his production on a daily basis to make sure his production parameters follow the “optimal curves” suggested by my model. I have looked at ANNs and I think they might be the solution to my problem since they allow modeling of multiple input/output problems (am I wrong?), but I could not figure out a way to model the inputs/outputs as time series (an array of values for each parameter). As far as I know, all kinds of classifiers accept only single-valued samples.
    One approach would be to create one classifier/day (e.g. for day1: extract a single value for each parameter and use these values as a training sample and repeat this for all previous production to construct the training set). The problem with this approach is that 44 or so classifiers will be constructed (hard to manage all of this) and each of these resulting ANN will be some kind of “typical average” of the training data but not necessarily the “optimal values” leading to the best production outcome, if I am not mistaken.
    Another approach would be to find a way to feed in the inputs and outputs as time series (an array of 44 daily values for each input/output parameter). In this case, there would be only one resulting ANN and the training samples, would be a set of arrays for each parameter, as opposed to single daily parameter values in the first case. The problem is, I could not find any classifier that would allow me to do that.
    Another issue that I have is the amount of data. While a single production cycle could represent 1-2GB of data, the length of the production cycle (44 days) makes it difficult to have 100’s of production cycle historical data, as I could gather data for no more than 7 full cycles/year. Fortunately, a farm can have many production units (5-10 barns/site in big sites), so this makes it possible to have 40-70 cycles/yr. My question is: would this be enough to come up with an acceptably accurate model or is it necessary to have hundreds of samples?
    Thanks for taking the time to reading this lengthy post, and I really appreciate your help and thank you in advance.
    Cheers.

    Hi Guys,
    I would really appreciate any help please. Maybe I was not precise enough
    last time, so I am going to describe things more clearly with examples.
    Since the produced model will be used as part of an alarm/flagging
    system, I will have to produce a curve of each of the parameters of
    interest using 4 values/day=once/6h, and do this for the 44 days, this
    is to flag and correct any abnormal behaviour ASAP. So, the whole
    curve would have 4*44=176 values. E.g. for the water consumption
    curve: day1: 12AM=65Gal, 6AM=150, 12PM ... DAY44=6PM=1500Gal. I would
    have to come up with similar curves for each of the parameters of
    interest (inputs/outputs). Now as far as ANNs are concerned, do I
    have to produce 176 of these ANNs, one for each predicted value?
    ANN1: input1 (temperature-value Day1@12AM) input2 (humidity-value
    Day1@12AM)... output1 (feed consumption-value Day1@12AM), output2
    (heater_runtime-values Day1@12AM)... and train the ANN with the 50-60
    samples (Day1@12AM) from previous productions. This would produce an
    ANN for predicting the value of each parameter for Day1@12AM for
    future productions, etc. This would be quite computationally intensive,
    so I am wondering if there is a better way, maybe to feed in all 176
    values of the time series in one shot, to have something
    like input1(temperature-values 1-176), input2(humidity-values
    1-176)... output1(feed consumption-values 1-176), output2 (heater
    runtime-values 1-176)... and this will produce only one ANN which
    will predict the 176 values for all parameters of future productions?
    I would really appreciate your help as I am really stuck at this.
    Cheers.

  • GR/IR difference visible in MB5S but not clearing via MR11

    Dear gurus,
    I am facing abnormal behaviour of the system. The system is showing a few POs in MB5S but not showing them in MR11 for clearing purposes. Has anyone faced the same situation?
    Regards,
    FR

    Transaction MB5S is used to analyse the purchase order history and will
    list all entries where the goods receipt quantity is not equal to the
    invoice receipt quantity. The report looks at the purchase order from a
    logistics viewpoint.
    What MB5S does is look in table EKBE, where the PO history is stored.
    If there is any balance between the GR and IR, this is shown in the
    report. Sometimes there is confusion in the interpretation of the
    results of RM07MSAL. The aim of this program is to show how the GR is
    effectively valuated if the existing invoices are taken into account,
    independently of how the GR itself was valuated (or not valuated).
    MB5S is not the appropriate functionality for analysis of the clearing
    account.
    Transaction MR11 will only pick up variances in quantity, not in values.
    The aim of MR11 is to clear the GR/IR account, not the quantities.
    However, an open item on the GR/IR account can only exist, if the
    quantities are different (for normal POs with valuated GR).
    MR11 posts like a debit/credit note with value 0 and the open quantity.
    This clears the GR/IR account, and the quantities are cleared as well,
    but only as a by-product of this mechanism.
    POs with non valuated GR never have open items on GR/IR account,
    because neither GR nor IR post to the GR/IR account.
    For additional information about the MR11 transaction, please check
    SAP note 10757 (FAQ: MR11, clear GR/IR clearing account).

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we encounter abnormal behaviour. The code executes without any exceptions or warnings, but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we try to insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In the same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put succeeds in any scenario (both key and value are visible using the Coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
         public static void main(String[] args) {
              Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
              con.setAutoCommit(false);
              try {
                   OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
                   CId tID = new CId();
                   tID.setId(11111L);
                   C tC = new C();
                   tC.setVal(new BigDecimal("100.1"));
                   cache.insert(tID, tC);
                   con.commit();
              } catch (Exception e) {
                   e.printStackTrace();
                   con.rollback();
              } finally {
                   con.close();
              }
         }
    Code that succeeds (but without transaction APIs):
         public static void main(String[] args) {
              try {
                   NamedCache cache = CacheFactory.getCache("stt.t1");
                   CId tID = new CId();
                   tID.setId(11111L);
                   C tC = new C();
                   tC.setVal(new BigDecimal("100.1"));
                   cache.put(tID, tC);
              } catch (Exception e) {
                   e.printStackTrace();
              } finally {
                   // (finally body truncated in the original post)
              }
         }
    And here is what the Coherence console lists if we use the transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestions, please?

    Cristian,
    After looking at your configuration I noticed that it is incorrect: for a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
    </backing-map-scheme>
    To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
    </transactional-scheme>
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John

  • Same query giving different results

    Hi
    I'm surprised to see this behaviour from Oracle. I have two different sessions for the same schema on the same server. In both sessions the same query returns different results. The query involves some calculations like sums and divisions on a number field.
    I imported this data from another server using the export/import utility available with the 9i server. Before the export everything was going fine. Is there some problem with this utility?
    I'm using Developer 6i as the front end for my client/server application. The behaviour of my application is very surprising: once it shows the correct data, and if I close the screen and reopen it, it shows wrong data.
    I'm really stuck with this abnormal behaviour. Please tell me the possibilities and corrective actions for these conditions.
    Regards
    Asad.

    There is nothing uncommitted in either session, but still different results are returned.
    I'm sending you the exact query and the results returned in both sessions.
    Session 1:
    SQL> rollback;
    Rollback complete.
    SQL> SELECT CC.CREDIT_HRS,GP.GRADE_PTS
    2 FROM GRADE G, COURSE_CODE CC, GRADE_POLICY GP
    3 WHERE G.COURSE_CDE=CC.COURSE_CDE
    4 AND G.SELECTION_ID=45 AND G.GRADE_TYP=GP.GRADE_TYP
    5 AND G.TERM_PROG_ID=17 AND GP.TERM_ID=14
    6 /
    CREDIT_HRS GRADE_PTS
    3 4
    4 3.33
    4 3.33
    3 4
    3 4
    3 4
    3 4
    7 rows selected.
    SQL>
    SESSION 2:
    SQL> rollback;
    Rollback complete.
    SQL> SELECT CC.CREDIT_HRS,GP.GRADE_PTS
    2 FROM GRADE G, COURSE_CODE CC, GRADE_POLICY GP
    3 WHERE G.COURSE_CDE=CC.COURSE_CDE
    4 AND G.SELECTION_ID=45 AND G.GRADE_TYP=GP.GRADE_TYP
    5 AND G.TERM_PROG_ID=17 AND GP.TERM_ID=14
    6 /
    CREDIT_HRS GRADE_PTS
    3 4
    4 3.33
    3 4
    3 4
    3 4
    3 4
    6 rows selected.
    SQL>
    You can see that in session 1 seven rows are returned, while in session 2 six rows are returned. I issued a rollback before the query to be sure that the data in both sessions is the same.
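    One possibility worth checking after an export/import, offered only as a guess: a corrupt or mismatched index, since the two sessions may pick different access paths for the same statement. A sketch of the standard verification (the index name grade_pk is assumed for illustration):
    -- Validate the table and all of its indexes; an ORA- error here
    -- points at physical corruption or a table/index mismatch.
    ANALYZE TABLE grade VALIDATE STRUCTURE CASCADE;
    -- If an index turns out to be bad, rebuilding it is the usual fix.
    ALTER INDEX grade_pk REBUILD;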

  • Pricing populating for period not valid in sales order

    Hi Frnds
    I have an issue with a sales order which has, say, 10 line items, and the pricing for each line item is maintained in VK11. The client updates the pricing every year, so the records also show previous prices for each material. When creating the sales order, the system is picking pricing for a few materials from year 2005 and a few from 2008. This seems like abnormal behaviour to me. I checked the pricing records and the new ones are active.
    Could anyone point me in the right direction?

    It seems we are missing a point here: the validity dates maintained for the new record are correct. The system is pulling the old record for who knows what reason. For a few line items the right pricing is populating, but for some it is pulling the old pricing record.
    I hope this helps.
