Differences in 1.4 over 1.3

Hi,
I am new to Java IDL and I probably do not even know what I am doing, but I am trying to learn. I followed the Java IDL tutorial for the 1.3 release, but I am using the 1.4.01 JDK. The 1.3 release tutorial says that when you run idlj -fall Hello.idl you should get the following files:
_HelloImplBase.java
_HelloStub.java
Hello.java
HelloHelper.java
HelloHolder.java
HelloOperations.java
But when I run it on 1.4, I get these:
Hello.java
HelloOperations.java
HelloPOA.java
_HelloStub.java
HelloHelper.java
HelloHolder.java
Which of these are the server classes and which are the client classes? Please guide me.
Thanks in advance.
TP

Ok..
This is fast, posting a reply within 5 minutes of the previous post.
I used the option idlj -fall -oldImplBase Hello.idl.
This gave me all the classes that were generated with 1.3.
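For completeness: with the default (POA) mapping in 1.4, HelloPOA is the server-side skeleton your servant extends, while _HelloStub (together with HelloHelper and HelloHolder) is what the client uses; Hello and HelloOperations are the shared signature interfaces. Below is a minimal servant sketch for the POA mapping. It assumes Hello.idl declares a single operation, string sayHello(), as in the tutorial; that operation name is an assumption, not something taken from this thread.

// Hypothetical servant for the 1.4 (POA) mapping of Hello.idl.
// Assumes the IDL declares: interface Hello { string sayHello(); };
// With idlj -fall -oldImplBase you would extend _HelloImplBase instead of HelloPOA.
public class HelloImpl extends HelloPOA {
    public String sayHello() {
        return "Hello from the POA-based servant";
    }
}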

Similar Messages

  • What is the difference between the Mac's VoiceOver and the text to talk feature


    VoiceOver doesn't need someone to type in the text first. VoiceOver works with the specific information displayed on screen so that a blind person can know where his or her cursor is or what is being displayed. Although it is an application of text to talk, it is more than the text to talk feature.

  • Difference between Share Disks over WAN vs. over Internet using Bonjour?

    I would like to find out the difference between selecting only the "Share Disks over WAN" option vs. also selecting the additional "Share disks over Internet using Bonjour" option. Any info is greatly appreciated! Thanks!

    You can access drives plugged into your AirPort Extreme base station from outside the network by typing the IP address of the modem/base station into the "Connect to Server" dialogue on the Finder's "Go" menu (or press Command+K when in the Finder). You can get the base station IP by opening AirPort Utility from your Utilities folder and clicking "Manual Setup"; at the very bottom of the Summary tab should be the IP address given to the base station by your modem.
    As long as you have "Share Disks over WAN" enabled, when you connect using that IP from another location you will be prompted to authenticate to the AirPort base station, and if authentication is successful you will be prompted to mount a disk (the disk plugged into your AirPort base station should be the only available disk to mount).
    Keep in mind you must give out the base station password unless you allow guest connections to the disk.

  • SQL Loader Truncate and SQL TRUNCATE difference

    Could anyone let me know what the difference is between the TRUNCATE option used in a SQL*Loader control file and the TRUNCATE command used in SQL? Is there any impact or difference between the two on the data files?
    Thanks

    Mr Jens, I think TRUNCATE in the SQL*Loader control file reuses the extents, unlike the SQL TRUNCATE command. In my opinion it is best to truncate these to show the normal usage of these tables, not the elevated values.
    Could you please comment further?

  • SQLLDR TRUNCATE and SQL TRUNCATE Difference

    Could anyone let me know what the difference is between the TRUNCATE option used in a SQL*Loader control file and the TRUNCATE command used in SQL? Is there any impact or difference between the two on the data files?
    Thanks

    Duplicate posting
    SQL Loader Truncate and SQL TRUNCATE difference

  • Voice over wireless calc

    Can someone give an example of the calculation parameters for using voice over wireless?
    Is this equal to VoIP over Ethernet or not?

    Sorry for the imprecise question.
    My question is about a WAN link between two sites, not local telephony. How should I treat this for calculation: as VoIP on Ethernet, VoIP on PPP, or something else? Are there any differences in calculating overhead over a wireless WAN link?

  • Difference between Asset Register and Trial Balance

    Hai Guys,
    Our client went live in the year 1999. At present they are on version 6.0, but the New GL is not activated.
    There is a situation where some of the asset accounts have a difference between the amounts shown by the Asset Register and the amounts shown by the Trial Balance.
    The issue was raised with the SAP support team. I was told about a special transaction code (ABF1) which is not available through the standard menu. However, it can be used only if we need to change the GL in line with Asset Accounting, and only if the GL balance is less than the Asset Accounting balance. In our case, though, the GL figures have been frozen for the year end and cannot be changed.
    In this situation, SAP support told me about a program, RACORR05, which will give the missing line items in Asset Accounting. If I run it, it gives an output of some document numbers that I do not exactly understand.
    The balance has been carried over for quite some years and it is of no use to go back into those years to find the reason for the difference. Now Asset Accounting is to be brought in line with the GL balances, and that without touching the GL balances in any way. I know that this is difficult.
    But will any of you be able to help me in this regard, as I have tried every way out but could not find a solution?

    Hai Paul,
    Thanks for your reply. I have already gone through note 69225. In our case, the requirement is to align Asset Accounting without touching the GL balances of the concerned assets, which is a difficult situation. When contacted, the SAP support team gave me a program, RACORR05, which outputs a list of documents that exist both in GL and in AA. I do not understand the output. I have seen all the related notes in this regard. The difference has been carried over for quite some years and there is no use in going into the details of the reason for the difference, as those years are already closed.
    The total difference between the Asset Register and the GL is around 55 lacs (the GL balance is less than the Asset Register balance), and the difference in accumulated depreciation is around 43 crores (!!!!!!!!) (the balance as per the Asset Register is less than the GL balance).
    Is there any way to resolve this? Can I handle it using AS92 (legacy asset transfer values)? I would need to change the legacy asset transfer date for this, and I suspect that is not advisable in a live company code. Please comment on whether it can be done.
    Thanks again for your reply.

  • Difference between versions 4.7EE, ECC 5.0 and ECC 6.0

    Hi guys. Today I attended an interview, and there they asked me the difference between the versions 4.7EE, ECC 5.0 and ECC 6.0.
    So, could anyone please post the differences and the advantages of one over another?

    Could you please search the forum first? There's a lot of questions asking for the same...
    Greetings,
    Blag.

  • How to clear vendor open items if the vendor invoice currency and payment currency are different

    Hi All.
    How do I clear vendor open items through F-44 if the vendor invoice currency is EUR and the payment currency is USD, but the local currency is INR?
    While clearing through F-44 the system shows the error "too large for clearing; clearing is not possible".
    I checked all the configuration; configuration-wise there is no problem.
    BR.
    Chandra

    Hi Chandra,
    Choose any one of the currencies, i.e. EUR/INR/USD, for clearing in F-44. After selecting the line items for clearing, the system will show a difference. Click on the overview button and manually write off the difference by selecting an account, i.e. a dummy or small-differences account; after that click on "process open items" and the system will show a difference of 0 and simulate the document. Here the system will post the exchange gain/loss GL postings along with the other line items. After saving the document, manually pass a journal entry between the dummy account and the gain/loss account. I have explained this clearly in the example below.
    Invoice is in USD - 1000 & INR - 60000
    Payment is in INR - 60000
    Now I am going to clear these in INR currency in F-44 on 31.03.2015. On this date the exchange rate for USD is 60.10. At the time of clearing the system will post the below entry:
    Vendor A/c Dr 60000 (invoice)
    Vendor A/c Cr 60000 (Payment)
    Gain from exchange rate A/c Cr  100 (60000 - 60100)
    Small diff.write off A/c (or) Dummy A/c Dr 100
    After making the above posting, we have to pass the below manual JV in FB01:
    Gain from exchange rate A/c Dr  100
    Small diff.write off A/c (or) Dummy A/c Cr 100
    Regards,
    Mukthar
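    To spell out where the 100 in this example comes from (the figures above imply an original booking rate of 60.00 INR/USD, since USD 1000 was booked as INR 60000):

    $$1000 \times 60.10 \;-\; 1000 \times 60.00 \;=\; 60100 - 60000 \;=\; 100 \text{ INR}$$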

  • AP Payment User Responsibility's Data Group to be assigned for application as Payables or Payments?

    Hi,
    I am on version R12.1.3. I have a requirement to create a user who must have access only to create payments in the AP module. So when I try to define a responsibility for this payment user,
    there is a data group to be assigned. In this data group block, for application there are 2 options, i.e. Payables & Payments. What should I select, and what is the difference of selecting one over the other?
    Appreciate your help.
    Thanks

    Hi,
    I did not see any difference. I performed a test case where I created a payment using the new responsibility I had created, with the data group application as Payments, and the payment was successful. Later I modified the data group for this new responsibility from Payments to Payables, and the test payment I made was also successful.
    Not sure about its impact; it may be that I am using responsibilities which have full access, hence I cannot see the difference in my environment.
    Regards,
    Ivruksha

  • Opening and fetching a cursor.. what actually happens?

    So, if we have an explicit cursor... we open it with open c_cursor_name... and fetch it into some variable...
    My question is: what really happens when we open the cursor?
    Does Oracle only make an instance of a cursor object in memory? Or does it read some basic data about the rows that are going to be fetched (rowids?)? Or what?
    Also, if we make an insert into a table that the cursor is going to loop over (while the cursor is fetching), should the cursor read this new data? For example, what is the difference if the cursor loops over an indexed ID and we insert a row with ID=10 while the cursor is currently on ID=100? Or the opposite? (Commit included in the insert.)
    Oh, so many questions :)
    tnx :)

    Not really. The same SQL cursor in the shared pool applies.
    The difference is on the client side. Each of the methods you've listed is essentially using a different method on the client side to interact with the cursor on the server.
    Some methods are more desirable in some cases than another - there's not a single superior method. Each has pros and cons and addresses a specific set of client coding requirements.
    There are some preferred methods though on the client. The three primary ones are:
    a) use bind variables (enables one to re-use the same SQL cursor)
    b) re-use the same client cursor handle for tight loop operations
    c) use bulk binding
    The first one is kind of obvious - using bind variables creates a cursor "program" that can be called/executed with different values. Like an INSERT cursor, for example. The same SQL cursor can be used to insert a million rows, with each execution using different variable values to insert.
    The second one is not that applicable to the PL language in PL/SQL - thanks to the integration between the PL and SQL languages. In other client languages though, you want to re-use not only the same server cursor, but the same client cursor handle. As this saves you a soft parse. So instead of this approach:
    // pseudo code
    for i in 1..count
      CreateCursor( c, <sql statement> );
      BindValue( c, 1, myVariable[i] );
      ExecCursor( c );
      CloseCursor( c );
    end loop;
    ...the following is far more efficient:
    // pseudo code
    CreateCursor( c, <sql statement> );
    for i in 1..count
      BindValue( c, 1, myVariable[i] );
      ExecCursor( c );
    end loop;
    CloseCursor( c );
    This approach re-uses the client handle - which means a single db parse was needed. The prior approach required a brand new parse for each loop iteration. (Even soft parsing is an overhead that should be avoided where possible.)
    The last primary factor is bulk processing. This is fairly well documented for the PL language - the important bit to remember is that bulk processing is only intended to pass more data with a single call between the PL and SQL engines. Nothing more. The price of that is a bigger PGA memory footprint. Thus there is a balance between how much data (rows/variables) one should pass between the two versus the amount of memory that consumes.
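    To make the client-side advice above concrete, here is a hedged sketch in Java/JDBC (not from the original post): the PreparedStatement is created once and re-used across loop iterations (one parse, many executions with different bind values), and addBatch/executeBatch passes many rows per round trip, which is the JDBC counterpart of bulk binding. The table name emp_copy and its single SAL column are made up for illustration.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class CursorReuseSketch {
        // Insert all salaries with a single prepared statement: the statement handle
        // is created once, each iteration only binds a new value, and rows are sent
        // to the database in batches instead of one network round trip per row.
        static void insertSalaries(Connection conn, int[] salaries) throws SQLException {
            String sql = "INSERT INTO emp_copy (sal) VALUES (?)"; // emp_copy is hypothetical
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                for (int i = 0; i < salaries.length; i++) {
                    ps.setInt(1, salaries[i]);      // bind variable: same statement, new value
                    ps.addBatch();                  // queue the row locally
                    if ((i + 1) % 100 == 0) {
                        ps.executeBatch();          // send 100 rows in one call ("bulk" style)
                    }
                }
                ps.executeBatch();                  // flush whatever is left
            }
        }
    }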

  • Is there a way for iTunes to compare your library to the music that is on the iPod?

    I have recently moved all my music to my laptop, as my iPod may break any day now because it is so old, and I wanted to organise my library so I don't lose anything. Over the past few years I have been manually adding music to my iPod because it meant I could have music on my desktop computer and my laptop; however, on deciding to move it all to one place, I have discovered I have more songs in my iTunes library than on my iPod, but I can't work out which ones they are. Is there any way iTunes can run a comparison to determine the differences between the music on the iPod and in iTunes? I would just sync the entire library, but that would mean that if there are songs on my iPod that are not in my iTunes library, they would be deleted. (I would go through my library manually to determine the differences, but I have over 4500 songs, so that would be a little difficult.)

    I have only synced a device on a single occasion in my life. Doesn't it say something about how many songs will transfer?
    The laptop is supposed to be the master repository for everything on the iPod, and they are designed to be in sync.
    You can check the Dougscripts web site. There are all kinds of goodies there, but I don't know if they extend to something like this. There's also third-party software, but again I don't know if it goes to that level of detail. People usually use it when they are trying to rescue content from a non-backed-up computer.

  • Very frustrated with poor customer service and lack of integrity

    Earlier today my wife and I bought a TV from your Huntsville, AL store (removed per forum guidelines). We were looking at your 32" TVs and had several questions about them. After flagging down a sales associate, we questioned the pricing of several TVs. They were advertised under the display as one price, then where the inventory was, were labeled a different price. The salesperson confirmed that the smaller price was the correct price and the displays were incorrectly labeled. We proceeded to pick out a 32" Vizio E320I-B1/E3 TV that was labeled $219.99 where it was stocked (there were two rows of this TV, all labeled $219.99), while the display was labeled $259.99. Again, I flagged down the sales associate (Taylor) on the floor and questioned the display pricing vs the shelf pricing. He AGAIN confirmed that the shelf pricing was the correct pricing and the display pricing was for a different model. We proceeded to check out, purchased a 2 year protection plan along with an HDMI cord, and were on our way, new TV in tow.
    On the way home we reviewed the receipt and noticed we had been charged incorrectly for the TV. It rang up as 259.99 even though it was clearly labeled 219.99 and we had verified TWICE with an associate that 219.99 was the correct price. We went back to the store, thinking we would be refunded the difference. After spending over 30 minutes with customer service explaining what happened, customer service informed us that they could not refund the difference, even after we showed them where the TVs were and priced at 219.99 AND verified with the sales associate that he had confirmed that price with us TWICE. We then returned the TV for a full refund, and my money is still tied up in the transaction until sometime next week (all for a TV that was FRAUDULENTLY sold to me!!). Not only was it mislabeled, but your sales associate told me TWO times that the labeling was correct.
    I am disappointed with the lack of customer service that we received. It is beyond frustrating to me that your company is unable to honor a $40 error in pricing that your very own sales associate confirmed. I believe that the store was 100% at fault for this situation and am more than displeased with the original situation and the events that followed. I consider it a disgusting lack of integrity to fraudulently sell a customer an item and refuse to right the situation when able. There is no doubt in my mind that the store manager (Cedric) or division manager could have authorized the TV to be sold at the price listed (and confirmed by the sales associate twice!), but they simply chose not to.
    Prior to today I had a positive view of Best Buy and considered it my store of choice for many items. My wife even did seasonal work for this store several years prior. Unfortunately, I will no longer choose to give my money to a corporation that does not value integrity, honesty, and accountability. I understand that you are a large corporation and my few purchases probably won't even put a dent in your profit margin, but I would just like to remind you that I represent your entire customer base. If this is the type of customer service that your company prides itself on, then I am sure this is only one of many incidents. I find that extremely unfortunate. I will now be purchasing the same TV from another company for $259.99 (where it was actually labeled correctly, and the sales associate is able to accurately confirm the pricing). My future business will not be with you after today.

    Hello lauralou1105,
    Purchasing a new TV should be fun, exciting, and straightforward, and not be complicated by unclear signage and conflicting information. I very much regret that this was your experience at the Huntsville store.
    I completely understand your conclusion that -- after the attempts to verify pricing that you made -- you should have had no questions and no surprises about the cost of the TV after purchase. Did Cedric give you any explanation as to why the TV you bought rang up at a different price or for his not honoring the display pricing?
    It is wholly disheartening to hear that this experience may influence how you shop in the future. It is my hope that you will one day give Best Buy another chance to win you over.
    Please know that I'm grateful that you shared your experience with us.
    Sincerely,
    John|Social Media Specialist | Best Buy® Corporate

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better.  Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting.  If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important).  As for the rest of the settings (I'll do the important things on the left hand side first, then the things on the right hand side...the things at the bottom are HTT settings that I'm not going to muck with):
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value.  This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here.  I was stable at '2', selecting '1' resulted in a spontaneous reboot.  Benchmark change from default setting of 4:  +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe.  Basically, think of memory as a two-dimensional grid...to access a location in a grid, you need both a row and column number.  The way memory accesses work is that the system first asserts the row that it wants (the row address strobe, or RAS), and then asserts the column that it wants (column address strobe, or CAS).  Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access multiple columns from the same row at once to improve performance (so you get one RAS, followed by several CAS strobes).  I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput.  It is likely however that this setting has a significant impact on latency.  Benchmark change from default setting of 2:  negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually posted a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it.  With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3dMark 2001 SE score.  Like I said, not earth-shattering, but a solid performance boost, and it's free to boot.  Settings that I felt had no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out.  The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself.  From my understanding it's the next logical step from the WCREDIT programs.  I understand how a clock gen can misreport frequency because it's probably not measuring frequency itself but rather a mathematical representation of a few numbers it has gathered plus one clock frequency (HTT maybe?), and the non-supported dividers mess up the math...but I think the tweaker just extracts hex values straight from the registers and displays them in "English". I mean, it could be wrong, but seeing how I watch the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when it's on Auto with aggressive timings disabled, I actually want to side with the A64 tweaker in this case.
    Hey, does anyone know what Tref in A64 relates to in the BIOS?  i.e. 200 1.95us = what in the BIOS?  1x4028, 1x4000 - I'm just making up numbers here, but it's different than 200 1.95; last time I searched I didn't find anything.  Well, I found A LOT, but not what I wanted...

  • How to preset the order of rows in the outer query of a correlated query?

    Good morning,
    I have the following simple query:
    select empno,
           ename,
           sal,
           sum(case
                 when rn = 1 then sal
                 else -sal
               end) over (order by sal, empno) as running_diff
       from (
             select empno,
                    ename,
                    sal,
                    row_number() over (order by sal, empno) as rn
               from emp
              where deptno = 10
              );
    That calculates a running difference and uses "row_number() over (...)", which is an Oracle-specific feature, to do so. It yields the following result (which we will consider correct):
         EMPNO ENAME             SAL RUNNING_DIFF
          7934 MILLER           1300         1300
          7782 CLARK            2450        -1150
          7839 KING             5000        -6150
    I wanted to come up with a different solution that was not Oracle specific. I tried the following code:
    (EDIT: after additional thought, that code is totally different in meaning and will never come close to the above result. Consider it wrong and ignore this attempt altogether.)
    select a.empno,
           a.ename,
           a.sal,
           (select case
                     when a.empno = min(b.empno) then sum(b.sal)
                     else sum(-b.sal)
                   end
              from emp b
             where b.empno <= a.empno
               and b.deptno = a.deptno) as running_diff
      from emp a
     where a.deptno = 10;
    but the result is
         EMPNO ENAME             SAL RUNNING_DIFF
          7782 CLARK            2450         2450
          7839 KING             5000        -7450
          7934 MILLER           1300        -8750
    which is a long way from the original result. I've tried everything I could think of to order the rows before the running difference is calculated, but have been unsuccessful.
    Is there a way to change this second query --(without using Oracle specific features)-- without using windowing features that would yield the same result as the first query ?
    Rephrase of the above question:
    Is there a way, using plain vanilla SQL (that is aggregate functions and set operations such as joins and unions) to create a query that yields the same result as the first one ?
    Also, this is not for production code. This is simply an exercise in set manipulation that I'd like to see a solution for.
    Thank you for your help,
    John.
    Edited by: 440bx - 11gR2 on Jul 18, 2010 12:50 AM - correct "ho w" to "How"
    Edited by: 440bx - 11gR2 on Jul 18, 2010 1:42 AM - struck out all references to row_number and windowing features being Oracle specific features.
    Edited by: 440bx - 11gR2 on Jul 18, 2010 3:51 AM - Noted that my try is woefully wrong and restated the objective to make it clearer.

    Hi, John,
    One way to get a running total (which is basically what you want) is to do a self-join. Join each row (let's call it the current row, or c) to itself and everything that came before it (let's call this the previous row, or p), and do a regular aggregate SUM, like this:
    WITH  got_base_sal  AS
    (
         SELECT    deptno
         ,         2 * MIN (sal)   AS base_sal
         FROM      scott.emp
         GROUP BY  deptno
    )
    SELECT    c.deptno
    ,         c.empno
    ,         c.ename
    ,         c.sal
    ,         b.base_sal - SUM (p.sal)   AS running_diff
    FROM      scott.emp  c
    JOIN      scott.emp  p  ON    c.deptno = p.deptno
                            AND   (    c.sal  > p.sal
                                   OR  (     c.sal   =  p.sal
                                        AND  c.empno >= p.empno
                                       )
                                  )
    JOIN      got_base_sal  b  ON  c.deptno = b.deptno
    WHERE     c.deptno  IN (10)
    GROUP BY  c.deptno
    ,         c.empno
    ,         c.ename
    ,         c.sal
    ,         b.base_sal
    ORDER BY  c.deptno
    ,         running_diff  DESC
    ;
    Output:
        DEPTNO      EMPNO ENAME             SAL RUNNING_DIFF
            10       7934 MILLER           1300         1300
            10       7782 CLARK            2450        -1150
            10       7839 KING             5000        -6150
    I said you basically want a running total. There are two differences between a running total and your requirements:
    (1) You want to have a total of the negative of what's in the table. That's trivial: use a minus sign.
    (2) You want the first item to count as positive instead of negative. That's not so trivial. The query above counts all sals as negative, but adds an offset so that it appears as if the first item had been counted as positive, not negative.
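    To make that offset explicit (the rows are accumulated in order of increasing sal, so the first row accumulated has $s_1 = \min(\text{sal})$ for the deptno):

    $$\text{running\_diff}_k \;=\; s_1 - \sum_{i=2}^{k} s_i \;=\; 2 s_1 - \sum_{i=1}^{k} s_i \;=\; 2\min(\text{sal}) - \sum_{p \,\le\, c} p.\text{sal}$$

    which is exactly the b.base_sal - SUM (p.sal) expression used in the query above.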
    You didn't say what you want to do in case of a tie (two or more rows having the same sal). The query above uses empno as a tie-breaker, so that all sals are calculated as if they were distinct. This is similar to what analytic functions do when the windowing is based on rows. If you want something similar to windowing by range, that might actually be simpler.
    The query above calculates a separate running_diff for each deptno, similar to "PARTITION BY deptno" in analytic functions. You happen to be interested in only one deptno right now, but you can change the main query's WHERE clause, or omit it, and the query will still work. If you don't want this feature (analogous to not having any PARTITION BY), it's easy to modify the query.
    You could also get these results using a recursive WITH clause. That meets the criteria of avoiding analytic functions and Oracle-specific features, but not the one about using only plain, simple SQL features.
