Minor inconsistency in javac prediction of branch

Just curious why javac can evaluate a constant branch condition when determining whether a local variable is initialized, but cannot do the same to determine whether unreachable code exists:
public void foo() {
     int x;
     if (true) {
          x = 7;
     }
     int p = x;
}
No problem here: javac can detect that the local variable x will be initialized every time and does not complain that x is uninitialized when it is assigned to p. Very good, javac.
public void foo() {
     if (true) {
          return;
     }
     int x = 7;
}
In this case, javac fails to notice that a branch with the same condition will always be taken, which leaves int x = 7 as unreachable code.
So in the first code example, javac confidently concludes the branch will execute, ruling out the possibility of an uninitialized variable being used; however, in the second code example, it believes there is a possibility that this same if branch will not be taken, so the code that follows may be executed. This seems inconsistent.
I'm not disappointed in this behavior, just surprised and wondering if there is a reason for it.

in this case, javac fails to notice that the branch with the same condition will always execute, which leaves the int x = 7 as unreachable code.
It's probably not malicious or deliberate - just a condition that the compiler implementor forgot to check. And it's not a required check according to the language semantics either - you just lucked out in one case, and didn't in the other.
Send in a bug report/enhancement request for consideration in a future release.
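For what it's worth, the asymmetry is easy to reproduce. The sketch below is my own illustration (not from the original post): definite-assignment analysis does evaluate the constant condition, while the unreachable-statement check ignores if conditions - as I understand JLS 14.21, that exemption is deliberate, to support "flag variable" conditional compilation - whereas a constant-true while loop does trigger the unreachable-code error.

```java
public class BranchDemo {
    public static void main(String[] args) {
        // Definite assignment evaluates the constant condition,
        // so x counts as initialized after the if.
        int x;
        if (true) {
            x = 7;
        }
        int p = x;           // compiles: x is definitely assigned
        System.out.println(p);

        // The unreachable-statement check, however, ignores if
        // conditions, so the second example compiles quietly:
        //     if (true) { return; }
        //     int z = 7;    // accepted, though never executed
        //
        // A constant-true loop, by contrast, IS flagged:
        //     while (true) { }
        //     int y = 1;    // error: unreachable statement
    }
}
```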

Similar Messages

  • Database limits for Grid control repository?

    Document "Oracle® Database Licensing Information / 10g Release 2 (10.2)" [Part Number B14199-10] discusses the restricted use license of database for grid control repository and rman repository use.
    Infrastructure Repository Databases: A separate Oracle Database can be installed and used as a Recovery Manager (RMAN) repository without additional license requirements, provided that all the Oracle databases managed in this repository are correctly licensed. This repository database may also be used for the Oracle Enterprise Grid Control repository. It may not be used or deployed for other uses.
    A separate Oracle Database can be installed and used as a Oracle Enterprise Manager Grid Control (OEM Grid Control) repository without additional license requirements, provided that all the targets (databases, applications, and so forth) managed in this repository are correctly licensed. This database may also be used for the RMAN repository. It may not be used or deployed for other uses.
    There is no discussion of licensing requirements for options of the database (RAC, partitioning, etc) when the database stays within these repository use restrictions. Therefore it is unclear whether the Grid Control can be configured to provide appropriate monitoring without additional licenses.

    This is the reply I received from release
    management:
    If they have deployed an EE edition of the database
    as an OEM repository, and then want to protect that
    with RAC or Data Guard, or make more Secure with ASO
    or Database Vault, or indeed get any of the
    additional benefits any of the options provide, then
    they will need to license the option.
    Let me know if that is still not clear, and I'll go back again. Thanks.
    There is a minor inconsistency in their response ... they say "if deployed on EE". However, as far as I know the OEM repository requires VPD, which is available only in Enterprise Edition (see http://download-east.oracle.com/docs/cd/B16240_01/doc/em.102/e10953/pre-installation_req.htm#CHDGIEAD).
    However, it is quite clear that 'they' are telling us that the RAC option is not inherent in the "Special Use Licensing" described at http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/editions.htm#CJAHFHBJ .
    The implication (not necessarily a topic for this forum) is that the OEM is not guaranteed to be able to provide basic monitoring and reporting functionality without additional license fees to make OEM more highly available than the targets being monitored.
    It would be nice to have the information about which Oracle products (database and app server) have included licenses with a basic Grid Control install and which require additional licenses as part of Chapter 1 of http://download.oracle.com/docs/cd/B19306_01/license.102/b40010/toc.htm (That document goes into excruciating detail about the extra packs, but ignores the basics.)
    For that matter, since OEM Grid Control can not be purchased, it would be nice to have a list of products that provide license to install Grid Control, again in Chapter 1 of the Grid Control Licensing document listed above.
    I appreciate your time in having dug into this.

  • LR3 Bug when JPEG EXIF DateTimeOriginal ends in 00 Seconds?

    I am observing a strange behavior when working with JPG files where the EXIF DateTimeOriginal and the CreateDate fields have a date / time where the seconds part of the time is 0 seconds.
    The library filter shows these pictures under "unknown" date instead of the correct year like with other pictures where everything is the same except they are taken a few seconds later so the time is e.g. 10:13:22 instead of 10:13:00
    I am seeing this with 180 pictures out of 18,000, so it looks like not all pictures with this attribute have the problem; however, all pictures that appear in the library filter date column as "unknown" have a date/time with 00 seconds ...
    The metadata section on the right side in LR displays the date/time when the picture was taken correctly. It is just in the library filter that it is treated as "unknown"
    Is this a known bug?
    regards,
       Stefan

    Stefan,
    I recently converted from PSE 8 to LR 3 and encountered the same issue, with about 150 photos out of 20K showing up in the LR Library Filter with date "unknown".  Thanks for identifying the problem.
    I'm the author of "psedbtool", so I tracked down more details on the problem:
    1. LR 3 is buggy -- Library Filter Date does not recognize XMP:DateTimeOriginal values that are missing seconds, even though Adobe's XMP spec specifically allows that form:
    http://www.adobe.com/devnet/xmp/pdfs/XMPSpecificationPart2.pdf
    http://www.adobe.com/devnet/xmp/pdfs/xmp_specification.pdf
    Date
    A date-time value which is represented using a subset of ISO 8601 formatting, as described in http://www.w3.org/TR/NOTE-datetime. The following formats are supported:
    YYYY-MM-DDThh:mmTZD
    Note that when examining metadata fields, one must always be careful to figure out whether you're looking at the EXIF fields or the XMP fields.  For example, if a file has XMP metadata, then LR will show fields taken from the XMP metadata, but the Metadata panel will call it "EXIF" (this confusion has its roots in the multiple, ambiguous meanings of the term "EXIF" in the standards).
    The EXIF:DateTimeOriginal field does indeed require seconds and does not allow time zones, while the XMP:DateTimeOriginal lets seconds and a time zone be optional.
    Note that the LR command Metadata > Edit Capture Time also doesn't recognize date/times that are missing seconds.
    2. PSE 8 (and probably earlier versions) sometimes omitted seconds from XMP:DateTimeOriginal when it wrote metadata.  I haven't figured out under which conditions it does that (and I certainly don't plan on spending any more time on PSE 8!).  Technically, this isn't a bug, since it conforms to the standards, though it could fairly be called a minor inconsistency, since in almost all cases it includes seconds.
    3. The version of Exiftool included in the most recent version of psedbtool (7.89) had a bug -- it always added ":00" to the end of XMP date/time values if they were missing seconds.   This prevented psedbtool from ensuring that XMP:DateTimeOriginal always had seconds, as was the intention.  This appears to be fixed in Exiftool 8.24 (but I doubt that I will update psedbtool with that version).
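    As a side note, handling the optional-seconds form explicitly is straightforward. This is a hypothetical Java sketch of my own (it has nothing to do with LR's or psedbtool's internals), showing a parser that accepts both YYYY-MM-DDThh:mm and YYYY-MM-DDThh:mm:ss, defaulting missing seconds to zero:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class XmpDate {
    // Accepts date-times with or without the seconds field,
    // as the XMP spec allows (YYYY-MM-DDThh:mm[:ss]).
    static final DateTimeFormatter LENIENT = new DateTimeFormatterBuilder()
            .appendPattern("yyyy-MM-dd'T'HH:mm")
            .optionalStart().appendPattern(":ss").optionalEnd()
            .parseDefaulting(ChronoField.SECOND_OF_MINUTE, 0)
            .toFormatter();

    public static void main(String[] args) {
        LocalDateTime noSeconds   = LocalDateTime.parse("2010-06-05T10:13", LENIENT);
        LocalDateTime withSeconds = LocalDateTime.parse("2010-06-05T10:13:22", LENIENT);
        System.out.println(noSeconds);    // 2010-06-05T10:13
        System.out.println(withSeconds);  // 2010-06-05T10:13:22
    }
}
```

    A tool normalizing XMP date/times could parse with a formatter like this and then re-serialize with the seconds always present.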

  • Why are the contents of a message folder there on my iPad but not my iMac?

    On my iMac, the list of Mailboxes on the left of my Mail window shows a mailbox with the same name under both the "on my Mac" heading and the "iCloud" heading, but neither mailbox appears to contain any contents. The mailbox with the same name is on my iPad and has all the messages that should be there. I want this data available on both my iMac and my iPad. My iCloud ID on my iMac is my me.com e-mail address, and the one on my iPad uses the mac.com e-mail address. I use Mavericks 10.9.4 and iOS 7.1.2.

    At a guess there is some minor inconsistency with capitalisation which is triggering the incorrect order. See Grouping tracks into albums for more detail.
    tt2

  • Complete Album makes two albums?

    Hi, I have an iTunes problem (surprise!)
    About a year ago, I bought three songs from Linkin Park's album LIVING THINGS. Today, I used the Complete Album feature to buy the rest of the songs. But after it all downloaded, the three songs I bought before were in a different album than the ones I just bought. How do I fix it?
    Screenie:
    [IMG]http://i.imgur.com/yrco8MR.png[/IMG]

    IMG tags don't work here. The camera tool will let you upload an image to the site.
    Usually there is some minor inconsistency between the Album, Album Artist, Sort Album or Sort Album Artist fields. Edit all at the same time, if needs be add a trailing X to each of those values, apply, then remove them again. That should merge everything to a single album.
    See Grouping tracks into albums for more information.
    tt2

  • RVKRED77  Issue

    Dear all,
    when we run the RVKRED77 report, we encounter the issues below:
    1) We set RVKRED77 as a background job. When someone modifies a sales order, delivery note, or billing document, RVKRED77 stops and can't run again, even though we schedule the background job at night. How can we keep RVKRED77 from stopping so it can continue?
    2) We have over 1000 customers, so RVKRED77 takes a long time to run.
    thanks!

    Additionally, to ensure consistent credit data, you have to run RVKRED77 (or RVKRED07, respectively) with NOBLOCK = ' '. This avoids any user changing an SD document during the re-build run and thereby causing inconsistent credit values again.
    Normally this risk is minor, but you have to decide for yourself whether you want to bear this risk or not.
    I think if you have totally incorrect credit values for one credit account, it is better to run RVKRED07 with NOBLOCK = 'X' and accept the risk of minor inconsistent values than to always work with totally incorrect credit values. Additionally, you can check the correctness of the credit values after the re-build by comparing the results of RVKRED88 with the values shown in transaction FD32.
    So maybe you could do the following in your situation:
    - Determine credit accounts with incorrect credit values (e.g. you can use Z_CREDIT_VALUE_COMPARE from note 666587).
    - Run RVKRED07 with NOBLOCK = 'X' for such credit accounts with big inconsistencies. Run the report for single credit accounts and not for a range of credit accounts! This will lower the runtime and minimize the risk of again-inconsistent credit values.
    - Afterwards you can check the processed credit accounts again for incorrect credit values.
    Please ensure that you use only KNKLI, KKBER and NOBLOCK as selection parameters (and PROTB if you want a result list)! By using other selection parameters you will really get incorrect credit values!
    I hope this helps.
    Gerard

  • Inconsistency between get-childitem -include and -exclude parameters

    Hi,
    Powershell 2.0
    Does anyone else consider this a minor design bug in the Get-ChildItem command?  
    # create dummy files
    "a","b","c" | % {$null | out-file "c:\temp\$_.txt"}
    # this "fails", returns nothing
    get-childitem c:\temp -include a*,b*
    # this "works", returns desired files
    get-childitem c:\temp\* -include a*,b*
    # this "works", excludes undesired files
    get-childitem c:\temp -exclude a*,b*
    # this "fails", excludes undesired files BUT RECURSES sub-directories
    get-childitem c:\temp\* -exclude a*,b*
    I'm writing a wrapper script around the GCI cmdlet, but the inconsistency between the two parameters is problematic.  My end user will surely just type a path for the path parameter, then wonder why -include returned nothing.  I can't unconditionally add an asterisk to the path parameter, since that messes up the exclude output.
    I'm just wondering why Microsoft didn't make the parameter interaction consistent???  
    # includes desired files in the specified path
    get-childitem -path c:\temp -include a*,b*
    # excludes undesired files in the specified path
    get-childitem -path c:\temp -exclude a*,b*
    # combine both options
    get-childitem -path c:\temp -include a*,b* -exclude *.log,*.tmp
    # same as above, the asterisk doesn't matter
    get-childitem -path c:\temp\* -include a*,b*
    get-childitem -path c:\temp\* -exclude a*,b*
    get-childitem -path c:\temp\* -include a*,b* -exclude *.log,*.tmp
    # same as above, but explicitly recurse if that's what you want
    get-childitem -path c:\temp\* -include a*,b* -recurse
    get-childitem -path c:\temp\* -exclude a*,b* -recurse
    get-childitem -path c:\temp\* -include a*,b* -exclude *.log,*.tmp -recurse
    If I execute the "naked" get-childitem command, the asterisk doesn't matter...
    # same results
    get-childitem c:\temp
    get-childitem c:\temp\*
    If this isn't considered a bug, can you explain why the inconsistency between the two parameters when combined with the -path parameter?
    Thanks,
    Scott

    The Get-ChildItem cmdlet syntax is horrific for advanced use. It's not a bug in the classic sense, so you shouldn't call it that. However, feel free to call it awful, ugly, disastrous, or any other deprecatory adjective you like - it really is nasty.
    Get-ChildItem's unusual behavior is rooted in one of the more 'intense' dialogues between developers and users in the beta period. Here's how I recall it working out; some details are a bit fuzzy for me at this point.
    Get-ChildItem's original design was as a tool for enumerating items in a namespace -
    similar to but not equivalent to dir and
    ls. The syntax and usage was going to conform to standard PowerShell (Monad at the time) guidelines.
    In a nutshell, what this means is that the Path parameter would have truly just meant Path - it would not have been usable as a combination path specification and result filter, which it is now. In other words
    (1) dir c:\temp
    means you wanted to return children of the container c:\temp
    (2) dir c:\temp\*
    means you wanted to return children of all containers inside
    c:\temp. With (2), you would never get c:\tmp\a.txt returned, since a.txt is not a container.
    There are reasons that this was a good idea. The parameter names and filtering behavior was consistent with the evolving PowerShell design standards, and best of all the tool would be straightforward to stub in for use by namespace
    providers consistently.
    However, this produced a lot of heated discussion. A rational, orthogonal tool would not allow the convenience we get with the dir command for doing things like this:
    (3) dir c:\tmp\a*.txt
    Possibly more important was the "crash" factor.  It's so instinctive for admins to do things like (3) that our fingers do the typing when we list directories, and the instant failure or worse, weird, dissonant output we would get with a more pure Path
    parameter is exactly like slamming into a brick wall.
    At this point, I get a little fuzzy about the details, but I believe the Get-ChildItem syntax was settled on as a compromise that wouldn't derail people expecting files when they do (3), but would still allow more complex use.  I think that this
    is done essentially by treating all files as though they are containers for themselves. This saves a lot of pain in basic use, but introduces other pain for advanced use.
    This may shed some light on why the tool is a bit twisted, but it doesn't do a lot to help with your particular wrapping problem. You'll almost certainly need to do some more complicated things in attempting to wrap up Get-ChildItem. Can you describe some
    details of what your intent is with the wrapper? What kind of searches by what kind of users, maybe? With those details, it's likely people can point out some specific approaches that can give more consistent results.

  • Aperture usage/feature request/ minor gripe (sorry!)/ question...

    Hi all
    Quick post to say I've been using Aperture for all my photography for a good while. Probably the MOST annoying thing that I find as a heavy user (I do photojournalism; my last job, for example, yielded 1500 images) is the lack of a progress bar... I never thought I'd hear myself say this, but PLEASE PLEASE PLEASE make it so that I can tell how much more time remains before an import/thumbs/previews job is done! So that's my request.
    Not sure if anyone else is using Vault, but for backing up, here's my homebrew solution. I used the Vault feature only to discover I can't do all the jolly things (like printing etc); of course, having not backed up my images, when I deleted the vault they were gone too... (just a test shoot, but nonetheless!). Perhaps others have more joyous stories to tell about Vault, but if you're not happy, maybe this'll help. I have an external drive which I use solely for backup. The minute I bring a job in, the images go in a folder called (project name)RAWbackup and then into Aperture (effectively saving twice; although you can reverse the order - Aperture first and export next - I wouldn't recommend it, as you can mistakenly reduce the quality by choosing to export > versions instead of masters). I found that as the months progressed my MacBook drive became clogged up, which forced me to edit out a few images to start with and then more and more... which means I now have a very good historical set which I can show, assured that the originals are backed up.
    Minor gripe: predictive form text defaults alphabetically in ascending order; it needs to be in descending order to assist workflow. I presume other users use numbers/dates in IPTC, i.e. I usually copyright in the following format -- © (date) George Bitsanis -- which means all my image inputs default to 2004 (the earliest image date I've imported). Tedious really; if it were descending it would default to 2009 and save the hassle... Also, a way to control or filter predictive text, i.e. coming up with the most-used term first.
    and I'll finish off with a question ... is there a way to organize my projects other than alphabetically?
    any comments would be great!
    cheers
    George
    www.bitsanis.co.uk

    If you go to the "Window -> Show Activity" menu item, it will open up a floating, well, activity window. Is this not what you are looking for, in terms of a progress bar? (FYI, you can also open the window by clicking just to the right of the "red-eye" tool; there is an "invisible" button there that opens and closes the Activity window.)
    As for backups and vaulting, I just have two external 1 TB drives, that I alternate. Haven't run into any problems doing that.

  • Branch Office setup

    Hello All.
    I have a problem with a branch office setup, and I can't for the life of me think of what the problem is.
    I have a remote office setup, using an ASA 5505 that is set up to establish an easy vpn connection to the central network.  The connection at the branch office is a 20/5 cable modem, the central network has a 25/25 fiber connection.
    The issue I have is this.  Wired clients work fine at this branch office, at least 95% of the time.  I have a lightweight AP there that can come up and join the controllers at the central network, no problem.  I haven't done anything with H-REAP because there are really no local resources they need to do their work, so all traffic is tunneled back to the WLC.
    Wireless clients can authenticate to the AP, and I can get 15-20ms ping responses from them all day.  Latency never comes close to the 600ms proposed limit with CAPWAP.  Yet, for some reason the performance of the clients is problematic.  Webpages will frequently not load correctly, they experience some freezing, and with one application we use - it refuses to load completely.
    If we bring these same computers to an AP connected to our central network, on the same SSID, they work flawlessly.
    Something about this particular location is causing a lot of grief for our users.
    For what it's worth, we are running WCS 7.0.230.0 and the WLCs are on 7.0.116.0.  The ASA is running a pretty basic configuration, pretty much out of the box with the easy vpn configuration entered.
    Any help on this would be appreciated, I am at my wit's end with this setup.

    Yes, 20/5 Download/Upload. 
    So I did as you suggested, here are the results with a 1400 byte packet:
    Ping statistics for 172.16.253.50:
        Packets: Sent = 100, Received = 99, Lost = 1 (1% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 17ms, Maximum = 2208ms, Average = 42ms
    That 2208ms response was an anomaly.  I ran it again and got this:
    Ping statistics for 172.16.253.50:
        Packets: Sent = 100, Received = 100, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 16ms, Maximum = 93ms, Average = 21ms
    With this one specific application we're testing with, it stops loading at a predictable point, every time.  However, I can remain VNC'd to this machine the entire time and do anything else on the machine, but the application will fail to load at the same point every time.  But like I said, if I bring that client back to our main network, it works just fine, so it's not the application itself causing the problem; we also have other, smaller issues with other applications.  It's really bizarre.
    It's really not acting like interference.  I just set up a new site with an identical configuration - but with a 3502i AP, and I can replicate the behavior at that location too.  Unfortunately at this time we don't have anything to study the traffic with - I actually have a call on a solution for that this afternoon.

  • Router Performance - Branch equipment in Datacenter Enviroment

    Hello,
    3 Years ago I designed a new Datacenter LAN & WAN Network for a Customer.
    He told me the final size and expected growth for the next 2 years, and I made several suggestions.
    The Smallest Platforms in my Suggestion was 3945E.
    But he told me to start with the smallest possible Router Platforms and WAN links.
    The idea behind this was to keep the Initial Budget low and to change Router and Links as needed.
    So almost all routers except the VPN hubs are at the moment 1921/K9.
    Now, after 2 years of running the systems without problems, the customer would like to start a performance review for budgeting, due to the need to double the user load.
    Given the "small" router platform, I was expecting the need to change some or all of them, at least the Internet routers and main routers.
    I checked CPU, Memory and Interfaces of all of them.
    But memory is OK and CPU usage is still below 50%. So this "small branch" equipment seems to work very well in this datacenter environment.
    The Platform 1921 should only be able to handle up to 15Mbit (with Features enabled) but the Routers deliver much more without Problems.
    I know that it depends on the Features used like VPN IPS ACLs ect. ....
    Some of them are used for Plain Routing with small Tables (Up to 150 Networks) and basic BGP functionality, other ones for Internet Access at 100Mbit (used only 30Mbit average / Peak up to 50Mbit at present)
    So my Question is now:
    Are there some other Indicators than the CPU & Memory usage who tell us that the Routers may run out of Power?
    Or are there some Soft limits that may produce some Problems before CPU & Memory limits are reached?
    I don't see any packet drops or any other bad behavior.
    What are your recommendations?
    Thanks for the Reply!
    Tracer

    On most small Cisco routers, I would say CPU usage is the key indicator.  However, capacity isn't always linear.  50% current usage may not predict you can handle twice the bandwidth of the same kind of traffic.  (Also, as a general rule, you don't want to average above 75%.)
    The attachment might help you "shop" for a Cisco router to meet your future needs.

  • MBP Retina inconsistent UI graphics / battery life

    I apologize in advance for the 1,000,000th thread on this topic.  I (like so many others) have noticed very inconsistent smooth/choppy graphics performance with basic UI functions of OS X 10.8.2.  I have tried a number of user suggestions to alleviate this problem, including turning off dynamic switching.  That seems to yield only a minor improvement, and kills battery life.  I'm assuming that OS X (like Windows machines) scales the CPU on a notebook to improve battery life.  What I have noticed is that if you have processor-intense applications running/loading, the graphics lag (i.e. dock zoom) improves dramatically.  Has anybody else noticed this?  This would at least suggest that the current retina MBP hardware components are more than capable of producing a smooth UI experience, in contrast to what other review forums have suggested.  Is there a way to modify CPU throttling, at least when the device is plugged in?  This whole issue is not a deal-breaker for me, but it's not what I've come to expect from Apple.

    Update: for some reason, putting the dock on the left or right side of the screen DRAMATICALLY improves performance of both dock magnification and the genie effect.  This may be a software issue.

  • Cache not cleared when branching to a page with before-header branch?

    I tried to make up an example on apex.oracle.com:
    http://apex.oracle.com/pls/apex/f?p=20469:112
    Assign some values to the items and experiment with the buttons.
    The first button:
    Action: Redirect to Page in this application
    Page: 112
    Clear Cache: APP
    Set These Items: P112_TEST_ITEM_1
    With these values: 1
    The second button is identical to the first except that the page it branches to is 113 instead of 112.
    Page 113 has nothing more on it than a Before Header branch branching back to page 112.
    It seems a bit inconsistent that assigning values to the items survives the before-header branch - but clearing them does not.
    Andres

    Welcome to my thread Varad!
    As I wrote in my previous reply, the button branching to page two is part of the report and totally independent from the buttons that call the Ajax script.
    The forwarding to page two takes place after item P1_SORT has been set!
    I have no more ideas what is causing this problem. The item that is supposed to be bound on page two is of type text without session saving property. It doesn't have a standard value and neither an item source that could potentially cause the problem!
    Regards,
    Sebastian
    PS: Do you know if it's recommendable and possible to store the value of the Ajax code in an application item!?
    Edited by: skahlert on 20.09.2009 18:19

  • [Oracle 8i] ORA-00932 Inconsistent Datatypes

    Hopefully this is a quick question to answer. In running a query I get the 'inconsistent datatypes' error, and I've narrowed the error down to one part of my query:
    SELECT     CASE
              WHEN     sopn.plan_start_dt < SYSDATE     THEN SYSDATE
              ELSE     sopn.plan_start_dt
         END
    FROM     SOPN sopn
    (Replacing SYSDATE with an actual hard-coded date doesn't work either.)
    The field sopn.plan_start_dt is definitely a date datatype. If I do a simple query for just that field with a date parameter in my where statement, it works just fine, for example:
    SELECT     sopn.plan_start_dt
    FROM     SOPN sopn
    WHERE     sopn.plan_start_dt < SYSDATE
    does not give me any errors.
    I'm guessing there's something wrong with my case statement, or maybe case statements with date parameters are something Oracle 8i doesn't get along with...
    If anyone can tell me why my case statement is causing an 'inconsistent datatype' error, I'd appreciate it. Thanks!

    Well, this tells me column sopn.plan_start_dt's datatype is not DATE. CASE requires all branches to return expressions of the same type. Most likely sopn.plan_start_dt is VARCHAR2, and it is very bad coding to rely on implicit conversions. Use TO_DATE with the proper format. For example:
    SELECT     CASE
              WHEN     TO_DATE(sopn.plan_start_dt,'DD-MON-YY') < SYSDATE     THEN SYSDATE
              ELSE     TO_DATE(sopn.plan_start_dt,'DD-MON-YY')
         END
    FROM     SOPN sopn
    SY.

  • Inconsistent logic for viewing others schemes objects in browser

    A while ago, when we upgraded our DBs to Oracle 10, we also replaced our famous green friend with SQL Developer across our entire development crew.
    Among a few other things, people were not all that happy seeing that they lost the desired "check for access to DBA views" option. But as long as we could rely on a consistent view of the "other schema" in all possible views (via table, or directly in the navigator), we could defend this way of thinking for not being able to see everything.
    But unfortunately, in the latest version, "1.5.1 5440", we still see inconsistent things...
    for example :
    via browser (other users schema)
    => Indexes are not visible in the Indexes branch point, but they do show up on the Indexes tab for the corresponding table.
    => For the same "other user's" schema, triggers do show up in the Triggers branch point, but when we check the Triggers tab for the corresponding table, they are not shown.
    It looks like waiting for yet another version is no option and we will be forced to retreat to the green commercial swamp on high demand of our developers...
    Ribbit :-(

    There are still a few glitches in the queries used for certain things - I cannot remember the problem you mention with indexes being posted before (although I get the same problem with indexes visible in all_indexes not appearing under another user's index node), but the trigger tab problem has been discussed before (1.5PROD/EA3/EA2/EA1: Only table owner's triggers show in Table's trigger tab).
    theFurryOne

  • RF Predictions in WCS

    So I am adding access points to WCS 6.0 and I am using a specialty antenna. When I choose "Other" for the antenna type, enter the dBi of the antenna, and hit Save to place the APs, it tells me it cannot give me the RF prediction for those APs. How do I get around this, other than selecting a Cisco antenna?

    Hi nshoe18
    To do RF predictions, the radiation patterns (azimuth and elevation) for the antenna need to be known, not just the power.
    Therefore, as WCS doesn't have that detail for the antenna you're using, it can't perform the prediction. This is a limitation of WCS, but it guarantees the accuracy of the coverage prediction.
    It also reinforces the point of buying Cisco antennas!
    Troy
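    Troy's point can be illustrated with a rough sketch (hypothetical numbers, simple free-space model, not how WCS actually computes coverage): a single peak-gain dBi figure predicts the same signal level in every direction, whereas a real pattern gives a different level per bearing:

    ```python
    import math

    def fspl_db(d_m, f_mhz):
        # Free-space path loss in dB for distance in metres, frequency in MHz
        return 20 * math.log10(d_m) + 20 * math.log10(f_mhz) - 27.55

    tx_power_dbm = 17.0                   # hypothetical AP transmit power
    peak_gain_dbi = 8.0                   # the single figure you can enter
    # A directional antenna's gain varies with azimuth (hypothetical values):
    pattern_dbi = {0: 8.0, 90: -2.0, 180: -15.0}

    loss = fspl_db(30, 2412)              # 30 m away, 2.4 GHz channel 1
    # Peak gain alone predicts one RSSI for every direction:
    print(round(tx_power_dbm + peak_gain_dbi - loss, 1))
    # The pattern yields a different RSSI per bearing -- 23 dB apart here:
    for az, gain in pattern_dbi.items():
        print(az, round(tx_power_dbm + gain - loss, 1))
    ```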

Maybe you are looking for

  • 24" iMac Video Card difference

    Hello, Is there a noticeable difference between the NVidia 128 MB and 256 MB cards - specifically while gaming - as to how the graphics show on the screen? Thanks. Jen

  • Design view disappears when I uncheck Restrict Page Occurrence

    Dear Adobe Forms Experts, I have an Adobe print form with two master pages: one mailing address page that should only occur once, as the first page, and then the main page, which can occur many times. I have been asked to make the mailing page condit

  • How to get the session name of a batch input

    hi everybody does anybody know how to get the session name of a batch input? I have to put the name of the session at the end of my program so that the user can click on the session name to go directly to SM35 to run the batch input when the program

  • Debug & release versions of flash player?

    i installed flex 3.0 on my XPSP2 system, and worked through some examples. all was well, but when i try to use internet explorer, all sites that require flash 9 now complain it's not installed on my system. i am assuming a query is sent from ie7 aski

  • System information reports

    I'm trying to run a report that would potentially give me all the system information for all the solaris servers. So I add all the solaris servers to the target list On attributes i select the ones I want (typically hostname, hostID, serial number et