Question regarding classes taken

Hi all.
Does anyone know where I can get proof that I took a class from Sun? I took a course, Fundamentals of the Java Programming Language, Java SE 6 (SL-110-SE6) (or whatever its equivalent was at the time), sometime between 2006 and 2008 (I know, I should KNOW what year - I think it might have been 2006, but I can't remember). I lost all the paperwork associated with this course in a flood earlier this year, so I wanted to know if there is some way I can get proof that I've taken it. Any help would be appreciated.
-George Kulz
Senior Java Programmer
Information Services
Memorial Hospital of Rhode Island
Address: 111 Brewster Street, Pawtucket, RI 02860
Phone: (401) 729-3259
Email: [email protected]

Wouldn't your training provider have a record of that? I suppose Sun might have a record too, but I took the Sun SL-314-EE5 – Web Component Development with Servlet and JSP Technology course through QA training in London, and I wouldn't expect Sun to have any record of that... although they may.
I don't think I still have the paper certificate I was issued at the end of the course, but as no exam is required for these courses I don't really think it matters all that much, as it's just evidence that you attended a 4-5 day course.
Hopefully your ability in Java should be enough that it wouldn't even be questioned. If I had to prove that I really had my degree or A-Level courses and I didn't have the certificate, I would probably be more worried!

Similar Messages

  • Question regarding placing cache-related classes into a package

    Hi all,
    I have a question regarding placing classes into packages. I am writing a cache feature which caches results that were evaluated previously. Since it is a cache, I don't want to expose it outside, because it is only for internal use. I have 10 classes related to this cache feature. All of them are used only by the cache manager (the class which manages the cache). So I thought it would make sense to keep all these classes in a separate package.
    But the problem I have is: since the cache-related classes are not exposed outside, I can't make them public. If they are not public, I can't access them from the other packages of my code. So they can be neither public nor private. Can someone suggest a solution for my problem?

    haki2 wrote:
    But the problem I have is: since the cache-related classes are not exposed outside, I can't make them public. If they are not public, I can't access them from the other packages of my code.
    Well, you shouldn't access them in your non-cache code.
    As far as I understand, the only class that other code needs to access is the cache manager. That one must be public. All other classes can be package-private (a.k.a. default access). This way they can access each other, and the cache manager can access them, but other code can't.
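    As a rough sketch of what the last reply describes (the package and class names here are illustrative, not from the post), only the manager is public and the helper classes get default (package-private) access, so they can see each other inside the cache package but are invisible to the rest of the code base:

    // file: com/example/cache/CacheEntry.java
    package com.example.cache;

    class CacheEntry {                 // package-private: not visible outside com.example.cache
        final Object value;
        final long createdAt = System.currentTimeMillis();

        CacheEntry(Object value) { this.value = value; }
    }

    // file: com/example/cache/CacheManager.java
    package com.example.cache;

    import java.util.HashMap;
    import java.util.Map;

    public class CacheManager {        // the single public entry point to the package
        private final Map<String, CacheEntry> entries = new HashMap<String, CacheEntry>();

        public void put(String key, Object value) { entries.put(key, new CacheEntry(value)); }

        public Object get(String key) {
            CacheEntry e = entries.get(key);
            return (e == null) ? null : e.value;
        }
    }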

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration
    Is the transaction key in OMJJ the same as the OBYC transaction key?
    If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

    No, they are not the same. The movement type transaction (OMJJ) links the account key and account modifier to specific movement types. Transaction code OBYC contains the account assignments for all material document postings, whether they are movement type dependent or not. Account key BSX is not movement type dependent; instead, BSX depends on the valuation class of the material, so it won't show in OMJJ.
    thanks,

  • Question regarding ONT connection via Ethernet and Cable cards

    Hi,
    We recently upgraded to the FiOS Quantum 150/65 Mbps plan. We are not getting the advertised speeds (we only get about 5 Mbps upload), so Verizon is sending a tech to switch the ONT connection from coax to Ethernet.
    I have 2 questions regarding this new setup:
    1. If the ONT communicates with the FiOS Actiontec router via Ethernet instead of coax from now on, how will the set-top boxes and the TiVo-esque CableCARD-powered device I currently have connected to coax talk to the Verizon system, if coax is taken out of the equation? Will the FiOS signal still flow through the internal coax wiring of the house? Moreover, I was under the impression that coax was how the set-top boxes communicated and derived independent IP addresses from the FiOS router, for on-demand purposes and whatnot. How will this work from now on?
    Question 2.
    Right next to the wall where the ONT sits, there's a basement office where we have a PC that connects to the FiOS system via an Actiontec MoCA adapter (ECB2500C), which I assume derives its internet connection from the FiOS Actiontec router that sits upstairs in the living room.
    Again, with the coax about to be disabled next Friday in favor of an Ethernet connection from the ONT, I assume this PC will be left without internet because of the lack of internet signal on the coax? Is this correct?
    Question 2.5: If my above assumption is correct, since this office is right on the other side of the wall where the ONT sits outside the house, would it be possible to run an Ethernet wire through the wall straight from the ONT to an Ethernet switch inside the office, from which I would derive a connection for this basement PC (properly firewalled, of course), and then, from that switch, continue running the Ethernet wire up to the Actiontec FiOS router upstairs from which the rest of the house derives its internet? And would this setup affect in any way the proper functioning of the cable boxes in the house?
    I'd appreciate your input and any help you can provide so I can have a ballpark idea of what to tell the Fios guy to do when he comes on Friday.
    Cheers.

    It's not valid to have two devices (the PC and the VZ router) connected to the ONT; it must be a single device. The ONT locks onto the MAC address of the first device it sees. Since you have TV, you should have the VZ router as the internet-facing router.
    Other options:
    1.  Have the VZ router located next to the PC in the basement and then use wireless for all other PCs.
    2.  Have the VZ router located next to the PC in the basement but run one wire upstairs and connect a switch where other PCs and devices can connect via a wire.
    Hope that helps.

  • A question regarding Management pack dependency.

    Hi All,
    I am new to SCOM and I have a question regarding management pack dependency.
    My question is: is a dependency required when new alerts are created in an unsealed MP and the target class selected during alert creation (e.g. Windows Server 2012 Full Operating System) comes from a sealed management pack?
    For example, I have the sealed Windows Server 2012 monitoring management pack.
    I have made a custom one for Windows Server 2012. So if the custom MP is not dependent on the sealed Windows Server 2012 monitoring management pack, can I not create any alerts in the custom management pack targeting the class Windows Server 2012 Full Operating System?

    Hi CyrAz,
    Thank you for the reply. Now, if your understanding and mine are the same, then look at what happened below.
    I created an alert monitor targeting a Windows Server 2012 class in my custom management pack, which is not dependent on the Windows Server 2012 management pack. But how was I able to create it successfully when the dependency is not there at all? If our understanding is the same, then an error should have been thrown while creating the monitor itself, right? So how was SCOM able to create it?
    Look at the screenshot below.
    I was able to create a monitor targeting Windows Server 2012 Full Operating System and create an alert in the custom management pack, which is not at all dependent on the Windows Server 2012 sealed MP.
    Look at the dependencies of the management pack: the Windows Server 2012 management pack is not listed, as my custom management pack is not dependent on it.
    Then how is this possible?

  • Questions regarding Optimizing formulas in IP

    Dear all,
    This weekend I had a look at the webinar on Tips and Tricks for Implementing and Optimizing Formulas in IP.
    I’m currently working on an IP implementation and encountered the following issues when getting more in-depth.
    I’d appreciate very much if you could comment on the questions below.
    <b>1.)</b> I have a question regarding optimization 3 (slide 43) about Conditions:
    ‘If the condition is equal to the filter restriction, then the condition can be removed’.
    I agree fully on this, but have a question on using the Planning Function (PF) in combination with a query as DataProvider.
    In my query I have a filter in the Characteristic restriction.
    It contains variables on fiscal year, version. These only allow single value entry.
    The DataProvider acts as filter for my PF. So I’d suppose I don’t need a condition for my PF since it is narrowed down on fiscal year and version by my query.
    <b>a.) Question: Is that correct?</b>
    I just want to make sure that I don't get too many records as input for my PF. <u>How detrimental to performance is it to use conditions anyway?</u>
    <b>2.)</b> I read in training BW370 (IP-training) that a PF is executed for the currently set filter (navigational state) in the query and that characteristics that are used in restricted keyfigures are ignored in the filter.
    So, if I use version in the restr. keyfig it will be ignored.
    <b>Questions:
    a.) Does this mean that the PF is executed for all versions in the system or for the versions that are in the filter of the Characteristic Restrictions and not the currently set filter?</b>
    <b>b.) I’d suppose the dataset for the PF can never be bigger than the initial dataset that is selected by the query, right?
    c.) Is the PF executed against the navigational state anyway when I use filtering? I have an example where I filter on the field customer, thus making my dataset smaller, but executing the PF still takes the same amount of time.
    d.) And I also encounter that the PF is executed twice. A popup comes up showing messages regarding the execution. After pressing OK, it seems the PF runs again...</b>
    <b>3.)</b> If I use variables in my Planning Function I don’t want to fill in the parameter VAR_VALUE with a value. I want to use the variable which is ready for input from the selection screen of the query.
    So when I run the PF it should use the BI-variable. It’s no problem to customize this in the Modeler. But when I go into the frontend the field VAR_VALUE stays empty and needs a value.
    <b>Question:
    a.) What do I enter here? For parameter VAR_NAME I use the variable name, but what do I use for parameter VAR_VALUE?  Also the variable name?</b>
    <b>4.)</b> Question regarding optimization 6 (slide 48) about Formulas on MultiProviders:
    'If the formula is using data of only one InfoProvider but is defined on a MultiProvider, then the complete formula should be moved to the single base InfoProvider.'
    In our case we have three cubes in the MP, two real-time and one normal one. Right now we have one aggregation level (AL) on top of the MP.
    For one formula I only need one cube, so it's better to create another AL with the formula based on that cube.
    For another formula I need the two <u>realtime</u> cubes. This is interesting regarding the optimization statement.
    <b>Question:
    a.) Can I use the AL on the MP then, or is it better to create a <u>new</u> MP with only these two cubes and create an AL on top of that, and then create the formula on the AL based on the MP with the two cubes?</b>
    This makes the architecture more complex.
    Thanks a lot in advance for your appreciated answers!
    Kind regards, Harjan

    Marc,
    Some additional questions regarding locking.
    I notice that the dataset that is locked depends on the restrictions made in the 'Characteristic Restrictions' part of the query.
    Restrictions in the 'Default Values'-part are not taken into account. In that case all data records of the characteristic are locked.
    Q1: Is that correct?
    To give an example: assume you restrict customer to a hierarchy node in Default Values. If you want people to plan concurrently, this is not possible, since all customers are locked then. When the customer restriction is moved to the Characteristic Restrictions, the system only locks the specific customer hierarchy node and people can plan concurrently.
    Q2: What about variables used in restricted key figures, like a variable for fiscal year/period? Is only this fiscal year/period locked then?
    Q3: We'd like to lock on a navigational attribute. The nav attr is put as a variable in the filter of the Characteristic Restrictions. Does the system then only lock this selection for the nav.attr? Or do I have to change my locking settings in RSPLSE?
    Then a question regarding locking of data for functions:
    Assume you use the BEx Analyzer and use the query as the data_provider_filter for your planning function. You use restricted key figures with the characteristic Version. The first column contains the amount for version 1 and the second column contains the amount for version 2.
    In the Characteristic Restrictions you've restricted version to the values '1' and '2'.
    When executing the input-ready query, versions 1 and 2 are locked (due to the selection in the Characteristic Restrictions).
    But when executing the planning function, all versions are locked (*)
    Q4: True?
    Kind regards, Harjan

  • Questions regarding new functionalities in EhP 4 - Reporting Financials 2

    Dear Forum,
    in a project we would like to use some new functionality from Reporting Financials 2, i.e. DataSource 0FI_AA_20 for loading Depreciation and Amortization to BI for following years, as this cannot be done by the old extractor.
    We are now looking for reliable information about the impact and changes made in ERP if we switch on the Reporting Financials 2 functionality via SFW5. Will the old extractors still work? Will all reports in ERP work without problems? Is there any impact on business processes? Or is this just additional functionality which will not affect the current implementation?
    Can anybody give information about this?
    Thanks, regards
    Lars
    Edited by: Lars Hermanns on Jun 2, 2010 10:29 AM


  • Question regarding Inheritance. Please HELP

    A question regarding Inheritance
    Look at the following code:
    class Tree{}
    class Pine extends Tree{}
    class Oak extends Tree{}
    public class Forest{
        public static void main(String args[]){
            Tree tree = new Pine();
            if( tree instanceof Pine )
                System.out.println( "Pine" );
            if( tree instanceof Tree )
                System.out.println( "Tree" );
            if( tree instanceof Oak )
                System.out.println( "Oak" );
            else
                System.out.println( "Oops" );
        }
    }
    If I run this, I get the output:
    Pine
    Tree
    Oops
    My question is:
    How can tree be an instance of Pine? Isn't it Pine that is an instance of Tree instead?

    The "instanceof" operator checks whether an object is an instance of a class. The object you have is an instance of the class Pine because you created it with "new Pine()," and "instanceof" only confirms this fact: "yes, it's a pine."
    If you changed "new Pine()" to "new Tree()" or "new Oak()" you would get different output because then the object you create is not an instance of Pine anymore.
    If you wonder about the variable type, it doesn't matter, you could have written "Object tree = new Pine()" and get the same result.
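    To make that concrete, here is a tiny check of my own (reusing the Tree/Pine/Oak classes from the question) showing that instanceof looks at the object the variable refers to at run time, not at the declared type of the variable:

    public class InstanceofDemo {
        public static void main(String[] args) {
            Object tree = new Pine();                  // declared as Object, but it refers to a Pine
            System.out.println(tree instanceof Pine);  // true  - the object is a Pine
            System.out.println(tree instanceof Tree);  // true  - every Pine is also a Tree
            System.out.println(tree instanceof Oak);   // false - it was never an Oak
        }
    }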

  • Question regarding Interfaces: Please GUIDE.

    A question regarding INTERFACES.
    'Each interface definition constitutes a new type.
    As a result, a reference to any object instantiated from any class
    that implements a given interface can be treated as the type of
    the interface'.
    So:
    interface I{}
    class A implements I{
    }
    Now, class A is of type I. Right?
    Now, if class A implements more than one interface, then what is the actual type of A?
    For example:
    interface I{}
    interface R{}
    class B implements I,R{
    }
    What is now B's type? I or R? Or both?

    > The class (that implements the interface) actually defines the behavior, and the interface just serves as a contract for that behavior
    Yes.
    > - a view.
    Call it that if you want, but it being "a view" doesn't take away its is-a-ness.
    > IMHO, the 'types' are the classes, which qualify for the 'is a' relationship
    As yawmark points out, your use of "type" is not consistent with the JLS. Regardless of how you want to define type, the fact is that it makes sense to say "a LinkedList is a List" and "a String is (a) Comparable", etc. Additionally, the way I've always seen the is-a relationship described, and the way that makes the most sense to me, is that "A is-a B" means "A can be used where B is expected." In this respect, superclasses and implemented interfaces are no different.
    > (which is what the words "extends" and "implements" strongly suggest)
    "Foo extends Bar" in plain English doesn't suggest to me that Foo is a Bar, but quite clearly, in the context of Java's OO model, it means precisely that.
    "Foo implements Bar" in plain English doesn't suggest much to me. Maybe that Foo provides the implementation specified in Bar, and therefore can be used where a Bar is required, which is exactly what implements means in Java and which, as far as I can tell, is the core of what the is-a relationship is supposed to be about in general OO.

  • Question regarding ExternalizableLite

    One question regarding implementing ExternalizableLite.
    If I have a class A which implements ExternalizableLite, as in
    class A {
        int index;
        B objectB;
    }
    I'd need to use ExternalizableHelper's writeObject and readObject when dealing with the objectB member.
    If I also make class B implement ExternalizableLite, will that help with serializing/deserializing class A? Or is it not going to help at all?
    Regards,
    Chen

    Hi Chen,
    if B is an ExternalizableLite implementor, then ExternalizableHelper.writeObject() will delegate to ExternalizableHelper.writeExternalizableLite(). I.e., it will write out a byte indicating that it is an ExternalizableLite, then write out the class name, and then delegate to your class B's writeExternal method. The improvement is the difference between the class B writeExternal method implementation and the Java serialization of the same class.
    If you know the type of the objectB member without writing the class name to the stream (e.g. if class B is final), then you can improve further by not delegating to writeObject, you can instead directly delegate to your class B's writeExternal method. Upon deserialization, you can instantiate the objectB member on your own, and delegate to the readExternal method of the newly instantiated objectB member.
    This allows you to spare the class name plus one byte in the serialized form of your A class.
    Best regards,
    Robert
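    A rough sketch of the two approaches Robert describes (not verified against a particular Coherence version; the com.tangosol import paths are assumed, and the field layout follows the question):

    import com.tangosol.io.ExternalizableLite;
    import com.tangosol.util.ExternalizableHelper;

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    class B implements ExternalizableLite {
        int value;

        public void readExternal(DataInput in) throws IOException   { value = in.readInt(); }
        public void writeExternal(DataOutput out) throws IOException { out.writeInt(value); }
    }

    class A implements ExternalizableLite {
        int index;
        B objectB;

        public void writeExternal(DataOutput out) throws IOException {
            out.writeInt(index);
            // Generic approach: ExternalizableHelper detects that objectB is an
            // ExternalizableLite, but still writes a type byte plus the class name.
            ExternalizableHelper.writeObject(out, objectB);
            // Direct approach (only safe if objectB is always exactly a B, e.g. B is final):
            // objectB.writeExternal(out);   // saves the type byte and class name
        }

        public void readExternal(DataInput in) throws IOException {
            index = in.readInt();
            objectB = (B) ExternalizableHelper.readObject(in);
            // Matching direct approach:
            // objectB = new B();
            // objectB.readExternal(in);
        }
    }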

  • Questions regarding Disk I/O

    Hey there, I have some questions regarding disk I/O, and I'm fairly new to Java.
    I've got an organized 500 MB file and a table-like structure (represented by an array) that tells me the byte offsets of sections within the file. With this I'm currently retrieving blocks of data using the following approach:
    // Assume id is just some arbitrary int that represents an identifier.
    String f = "/scratch/torum/collection.jdx";
    int startByte = bytemap[id-1];
    int endByte = bytemap[id];
    String line;
    try {
        FileInputStream stream = new FileInputStream(f);
        DataInputStream in = new DataInputStream(stream);
        in.skipBytes(startByte);
        int position = collectionSize - in.available();
        // Keep looping until the end of the block.
        while(position <= endByte) {
            line = in.readLine();
            // some processing here
            String[] entry = line.split(" ");
            String docid = entry[1];
            int tf = Integer.parseInt(entry[2]);
            // update the current position within the file.
            position = collectionSize - in.available();
        }
    } catch(IOException e) {
        e.printStackTrace();
    }
    This code does EXACTLY what I want it to do, but with one complication: it isn't fast enough. I see that using BufferedReader is the way to go after reading:
    http://java.sun.com/developer/technicalArticles/Programming/PerfTuning/
    I would love to use this class, but BufferedReader doesn't have the skipBytes() method, which is vital to achieve what I'm trying to do. I'm also aware that I shouldn't really be using the readLine() method of the DataInputStream class.
    So could anyone suggest improvements to this code?
    Thanks

    Okay, I've got some results, and it turns out DataInputStream is faster...
    EDIT: I was wrong. RandomAccessFile becomes a bit faster according to my test code when the block size to read is large.
    So I guess I could write two routines in my program: RandomAccessFile for when the block size is larger than an arbitrary value, and FileInputStream for small blocks.
    Here is the code:
    public void useRandomAccess() {
        String line = "";
        long start = 1385592, end = 1489808;
        try {
            RandomAccessFile in = new RandomAccessFile(f, "r");
            in.seek(start);
            while(start <= end) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                start = in.getFilePointer();
            }
        } catch(FileNotFoundException e) {
            e.printStackTrace();
        } catch(IOException ioe) {
            ioe.printStackTrace();
        }
    }

    public void inputStream() {
        String line = "";
        int startByte = 1385592, endByte = 1489808;
        try {
            FileInputStream stream = new FileInputStream(f);
            DataInputStream in = new DataInputStream(stream);
            in.skipBytes(startByte);
            int position = collectionSize - in.available();
            while(position <= endByte) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                position = collectionSize - in.available();
            }
        } catch(IOException e) {
            e.printStackTrace();
        }
    }

    and the main looks like this:

    public static void main(String[] args) {
        DiskTest dt = new DiskTest();
        long start = 0;
        long end = 0;
        start = System.currentTimeMillis();
        dt.useRandomAccess();
        end = System.currentTimeMillis();
        System.out.println("Random: " + (end - start) + "ms");
        start = System.currentTimeMillis();
        dt.inputStream();
        end = System.currentTimeMillis();
        System.out.println("Stream: " + (end - start) + "ms");
    }

    The result:
    Random: 345ms
    Stream: 235ms
    Hmmm not the kind of result I was hoping for... or is it something I've done wrong?
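    For what it's worth, here is a hedged sketch of the buffered approach the original post was after: BufferedReader has no skipBytes(), but you can skip on the underlying FileInputStream first and then wrap it for buffered line reading. The variable names (f, startByte, endByte) follow the thread; the buffer size and the single-byte-per-character position tracking are my own simplifications:

    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class BlockReader {
        public static void readBlock(String f, long startByte, long endByte) throws IOException {
            FileInputStream stream = new FileInputStream(f);
            try {
                // Skip straight to the start of the block before any buffering happens.
                long skipped = 0;
                while (skipped < startByte) {
                    long n = stream.skip(startByte - skipped);
                    if (n <= 0) break;               // reached end of file early
                    skipped += n;
                }
                BufferedReader in = new BufferedReader(new InputStreamReader(stream), 8192);
                long position = startByte;
                String line;
                while (position <= endByte && (line = in.readLine()) != null) {
                    String[] entry = line.split(" ");
                    // ... process entry[1], entry[2] as in the original code ...
                    position += line.length() + 1;   // +1 for '\n'; assumes single-byte characters
                }
            } finally {
                stream.close();
            }
        }
    }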

  • Question regarding Command pattern

    Hi!
    I have a question regarding the Command pattern:
    //Invoker as defined in GoF Design Patterns
    public class SomeServer {
        //Receiver as defined in GoF Design Patterns.
        private Receiver receiver;

        //Request from a network client.
        public void service(SomeRequest request) {
            Command cmd = CommandFactory.createCommand(request);
            cmd.execute();
        }
    }
    The concrete command which implements Command needs a reference to the Receiver in order to execute its operation, but how is the concrete command best configured? Should I pass the Receiver along with the request as a parameter to the createCommand method, or should I configure the receiver inside the CommandFactory, or pass it as a parameter to the execute method? Since SomeServer acts as both client and invoker, SomeServer "knows" about the Command's receiver. Is this a bad thing?
    Regards
    /Fredrik
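    One common way to wire this up, sketched below as my own illustration rather than an answer from the thread: the factory is configured with the receiver once and injects it into each concrete command's constructor, so the invoker only ever sees the Command interface. The class names (TurnOnCommand, etc.) are hypothetical.

    interface Command {
        void execute();
    }

    class Receiver {
        void doAction() { System.out.println("Receiver doing the actual work"); }
    }

    class TurnOnCommand implements Command {
        private final Receiver receiver;

        TurnOnCommand(Receiver receiver) { this.receiver = receiver; }

        public void execute() { receiver.doAction(); }
    }

    class CommandFactory {
        private final Receiver receiver;   // configured once, e.g. at server startup

        CommandFactory(Receiver receiver) { this.receiver = receiver; }

        Command createCommand(String request) {
            // Map the request to a concrete command; only "on" is handled in this sketch.
            if ("on".equals(request)) {
                return new TurnOnCommand(receiver);
            }
            throw new IllegalArgumentException("Unknown request: " + request);
        }
    }

    With this wiring, SomeServer would hold a CommandFactory instance instead of calling a static method, so knowledge of the receiver stays inside the factory rather than in the invoker.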

    #!/bin/bash
    DATE=$(date '+%y-%m-%d')
    if find | grep -q "$DATE"; then
        echo "OK - Backup files found"
        exit 0
    else
        echo "Critical - No Backups found today!"
        exit 2
    fi
    should work too and it's a bit shorter.
    Please remember to mark the thread as solved.

  • Few questions regarding Oracle Scorecard and strategy management.

    Hi,
    I have the following questions regarding Oracle Scorecard and Strategy Management:
    1. In what ways can I show the status of a KPI? We have colors and symbols; what else is there?
    2. Can we keep a log of KPIs, store them, and keep a record of the feedback/actions taken on them?
    3. Does Scorecard and Strategy Management have the ability to retain a history of feedback and a log of entries, i.e. date/time and user name?
    4. Does Scorecard and Strategy Management have the ability to use common mathematical formulas, e.g. median, average, percentiles? Please describe.
    Thanks in advance for your help.

    bump.

  • A question regarding deallocating arrays

    Hello, just to check if I learned arrays correctly...
    For example, I have an array 'anArray[ ]':
    public String[ ] anArray;
    -> to deallocate anArray, I simply used:
    anArray = null; instead of performing a for loop, maybe...
    Is this correct?
    It's quite confusing. Does it apply to multidimensional arrays? For example, anArray[ ][ ] = null???
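    A small illustrative sketch of my own: assigning null just drops the reference; the garbage collector reclaims the array, and for a multidimensional array its nested arrays too, once nothing else refers to them. No per-element loop is needed unless the array object itself must stay alive while its elements are released early.

    public class ArrayRelease {
        public static void main(String[] args) {
            String[] anArray = new String[] { "a", "b", "c" };
            String[][] grid = new String[10][10];

            // Drop the only references; both arrays become eligible for garbage collection.
            anArray = null;
            grid = null;        // the inner String[10] rows become unreachable too

            // An explicit loop only helps when the array must stay alive
            // but its elements should be released early:
            String[] keepAlive = new String[] { "x", "y" };
            for (int i = 0; i < keepAlive.length; i++) {
                keepAlive[i] = null;
            }
            System.out.println("references cleared: " + keepAlive.length + " slots");
        }
    }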


  • A question regarding Java project in ECLIPSE IDE

    A question regarding Eclipse. No offence meant.
    I have added a jar file to the current project in its build path.
    This executes the class file in question, which is in the jar file.
    If I remove the jar file from the build path and run the project, it still executes that class file.
    I tried building a clean project and running it, but it still executes the jar file and the class.
    Any idea why this is happening?

    > A question regarding eclipse. No offence meant.
    People are not usually offended by Eclipse.
    > I have added a jar file to the current project in its build path.
    > This executes the concerned class file which is in the jar file.
    > If I remove the jar file from the build path and run the project, it still executes that class file.
    > I tried building a clean project and running it, but still executes the jar file and the class.
    > Any idea why this is happening?
    You have that class file elsewhere, or you have confused your problem statement.
    Why don't you create a new project and add everything except that class/jar (whichever you mean)?
