5.1 - 6.1 interoperability

Hi!
On http://edocs.bea.com/wls/docs61/interop/interop.html there's a warning
saying that I need a special interoperability jar in order to look up beans in a
5.1sp11 container from a 6.1sp2 container via IIOP.
Where can I find this jar, or am I simply mistaken in assuming that I need it?
I've tried generating stubs with the -iiop option to ejbc; this works, but
when I attempt to look up the EJB homes I get this error:
<snip>
28430 [Thread-2] INFO no.sb1.util.Trace - javax.naming.NamingException:
Unhandled error in lookup [Root exception is
weblogic.rmi.extensions.RemoteRuntimeException - with nested exception:
[java.rmi.UnmarshalException: Exception waiting for response; nested exception is:
        java.io.EOFException: endOfStream called by muxer]]
28460 [Thread-2] WARN no.sb1.via.internet.is.log.SystemFel - Stack trace:
java.rmi.UnmarshalException: Exception waiting for response; nested exception is:
        java.io.EOFException: endOfStream called by muxer
java.io.EOFException: endOfStream called by muxer
        at weblogic.iiop.MuxableSocketIIOP.endOfStream(MuxableSocketIIOP.java:639)
        at weblogic.socket.NTSocketMuxer.processSockets(NTSocketMuxer.java:586)
        at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:24)
        at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
        at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
--------------- nested within: ------------------
weblogic.rmi.extensions.RemoteRuntimeException - with nested exception:
[java.rmi.UnmarshalException: Exception waiting for response; nested exception is:
        java.io.EOFException: endOfStream called by muxer]
        at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:60)
        at $Proxy70.resolve(Unknown Source)
        at weblogic.jndi.cosnaming.IIOPInitialContext.lookup(IIOPInitialContext.java:107)
        at javax.naming.InitialContext.lookup(InitialContext.java:350)
</snip>
I'm running jdk1.3.1_01 on both the 5.1 and the 6.1 server, the 5.1 being
on Solaris and the 6.1 on Windows (*urk*). The generated jar deploys fine on 5.1.
best regards
Trond Strømme, mogul technology
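
[For reference, a lookup over IIOP from the 6.1 side corresponds to code along these lines. This is a minimal sketch only; the JNDI name "MyBeanHome" and the host/port in the provider URL are placeholders, not names from the original post.]

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class IiopLookup {
        public static void main(String[] args) throws NamingException {
            Hashtable env = new Hashtable();
            // WebLogic's initial context factory; an iiop:// provider URL makes
            // the lookup travel over IIOP rather than the default T3 protocol.
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "iiop://wls51host:7001"); // placeholder host/port

            Context ctx = new InitialContext(env);
            // "MyBeanHome" is a placeholder JNDI name; a real client would
            // narrow the result with javax.rmi.PortableRemoteObject.narrow().
            Object home = ctx.lookup("MyBeanHome");
            System.out.println("Looked up home: " + home);
        }
    }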

"Trond Strømme" <[email protected]> writes:
On http://edocs.bea.com/wls/docs61/interop/interop.html there's a warning
saying that I need a special interoperability jar in order lookup beans in a
5.1sp11 container from a 6.1sp2 container via iiop.
Where can I find this jar, or am I simply mistaken in assuming that I need
it?You need it and you need to go through support to get
it. Alternatively you could use SP12/SP3. SP12 has just been release
SP3 should be out next month.
I've tried generating stubs with the -iiop option to ejbc, this works, but
when I attempt to lookup the ejbhomes i get this error:
<snip>
28430 [Thread-2] INFO no.sb1.util.Trace - javax.naming.NamingException:
Unhand
led error in lookup [Root exception is
weblogic.rmi.extensions.RemoteRuntimeExce
ption - with nested exception:
[java.rmi.UnmarshalException: Exception waiting for response; nested
exception is:
java.io.EOFException: endOfStream called by muxer]]
28460 [Thread-2] WARN no.sb1.via.internet.is.log.SystemFel - Stack trace:
java.rmi.UnmarshalException: Exception waiting for response; nested
exception is
java.io.EOFException: endOfStream called by muxer
java.io.EOFException: endOfStream called by muxer
at
weblogic.iiop.MuxableSocketIIOP.endOfStream(MuxableSocketIIOP.java:63
9)
at
weblogic.socket.NTSocketMuxer.processSockets(NTSocketMuxer.java:586)
at
weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:
24)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:139)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
--------------- nested within: ------------------
weblogic.rmi.extensions.RemoteRuntimeException - with nested exception:
[java.rmi.UnmarshalException: Exception waiting for response; nested
exception i
s:
java.io.EOFException: endOfStream called by muxer]
at weblogic.rmi.internal.ProxyStub.invoke(ProxyStub.java:60)
at $Proxy70.resolve(Unknown Source)
at
weblogic.jndi.cosnaming.IIOPInitialContext.lookup(IIOPInitialContext.
java:107)
at javax.naming.InitialContext.lookup(InitialContext.java:350)
</snip>
I'm, running jdk1.3.1_01 on both the 5.1 and the 6.1 server, the 5.1 being
on solaris, and the 6.1 on windows (*urk*) The generated jar deploys fine on
5.1.I think you need to get the interop patch and take this up with
support.
andy

Similar Messages

  • Oracle 8.1.5 (6?) interoperability with Java/C++ clients and various ORBs

    Hi
    Machine/OS: SGI VWS 540, 256MB RAM, Win2000
    Oracle: Oracle 8i (8.1.6)/Enterprise Edition Release 2
    Problem:
    I want to bind to a CORBA object (the pure-CORBA bank example) from C++.
    I found an article in the FAQ about interoperability with C++ and downloaded an archive with all the necessary code. But it won't compile. It has been tested on Visibroker 3.2, but now I get compilation errors (GIOP::ObjectKey to CORBA::OctetSequence conversion).
    Is there an update of this article that uses 8i 8.1.6 and Visibroker 3.4 (which ships with 8.1.6)? I compile it using MS VC++ 6.0 SP 4.
    I can make the current files compile, but then get a link error (login.lib uses symbols already defined in msvcrt.lib).
    Thank you
    Bart De Lathouwer


  • Working on ETL tools interoperability using Common Warehouse Model (CWM)

    Hi All,
    It's just a piece of information, not a question.
    I have been working on proving ETL tool interoperability using the Common Warehouse Metamodel (CWM), an OMG standard. The whole concept is to take the metadata out of an ETL tool, say OWB, and put it into a CWM metadata repository; this metadata can then be used for building the same project in any other tool, say Informatica, or even in the same ETL tool.
    The main thing in this process is to map each ETL tool to the CWM concepts and then, using model-to-model transformations (technologies like Xtend), set up communication between different ETL tools.
    Till now I have worked with OWB only. I, with my team, have extracted all information from an OWB project (of medium complexity: two Oracle modules (schemas) and a few tables, views and mappings with various operators), put it into the CWM repository, and extracted it back from the CWM MDR into OWB. We haven't worked with any other ETL tool because no other ETL tool is available to us. We will be working with Pentaho Kettle in the near future and try to prove the whole process as two-way communication.
    The whole process can be described in the following steps:
    1. Creation of a manual OWB Ecore model (a model representation in the Eclipse Modeling Framework) which gives all dependencies and relationships of OWB objects like Project, OracleModule etc.
    2. Creation of a CWM Ecore model from the Rational Rose mdl which OMG provides on their site.
    3. Generation of Java code (Gen Model) from the above-mentioned Ecore model (needed to create an object from OWB).
    4. Extraction of the project from OWB using the public views exposed by OWB itself. You can refer to http://download.oracle.com/docs/cd/B31080_01/doc/owb.102/b28225/toc.htm for OWB public views and other APIs.
    5. (Step 4 is actually a part of this step.) Writing Java code which uses a JDBC connection to access the OWB public views and the Ecore model as imported Java files (step 3 was done for this part only). This Java code returns an OWB project object (an instance of the Ecore model) which is used in further steps; a sketch of this is shown below.
    6. Writing Xtend code to do a model-to-model transformation from OWB to CWM.
    7. Writing an openArchitectureWare workflow to combine all the steps into one, which takes the output of the Java code (step 5), puts it into the Xtend code (step 6), and then hands the Xtend output to the XMIWriter (an OAW component) to write an XMI which is actually a CWM Ecore model instance.
    8. Saving the above XMI (CWM model instance) to the CWM MDR using Hibernate and Teneo.
    In the same way we can extract metadata from the CWM MDR and put it into OWB. The only problem with OWB is that we cannot persist OWB objects in OWB repositories directly, as the OWB tables are very cryptic and tough to understand. So for that we have used TCL scripts (OMB Plus scripts) to create a project in OWB from the OWB Ecore instance. You can refer to the above Oracle documentation link for the TCL scripts.
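    [For illustration, step 5 boils down to plain JDBC against the OWB public views. A minimal sketch; the connection details and the view/column names here are illustrative placeholders, not the actual OWB names — consult the Oracle documentation link above for the real public views.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OwbViewReader {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            // Placeholder connection details; point these at the OWB design repository.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@owbhost:1521:orcl", "owb_repos", "secret");
            Statement st = con.createStatement();
            // all_iv_mappings_placeholder stands in for whichever OWB public view
            // you need; see the OWB API reference for the real view names.
            ResultSet rs = st.executeQuery(
                    "SELECT map_name, business_name FROM all_iv_mappings_placeholder");
            while (rs.next()) {
                // Each row would be turned into an instance of the OWB Ecore model.
                System.out.println(rs.getString(1) + " : " + rs.getString(2));
            }
            rs.close();
            st.close();
            con.close();
        }
    }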
    Let me know if I can assist you if you are working on the same.
    You can mail me for any queries. My email id is [email protected].
    Thanks,
    Deepak

    Hi
    1. Why do we need to install another standalone HTTP server in a separate home? Where do we use that server?
    DA: The separate HTTP server is for the Workflow Monitor, which is not necessary (it has some use cases mind you).
    2. To make the OWB work correctly while using ETL features, do we always need to run Workflow Configuration Assistant, because I wasn't able to generate code from OWB editor after building a mapping while Workflow Configuration Assistant wasn't running.
    DA: Not necessary. What error did you get? Mappings can be designed, deployed and executed without Workflow. Workflow can be used for orchestrating the mappings (i.e. running a bunch of them in a specific order with other tasks).
    3. Whenever I try to save my work in OWB, I get an error , message : Preference.properties (Access is denied). Though it saves my work but I don't understand why I am getting this error. It looks like OWB is trying to access some Property from the Preferences (Tools menu) but can't access.
    DA: It sounds like the directory where you have installed OWB does not have permissions for the OS user you are executing it. Is the install user different from the execution user? Either run using the installed user, or change the permissions of the directories (grant the executing user write permissions under all directories under owb).
    4. I also get error while closing the Mapping Editor :-
    DA. same issue as 3.
    Cheers
    David

  • Berkeley DB Java Edition (JE) and JRuby Interoperability

    I finally got around to doing a quick test of calling Berkeley DB Java Edition (JE) from JRuby (JRuby is a 100% pure-Java implementation of Ruby).
    Before we get to JE and JRuby you probably want to know the answer to this question: "Why you would want to run Ruby on a JVM?" The answer is threefold:
    1. Ruby Performance. A large amount of effort has been put into tuning contemporary JVMs (e.g. Hotspot, Java 6, etc.) and Ruby programmers (through JRuby) can benefit from these tuning efforts. The JRuby guys have set a goal to make JRuby the fastest Ruby implementation available and Sun is certainly throwing their weight behind that effort.
    2. Portability. JRuby is a Ruby interpreter that runs anywhere a Java 5 JVM runs. You download it as a single tar.gz and it will run pretty much anywhere.
    3. Legacy Code. JRuby makes legacy Java apps and libraries available to Ruby programmers (did you ever think you'd see the word "legacy" next to the word "Java"?).
    JE interoperability with JRuby is important because it means that Ruby programmers now have a simple, embeddable, ACID storage engine (JE) available to them.
    To test this interoperability, I cobbled together a simple Ruby test program which does the following:
    * Opens an Environment, Database, and Transaction
    * Creates 10 records with keys 1..10 and marshaled Ruby Time instances as the corresponding data. This uses the Ruby Marshal package for the data binding and the JE Integer binding on the key side. There's no reason why you couldn't use different marshaling packages or methods for keys and data.
    * Commits the transaction,
    * Performs a Cursor scan to read those 10 records and prints out the Time instances, and
    * Searches for and reads the record with key 5 (an arbitrary key) and prints out the Time instance that is the corresponding data
    By the way, hats off to the JRuby developers: all of this code "just worked", out of the box, and most of my two hour investment was spent learning enough basic Ruby to make it all work. If you already know Ruby and JE, then demonstrating this interoperability would take you all of about 10 minutes.
    This was all done at the "base API" level of JE and no modifications to JE were required. I used Transactions in my code, but there's no reason that you need to. Mark and I have been talking about how to integrate JE's Direct Persistence Layer (DPL) with JRuby and we think it can be done with some remodularization of some of the DPL code. This is exciting because it would provide POJO ACID persistence to Ruby programmers.
    Linda and I have been talking about whether it makes sense to possibly use Ruby as a scripting platform for JE in the future. Given how easy it was to bring up JE and JRuby, this certainly warrants some further thought.
    The Ruby code and corresponding output is shown below. By the way, if you see something that I didn't do "The Ruby Way", feel free to let me know.
    I'd love to hear about your experiences with JE and JRuby. Feel free to email me at charles.lamb at <theobviousdomain dot com>.
    require 'java'
    module JESimple
      require 'date'
      # Include all the Java and JE classes that we need.
      include_class 'java.io.File'
      include_class 'com.sleepycat.je.Cursor'
      include_class 'com.sleepycat.je.Database'
      include_class 'com.sleepycat.je.DatabaseConfig'
      include_class 'com.sleepycat.je.DatabaseEntry'
      include_class 'com.sleepycat.je.Environment'
      include_class 'com.sleepycat.je.EnvironmentConfig'
      include_class 'com.sleepycat.je.OperationStatus'
      include_class 'com.sleepycat.je.Transaction'
      include_class 'com.sleepycat.bind.tuple.IntegerBinding'
      include_class 'com.sleepycat.bind.tuple.StringBinding'
      # Create a JE Environment and Database.  Make them transactional.
      envConf = EnvironmentConfig.new()
      envConf.setAllowCreate(true)
      envConf.setTransactional(true)
      f = File.new('/export/home/cwl/work-jruby/JE')
      env = Environment.new(f, envConf);
      dbConf = DatabaseConfig.new()
      dbConf.setAllowCreate(true)
      dbConf.setSortedDuplicates(true)
      dbConf.setTransactional(true)
      db = env.openDatabase(nil, "fooDB", dbConf)
      # Create JE DatabaseEntry's for the key and data.
      key = DatabaseEntry.new()
      data = DatabaseEntry.new()
      # Begin a transaction
      txn = env.beginTransaction(nil, nil)
      # Write some simple marshaled strings to the database.  Use Ruby
      # Time just to demonstrate marshaling a random instance into JE.
      for i in (1..10)
        # For demonstration purposes, use JE's Binding for the key and
        # Ruby's Marshal package for the data.  There's no reason you
        # couldn't use JE's bindings for key and data or vice versa or
        # some other completely different binding.
        IntegerBinding.intToEntry(i, key)
        StringBinding.stringToEntry(Marshal.dump(Time.at(i * 3600 * 24)),
                                    data)
        status = db.put(txn, key, data)
        if (status != OperationStatus::SUCCESS)
          puts "Funky status on put #{status}"
        end
      end
      txn.commit()
      # Read back all of the records with a cursor scan.
      puts "Cursor Scan"
      c = db.openCursor(nil, nil)
      while (true) do
        status = c.getNext(key, data, nil)
        if (status != OperationStatus::SUCCESS)
          break
        end
        retKey = IntegerBinding.entryToInt(key)
        retData = Marshal.load(StringBinding.entryToString(data))
        puts "#{retKey} => #{retData.strftime('%a %b %d')}"
      end
      c.close()
      # Read back the record with key 5.
      puts "\nSingle Record Retrieval"
      IntegerBinding.intToEntry(5, key)
      status = db.get(nil, key, data, nil)
      if (status != OperationStatus::SUCCESS)
        puts "Funky status on get #{status}"
      end
      retData = Marshal.load(StringBinding.entryToString(data))
      puts "5 => #{retData.strftime('%a %b %d')}"
      db.close
      env.close
    end
    Cursor Scan
    1 => Fri Jan 02
    2 => Sat Jan 03
    3 => Sun Jan 04
    4 => Mon Jan 05
    5 => Tue Jan 06
    6 => Wed Jan 07
    7 => Thu Jan 08
    8 => Fri Jan 09
    9 => Sat Jan 10
    10 => Sun Jan 11
    Single Record Retrieval
    5 => Tue Jan 06

    In my previous post (Berkeley DB Java Edition in JRuby), I showed an example of calling JE's base API layer and mentioned that Mark and I had been thinking about how to use the DPL from JRuby. Our ideal is to be able to define classes in Ruby, annotate those class definitions with DPL-like annotations, and have the JE DPL store them. There are a number of technical hurdles to overcome before we can do this. For instance, Ruby classes defined in JRuby do not map directly to underlying Java classes; instead they all appear as generic RubyObjects to a Java method. Granted, it would be possible for the DPL to fish out all of the fields from these classes using reflection, but presently it's just not set up to do that (hence the modification to the DPL that I spoke about in my previous blog entry). Furthermore, unlike Java, Ruby allows classes to change on the fly (add/remove fields and methods), causing more heartburn for the DPL unless we required that only frozen Ruby classes could be stored persistently.
    On thinking about this some more, we realized that there may be a way to use the DPL from JRuby, albeit with some compromises. The key to this is that in JRuby, if a Java instance is passed back to the "Ruby side" (e.g. through a return value or by calling the constructor for a Java class), it remains a Java instance, even when passed around in JRuby (and eventually passed back into the "Java side"). So what if we require all persistent classes to be defined (i.e. annotated) on the Java side? That buys us the standard DPL annotations (effectively the DDL), freezes the classes that the DPL sees, and still lets us benefit from the POJO persistence of the DPL. All of this can be done without modification to JE or the DPL using the currently available release. I cooked up a quick example that builds on the standard "Person" example in the DPL doc and included the code below.
    require 'java'
    module DPL
      require 'date'
      # Include all the Java and JE classes that we need.
      include_class 'java.io.File'
      include_class 'com.sleepycat.je.Environment'
      include_class 'com.sleepycat.je.EnvironmentConfig'
      include_class 'com.sleepycat.persist.EntityCursor'
      include_class 'com.sleepycat.persist.EntityIndex'
      include_class 'com.sleepycat.persist.EntityStore'
      include_class 'com.sleepycat.persist.PrimaryIndex'
      include_class 'com.sleepycat.persist.SecondaryIndex'
      include_class 'com.sleepycat.persist.StoreConfig'
      include_class 'com.sleepycat.persist.model.Entity'
      include_class 'com.sleepycat.persist.model.Persistent'
      include_class 'com.sleepycat.persist.model.PrimaryKey'
      include_class 'com.sleepycat.persist.model.SecondaryKey'
      include_class 'com.sleepycat.persist.model.DeleteAction'
      include_class 'persist.Person'
      include_class 'persist.PersonExample'
      # Create a JE Environment and Database.  Make them transactional.
      envConf = EnvironmentConfig.new()
      envConf.setAllowCreate(true)
      envConf.setTransactional(true)
      f = File.new('/export/home/cwl/work-jruby/JE')
      env = Environment.new(f, envConf);
      # Open a transactional entity store.
      storeConfig = StoreConfig.new();
      storeConfig.setAllowCreate(true);
      storeConfig.setTransactional(true);
      store = EntityStore.new(env, "PersonStore", storeConfig);
      class PersonAccessor
        attr_accessor :personBySsn, :personByParentSsn
        def init(store)
          stringClass = java.lang.Class.forName('java.lang.String')
          personClass = java.lang.Class.forName('persist.Person')
          @personBySsn = store.getPrimaryIndex(stringClass, personClass)
          @personByParentSsn =
            store.getSecondaryIndex(@personBySsn, stringClass, "parentSsn");
        end
      end
      # PersonAccessor's initialize takes no arguments; the store is passed via init.
      dao = PersonAccessor.new
      dao.init(store)
      personBySsn = dao.personBySsn
      person = Person.new('Bob Smith', '111-11-1111', nil)
      personBySsn.put(person);
      person = Person.new('Mary Smith', '333-33-3333', '111-11-1111')
      personBySsn.put(person);
      person = Person.new('Jack Smith', '222-22-2222', '111-11-1111')
      personBySsn.put(person);
      # Get Bob by primary key using the primary index.
      bob = personBySsn.get("111-11-1111")
      puts "Lookup of Bob => #{bob.name}, #{bob.ssn}"
      children = dao.personByParentSsn.subIndex(bob.ssn).entities()
      puts "\nRetrieving children of Bob"
      while (true) do
        child = children.next()
        break if child == nil
        puts "#{child.name}, #{child.ssn}"
      end
      children.close()
      store.close
      env.close
    end
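
    [For context, the persist.Person class referenced above lives on the "Java side". A minimal sketch of what it might look like, mirroring the standard Person example from the DPL documentation; the public getters are an assumption added here to match the bob.name / bob.ssn accesses in the Ruby code.]

    package persist;

    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.SecondaryKey;
    import static com.sleepycat.persist.model.Relationship.MANY_TO_ONE;

    @Entity
    public class Person {

        @PrimaryKey
        private String ssn;            // key used by getPrimaryIndex above

        @SecondaryKey(relate = MANY_TO_ONE)
        private String parentSsn;      // key used by getSecondaryIndex above

        private String name;

        public Person(String name, String ssn, String parentSsn) {
            this.name = name;
            this.ssn = ssn;
            this.parentSsn = parentSsn;
        }

        private Person() {}            // empty constructor required by the DPL

        public String getName() { return name; }
        public String getSsn() { return ssn; }
        public String getParentSsn() { return parentSsn; }
    }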

  • Mac/Windows interoperability

    Dear nice people
    I am attempting to optimise a small business environment (see specifications below) containing a mix of Macs and PCs, currently using Windows Server 2012 on a GBit Ethernet (1000BASE-T) infrastructure. The Macs are not being backed up. I am considering locating a FireWire 800 RAID-based mass storage device, such as the G-Technology 4TB G-RAID Professional High-Performance Dual-Drive Hard Drive, centrally to the three Macs and in addition to the Windows Server, because:
    TimeMachine would backup Applications, folders, files and settings on each Mac allowing simple full restoration from a clean OS install
    The storage device is a plug-and-play device for these Macs and the staff are not IT literate
    TimeMachine can be configured easily on their Macs to such a device by the staff
    These Macs do not have Thunderbolt
    Connectivity is easy via Firewire 800
    If the staff just use the device for TimeMachine backups only, that is a good result. They may find that FW800 is faster than GBit Ethernet on what appears to me to be a slow Windows server anyway, so staff may use the storage device to store Mac-critical business files instead of the Windows server: their choice
    Assuming the above is reasonable, I have some questions. If you are able to answer one but not all of them, please clearly state which question you are addressing:
    Do I need to be concerned if the staff create an FW800 ring instead of a daisy chain when connecting to each other and/or the storage device?
    Is an FW800 ring actually desirable - i.e. faster?
    How can I address the backup needs of the two isolated Macs (see specifications below) which would be more than 5m from the storage device? (My understanding is that FW800 cables are generally 2m and that the specification supports 3m max).
    (in 3 above) I realise I could specify an FW800 hub. Can you recommend such a hub for this scenario?
    Each Mac would have a GBit Ethernet connection to Windows and an FW800 connection to the storage device. Are multiple network connections like this supported on OS X (specifically 10.8.4)?
    Do I have any hardware interoperability issues given the age of some of the Macs? Specifically, is the FW800 specification the same on the 2008 and 2012 Macs?
    Thank you in advance
    Grytr
    Specifications
    Hardware & OS
    8 x PC
    1 x Mac Pro 5,1 (mid 2012) 16GB 10.8.4
    1 x Mac Pro 3,1 (early 2008) 10GB 10.8.4
    1 x iMac 8,1 (20 inch early 2008) 2GB 10.8.4
    The above three Macs are within 3m of each other and all Macs and PCs connected to Windows Server using 1000BASE-T
    2 x iMac 8,1 (24 inch early 2008) 2GB 10.8.4 approx 5m or more from the above Macs connected to Windows Server using 1000BASE-T.
    Applications
    A mix of native Mac applications such as MS Office, Sketchup Pro 2013, Adobe CS6 and VectorWorks 2013 (with very large models), plus Windows 7 Professional 64-bit applications running under Parallels Desktop 9 for Mac, such as SAGE 50 Accounts Professional and Rental Desk NX. At the moment, business-critical PC & Mac files are stored on the Windows Server but the Macs are not backed up.

    All options are easy to remove, so don't worry about that. Also, just in case you are worried about viruses, spyware, etc. they will not spread to OS X if you get infected.
    You really have 3 options:
    Boot Camp
    The installer does all the work for you, installs the drivers when Windows is finished installing, and it's generally very easy. The nice thing about this option is you have a real version of Windows. When you reboot into Windows, you ARE running Windows, just like any other laptop running Windows. The downside is that you have to reboot every time you need a Windows app.
    VMware Fusion
    This is a great option as well. Also easy to install. If you just use Fusion, it creates an entire "Windows machine" as a single file within OS X. If it gets infected or there is an issue, you can just delete the file. You can also make a backup of the "machine" and restore it if there are problems. Also, you can use Fusion WITH Boot camp, and get the best of both worlds. This is what I do. That way, if you need to boot into Windows, you restart and do it. If you just need to run a few programs, you can use Fusion and run them from inside of OSX, all on the same installation of Windows. Fusion just uses your Boot camp partition as its machine.
    Parallels
    Pretty much the same as Fusion, and you can use the Boot camp partition for this one as well.
    I would really recommend using Boot camp and Fusion together, but if you don't see any need to actually boot into Windows, and you only need a few programs now and then, Fusion will work fine.
    Updates are still necessary, by the way. The Windows install is just as vulnerable as any other Windows machine, unfortunately, but again, it won't spread to OS X.
    All three options are very easy to remove. Boot camp has an uninstall routine that will wipe out Windows and repartition the hard drive back to full size in about 2 or 3 minutes!!
    And yes, you need a full version of Windows.

  • Issue with Java - PHP interoperability

    Hi,
    There are some converters written in PHP that can take raw Wikipedia data and output good HTML. I wanted to make use of these in my Java code.
    So in my Java Web App, I wanted to run the PHP parser, get hold of the InputStream and push it to my ServletOutputStream.
    Code Snippet:
       String command = "php testparser.php Anarchism.wikimarkup";
       Process proc = Runtime.getRuntime().exec(command);
       InputStream in = proc.getInputStream();
       InputStreamReader isr = new InputStreamReader(in);
       BufferedReader br = new BufferedReader(isr);
       String line = null;
       while ((line = br.readLine()) != null) {
         System.out.println(line);
       }
    But the problem here is that the PHP process never stops and hence the buffer never ends. The program waits in an infinite loop in readLine().
    Please let me know if anyone has tried this and what's a better way to handle interoperability between PHP and Java.
    Thanks,
    Phani

    Phanikumar_Bhamidipati wrote:
    > Yeah, I had a look at the document. But as per my understanding, the way the PHP engine runs is different from normal execs.
    I don't see how it can 'run different', and in my experience it doesn't. PHP sends output to stdout and stderr and reads from stdin. When PHP terminates it will close stdout and stderr and, if you have followed the recommendations in the reference, your readLine() will return null.
    > Because the same code ran fine when I automated unzipping a set of files using the "bunzip2" command.
    If you read the article, it explains a possible reason for this. BUT until you implement the recommendations you will not know what is wrong.
    > I tried using the Process.waitFor() method as well, but the result is the same (infinite loop).
    This almost certainly has nothing to do with Process.waitFor() and probably everything to do with buffers filling (probably stderr).
    Until you post the code with the recommendations implemented that exhibits the same blocking problem, it is a waste of time anyone responding further.
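
    [To make the reply concrete: the standard fix is to drain stdout and stderr on separate threads before calling waitFor(). A minimal sketch, reusing the poster's command string; the "gobbler" helper is an assumption about what the referenced article recommends.]

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;

    public class PhpRunner {
        // Drains a stream on its own thread so the child process cannot
        // block when one of its output buffers fills up.
        static Thread gobble(final InputStream in, final String tag) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        BufferedReader br = new BufferedReader(new InputStreamReader(in));
                        String line;
                        while ((line = br.readLine()) != null) {
                            System.out.println(tag + line);
                        }
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
            t.start();
            return t;
        }

        public static void main(String[] args) throws Exception {
            Process proc = Runtime.getRuntime()
                    .exec("php testparser.php Anarchism.wikimarkup");
            Thread out = gobble(proc.getInputStream(), "OUT: ");
            Thread err = gobble(proc.getErrorStream(), "ERR: ");
            int exit = proc.waitFor(); // safe now that both streams are drained
            out.join();
            err.join();
            System.out.println("php exited with " + exit);
        }
    }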

  • Shared Calendar Interoperability (2013 / 2007)

    Hello,
    We have two Exchange servers in our Organization, a 2007 where most mailboxes reside, and a 2013 where we are slowly doing a one off user migration here and there to test the waters so to speak.
    We have users that constantly have/had problems with Outlook freezing on send, or other connectivity issues. These are our first target people to move to the new server. I have one user, moved yesterday, who is trying to share a calendar she calls "Managers Schedules" with other managers. She created this calendar, and keep in mind she is on the 2013 server now. She shared it with reviewer permissions to 4 other employees, but those 4 are still on the 2007 server.
    The other people are getting an email with links to view the calendar, but they say they cannot open it. The person sending this is also getting a message that they should publish the calendar. When they click yes to that, IE opens up to Outlook Web Access?! I said let's just see what happens if you log into OWA. She did, and it gave her some options to make the calendar public or private and gave her two links to copy. Now I was able to open the one link... it downloaded an ical file; however, I am on the 2013 server. No one else can open the links.
    So long story short, is there any kind of interoperability between shared calendars on Exchange 2007 and 2013 (and vice versa)? Or am I going to have to move these 4 other users to the new server, which in turn will likely cause a chain reaction, since they are managers and have access to their employees' email inboxes as additional mailboxes in Outlook.

    Hi,
    We can’t directly share the calendar which is not the default calendar for a user. Generally, we would share a custom calendar by the following steps:
    Right-click the custom Calendar, click Share > E-mail Calendar. Then the calendar would be added as an attachment in a message.
    When the recipients receive the message, they click Open this Calendar, and a window will prompt asking whether to add this Internet Calendar to Outlook. You should only open calendars from sources you know and trust.
    Please e-mail calendar in Outlook Online mode by the steps above. Then check whether the issue persists.
    Regards,
    Winnie Liang
    TechNet Community Support

  • MST PVSTP interoperation

    Hello,
    I've read the "understanding MSTP article" from cisco's website and I have several uncertainties.
    These have deepened even more after performing several experiments.
    I've setup a test scenario with the following configuration:
    C1 Cisco PVST ---- C2 MST ---- C3 MST non-cisco switch (doesn't know PVSTP)
    C1 has vlans 1-10 for which it must necessarily be root bridge
    C2 must be root bridge for all other vlans
    C3 will transport 1-10 + the other vlans
    C1 cannot be migrated to MST, while C2, if possible, should interoperate with the non-Cisco MST-enabled switch.
    What I have done:
    Setup 1:
    C2 mst root bridge for all vlans
    C3 learnt of C2 being the root bridge
    C1 the PVST also learnt this (as far as I've read all communication with the PVST is done via the IST-CST instance)
    Although this worked just fine, unfortunately it wasn't what I was searching for.
    Setup 2:
    C1 -- C2
    C1 lower priority for vlans 1-10 (disabled spanning tree on the other vlans / or removed them from the trunk to C2 if not required to be present there)
    C2 reported:
    SPANTREE-2-ROOTGUARD_UNBLOCK: Root guard blocking port ...
    And the port was shown as blocked in both the IST0 and the other MSTIs.
    I've also tried the alternate configuration (not recommended) from:
    http://www.cisco.com/warp/public/473/147.html#alternate_configuration
    without any luck.
    Disabling PVSTP on the C1 interface to C2 of course made the C2 port to be removed from blocking as expected.
    I have several questions in regard to this:
    a) Why are both IST0 and the boundary ports for the MSTIs placed in Blocking?
    b) According to that article, shouldn't there be a way to have the PVST switch be root bridge for all the instances present on it?
    c) What alternate setup could achieve the desired redundancy while maintaining C1 as root bridge for vlans 1-10,
    C2 as root bridge for the others, and
    C3 interoperability with C2 (C3 only knows MST and RSTP)?
    Any advice would be greatly appreciated.
    Thanks,
    Mihai

    Hi Mihai,
    The code was designed especially to prevent what you are trying to do:-( The problem is that C2 is only running one instance at the boundary to C1, the CIST. So for each of its ports leading to C1 it can only block all vlans or forward all the vlans.
    If C1 is root for certain vlans, C2 will have to block one of its port to C1 for those vlans. This means that C2 can only block ALL its vlans to C1, considering the rule stated above.
    On the other hand, if C2 is the root for the CIST, it will need to put both its ports to C1 in forwarding, which means put ALL the vlans to forwarding on both ports.
    You clearly see the contradiction, and that's what the inconsistency you are getting is trying to show.
    Why do you need C1 to be root for some vlans? Is that for some load balancing issues? Because you can achieve load balancing without having C1 being the root.
    The only solution to your problem seems to have C2 run PVST considering your constraints. It would be much better if you could move C1 to MST and have it participate in the same region as C2 of course...
    Regards,
    Francois

Cost of interoperation times

    hi expert,
    My situation is: I created one routing for an FG and entered a standard queue time for each operation. When I create a production order with this routing and calculate the cost, I expect the entered time to be included in costing, but the system does not include the standard queue time in the process time or in the cost of each activity.
    (Ex: my machine speed is 100 pcs per minute and the standard queue time is 90 minutes. When I create a production order for 100 pcs and calculate the cost, the system shows a process time of 1 minute, not 91 minutes.)
    As above, I want to know how to make the standard queue time, or other interoperation times, count in costing.
    regards.
    kittisak.

    Hi,
    Queue times defined this way are used only for scheduling and lead-time calculation, not costing.
    If you want to make these times relevant to costing, you need to define them either as a standard value in a processing operation with its proper formula, or as an operation by itself, depending on what best suits the situation in the shop.
    Regards,
    Mario

  • Have you heard about the latest addition to our SAP Microsoft Interoperability Suite?

    As you have seen from our coverage of the Microsoft SharePoint Conference earlier this month, we introduced a new interoperability solution named Power BI Connectivity to SAP BusinessObjects BI.
    This is a great solution that makes it possible for business users to continue to work in a familiar environment such as Microsoft Excel to access trusted, enterprise data through a SAP BusinessObjects universe. Business users can access data coming from a variety of data sources, including SAP systems.
    Read Deepa Sankar's great blog on SCN introducing Power BI and "be in the know".
    Enjoy! And let us know what you think.

    Yes, read Deepa Sankar's blog on SCN and learn all about it.

  • Interoperability of Fusion Middleware Products.

    Hi all,
    I have following scenario where I am looking for some good suggestions.
    I have a cluster of 4 weblogic servers 10.3.5 where need to install Oracle Identity Manager 11.1.2 (11gR2) along with OBIEE 11.1.1.6 (11gR1) under the same Middleware Home.
    Following are the points to be considered for this installation.
    1- As per the documentation provided by oracle for interoperability
    http://docs.oracle.com/cd/E27559_01/doc.1112/e29569/interop_11g.htm#BCEJEFAF
    Section 3.3.1, “When installing Oracle Fusion Middleware products, be sure that each Middleware home you create contains only products that are at the same version or patch set.”
    The reason is given as *“Each product has its own maintenance schedule and it is possible that future interoperability issues could result.”*
    Further in section 3.4.2 it is mentioned *“When you configure a domain, ensure that all products configured within the domain are at the same patch set.* For example, do not configure Oracle Identity and Access Management 11g Release 1 (11.1.1.5.0) in the same domain with Oracle SOA Suite 11g Release 1 (11.1.1.6.0).”
    2- Since SOA is still having version 11.1.1.6.0(11gR1) and is required for the IDM it is mentioned that “One exception to this rule is the installation of Oracle SOA Suite 11g Release 1 (11.1.1.6.0) in the same Middleware home as Oracle Identity Manager 11g Release 2 (11.1.2). Oracle Identity Manager is one of the Oracle Identity and Access Management products. It requires Oracle SOA Suite.”
    3- Please also see section 3.4.3 of the same document, which states: "Oracle often releases Oracle Identity Management and Oracle Identity and Access Management products on a schedule different from the schedule for the other Oracle Fusion Middleware products. As a result, it is common to use a different release or patch set of an Oracle Identity Management or Oracle Identity and Access Management product with your Oracle Fusion Middleware products, as long as they are not configured within the same domain. For example, you can use Oracle Identity and Access Management 11g Release 1 (11.1.1.5.0) products with your Oracle SOA Suite 11g Release 1 (11.1.1.6.0) products, if they are in separate domains. In these scenarios, the Oracle Identity and Access Management products are typically installed on a separate host and in a separate Middleware home."
    4- OBIEE 11.1.1.6.0 also requires components from the SOA 11.1.1.6.0 e.g OWSM Policy Manager.
    Now, for the time being, as per the document it is possible for me to install OBIEE 11.1.1.6.0 (11gR1), SOA 11.1.1.6.0 (11gR1) and IDM 11.1.2 (11gR2) under the same Middleware home with separate domains. The issue is how to avoid compatibility issues in the future, since OBIEE, SOA and IDM have independent upgrade cycles and patch release schedules. If any patch or upgrade is released and required for one of the above products, might it put the remaining products in an incompatible state, rendering them dysfunctional?

    Will the 521 APs work with the 2112 controller?
    No.
    Are there any cheaper APs than the 1140AG that will work with the 2112 controller?
    You could try the 1130.  1140 supports 802.11n while 1130 is a/b/g only.
    Is the 1240AG a good choice for a non-climate controlled warehouse environment?
    The 1240, like the 1130, supports a/b/g only, while the 1250 supports a/b/g/n. The newer 1260 is controller-based only. The 1240, 1250 and 1260 use external antennae; that's an OPTION.

  • Windows Communcation Foundation - JSR 172 Web Stub - interoperability

    Hi! I am just playing with WCF (Windows Communication Foundation) and the JSR 172 web stub generation utility of WTK 2.5beta. Is there a known problem with the import expression in XML?
    WCF generates:
    ?wsdl -> web service descrption that has import- references to
      <?xml version="1.0" encoding="utf-8" ?>
    - <wsdl:definitions name="EchoService" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="http://tempuri.org/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:i0="http://schemas.microsoft.com/ws/2005/02/mex/bindings" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex">
      <wsdl:import namespace="http://schemas.microsoft.com/ws/2005/02/mex/bindings" location="http://localhost:8080/echo?wsdl=wsdl0" />
    - <wsdl:types>
    - <xsd:schema targetNamespace="http://tempuri.org/Imports">
      <xsd:import schemaLocation="http://localhost:8080/echo?xsd=xsd0" namespace="http://tempuri.org/" />
      <xsd:import schemaLocation="http://localhost:8080/echo?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/" />
      </xsd:schema>
      </wsdl:types>
    + <wsdl:message name="IEchoService_Echo_InputMessage">
      <wsdl:part name="parameters" element="tns:Echo" />
      </wsdl:message>
    - <wsdl:message name="IEchoService_Echo_OutputMessage">
      <wsdl:part name="parameters" element="tns:EchoResponse" />
      </wsdl:message>
    - <wsdl:portType name="IEchoService">
    - <wsdl:operation name="Echo">
      <wsdl:input wsaw:Action="http://tempuri.org/IEchoService/Echo" message="tns:IEchoService_Echo_InputMessage" />
      <wsdl:output wsaw:Action="http://tempuri.org/IEchoService/EchoResponse" message="tns:IEchoService_Echo_OutputMessage" />
      </wsdl:operation>
      </wsdl:portType>
    - <wsdl:service name="EchoService">
    - <wsdl:port name="MetadataExchangeHttpBinding_IEchoService" binding="i0:MetadataExchangeHttpBinding_IEchoService">
      <soap12:address location="http://localhost:8080/echo" />
    - <wsa10:EndpointReference>
      <wsa10:Address>http://localhost:8080/echo</wsa10:Address>
      </wsa10:EndpointReference>
      </wsdl:port>
      </wsdl:service>
      </wsdl:definitions>?wsdl=wsdl0
    ?xsd=xsd0
    ?xsd=xsd1Anyone experienced the same problem and knows a solution ?
    Henning

    I have gotten a step further! The emulator had to be configured to run in the "secure" domain (as mentioned by some other people here), otherwise an HTTP response "400 bad request (invalid header name)" was produced. The J2ME web service is interoperable with WCF basicHttpBinding (without debugging, because debugging inserts unparseable SOAP code into the HTTP response).

  • SOAP Attachments - Streaming interoperability

    Hi
    I am really interested in streaming attachments in OC4J,
    but I assume they are not interoperable.
    When I try to use streaming for my web service, I get an error message that my JAX-RPC handler cannot unmarshal the WS operation (I know, weird error).
    So it seems that if I want to use streaming, I have to forget about JAX-RPC handlers (for example a logging handler).
    Additionally when you open WSDL (of WS + Streaming) file you will see that there is a xml part which is not a WS-I standard:
    <sa:stream-attachments xmlns:sa="http://oracle.com/schemas/webservices/streaming-attachments" name="attachments"/>
    - and using wsi-test-tools you will get a failure message that it is not
    a part of standard namespace: http://schemas.xmlsoap.org/wsdl/soap/
    So i guess that client of a Web Service with Streaming must be a JAVA client - and additionally with a usage of oracle WS jar libraries.
    So there is a question now, how to enable in OC4J sending big attachments with Interoperability.
    I think using AXIS2 module is not an answer, as it also uses JAX-RPC approach based on DOM xml parsers, which means loading whole attachments into memory (no matter if its MTOM, SwA or Base64Encoding).
    My question is: IS THERE ANY WAY TO SEND LARGE ATTACHMENTS IN OC4J WITH FULL INTEROPERABILITY (maybe some chunk options)?
    Thanks a lot for any answer
    Jerzy

    Hi, did you ever get a response to your question or figure it out on your own? I'm about to decide the same thing and was trying to find information on large attachments to SOAP messages and whether it's a good idea or not. Specifically, I'm wondering if these large files are read completely into memory at any point or if the API is smart enough to cache...

  • Interoperability issues between Nexus 5k and HP storageworks (8/20q)

    Hello community,
    I am trying to get a VM host and a Windows server to connect to their storage across a Nexus and HP (Qlogic) fabric switch. This is currently having issues, with the VM host unable to see the datastores, possibly due to an interoperability issue between Cisco and HP (Qlogic).
    I have configured and tested the connectivity using only the cisco nexus and this worked, I then tested it using only the HP fabric switch (HP 8/20q) and this also worked.
    However, when using the HP and Cisco Nexus as shown in the attached diagram, things stop working.
    The connection is using Native Fibre channel, On the Cisco side I performed the following steps
    Configured the Nexus with Domain ID 10 and the HP with Domain ID 20.
    Connected the 2 fabric switches on fc1/48 (Cisco) and port 0 (HP) and confirmed that the ISL came up (E_port 8G), I confirmed connectivity using fcping both ways.
    I connected the SAN to the Nexus and the servers to the HP
    Configured VSAN 10
    Added interfaces fc1/41 to 48 in VSAN 10
    Created 2 zones ( ESXI and Windows)
    Added the PWWN for the ESXI server and the MSA2040 to the ESXI zone
    Added the PWWN for the Windows 2k8 server and MS2040 to the Windows zones
    Created zoneset (Fabric-A) and added both the above zones in it
    Activated the FABRIC-A zoneset
    The result is that the zones and zoneset are synchronised to the HP switch .I confirmed that I was able to see the servers and SAN WWN in the correct zones on the HP.
    From the 8/20q switch I am able to fcping the SAN, Nexus and servers; however, the Nexus is only able to fcping the SAN and the HP. It returns a "no response from destination" when pinging the servers.
    I have added the FCID for all the units in the same zones to see if it makes any difference, to no avail; the result seems to be the same. I have gone through various Nexus/MDS/HP/Qlogic user guides and forums; unfortunately I have not come across any that covers this specific topology.
    source for HP user guide is here: http://h20565.www2.hp.com/hpsc/doc/public/display?docId=emr_na-c02256394
    I’m attaching the nexus config and partial view of the “show interface brief” showing the fibre channel port status
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/47     10     auto   on      up               swl    F       8    --
    fc1/48     10     auto   on      up               swl    E       8    --
    Any help and advice would be greatly appreciated. thanks in advance

    Hi all, after much reading, Walter Dey provided the hint that put me on the right track.
    By default the Nexus 5k is in interop mode 1. However, one of the requirements for this to be interoperable with other vendors is that the FC domain ID in the entire fabric needs to be between 97 and 127, as stated on the Cisco website.
    http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/interoperability/guide/ICG_test.html
    Another issue that had me and my colleague scratching our heads was a high level of CRC errors on the ISL interfaces. This was caused by an ARBFF settings mismatch between the Nexus and the HP. It was resolved by ensuring that the ARBFF setting on the HP was set to false and that the command "switchport fill-pattern ARBFF speed 8000" was configured on the ISL interface linking the 2 switches. (Note that Cisco's default setting for the ports is IDLE; until this is changed the link will not stabilise.)
    Thanks for all your help guys.

  • VTP3 interoperability with VTP2 in the same domain

    I'm trying to determine the best way to implement extended vlans in a network where all of the devices will not immediately support
    vtp3.  We do not need extended vlans campus wide, but only on the head of stack device in each closet of each building. These head of
    stack switches will be 3850's used as wireless access controllers.  The extended vlans will be for the building wireless networks.
    (the purchase of a 5760 is deferred until next year, so this is an interim wireless configuration). The majority of the switches in
    the network are vtp2 capable, but all are running in vtp1 mode because of a handful (maybe 5) that are vtp 1 only capable.  These
    5 will be removed from the network in the near future. 
    The VTP3 docs seem contradictory. They seem to imply that VTP3 and VTP2 are interoperable, and that
    VTP2 switches will accept configuration changes from a VTP3 primary server (I know the reverse is not true and do not need that). But then the same doc explicitly states that all devices in the same domain have to run the same VTP version. So which is true? Can the core switch and
    the connecting 3850's run VTP3 with the extended vlan ranges, while the rest of the network switches remain at version 2, with the
    version 2 switches able to accept configuration changes for vlans 1-1005 from the version 3 primary VTP switch? Also, if the VTP
    version 2 core switch is converted to VTP3, will the configuration revision be reset to 0? If VTP versions 2 and 3 are not interoperable as just described, then how do we get the extended-range, higher-numbered vlans into the network?
    Would vtp transparent be an option?  But then wouldn't the core switch, as well as the rest of the switches in the network, have to
    be configured as transparent as well? And if the core switch (a 6509E running 12.2(17r)SX7) is converted from vtp 2 to transparent,
    will it retain the existing version 2 vlan database, or will all of the vlans need to be reconfigured on the switch?
    What is the best way to implement extended vlans in the situation?
    Please ask questions if you need more information to respond.

    Will, did you find a solution for the above then?

  • Websphere EJB/JAAS interoperability failure

    Greetings all,
    I am attempting to deploy 2 applications on my Websphere server.
    The first contains a Servlet in a Web Module that performs JAAS authentication using Websphere's WSLogin class. The second contains an EJB in an EJB Module, which invokes the getCallerPrincipal method and performs some application based logic based on the result. The servlet code contacts the EJB after its JAAS authentication phase, and invokes the method in which getCallerPrincipal is called.
    My current status is: JAAS authentication is succeeding. I receive a valid Subject in which there is a valid Principal that corresponds to the username and password input WSLogin expects. EJB contact also succeeds, and my method is invoked. However, getCallerPrincipal, which I expect to return me the Principal successfully authenticated via JAAS, always and only returns Websphere's default 'UNAUTHENTICATED' principal.
    There is nothing helpful in the log files. I have spent a great deal of time in configuration, but I'm sure I could have made some mistake along the way.
    Does anyone have any clue about the above error?
    Has anyone successfully deployed a Websphere solution involving EJB-JAAS interoperability? If so, what were the critical elements to your solution/deployment?
    Thanks very much in advance,
    Peter
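
    [For comparison, the pattern that usually makes getCallerPrincipal() see the JAAS identity on WebSphere is to run the EJB call inside WSSubject.doAs, so the authenticated Subject is associated with the invocation thread. A minimal sketch; the credentials are placeholders and callMyEjb() is a hypothetical helper standing in for the servlet's home lookup and method call.]

    import java.security.PrivilegedAction;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;
    import com.ibm.websphere.security.auth.WSSubject;
    import com.ibm.websphere.security.auth.callback.WSCallbackHandlerImpl;

    public class JaasEjbCall {
        public static void main(String[] args) throws Exception {
            // "WSLogin" is WebSphere's JAAS application login alias.
            LoginContext lc = new LoginContext("WSLogin",
                    new WSCallbackHandlerImpl("user1", "myRealm", "password1")); // placeholders
            lc.login();
            Subject subject = lc.getSubject();

            // The Subject only reaches getCallerPrincipal() if the EJB call
            // runs on a thread associated with it, hence WSSubject.doAs.
            WSSubject.doAs(subject, new PrivilegedAction() {
                public Object run() {
                    callMyEjb(); // hypothetical: look up the home, invoke the bean method
                    return null;
                }
            });
        }

        static void callMyEjb() { /* JNDI lookup and EJB invocation omitted */ }
    }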

    hi peter,
    can you post a sample servlet-JAAS example, along with the policy files, that can be deployed on WebSphere version 5?
    it would be great to know how you did it...
    my email is [email protected]
    thanks,
    satyan
