DWDM interoperability

Hello Sir,
We are trying to use DWDM to extend an IP network.
Is it possible to use a Catalyst 6500 DWDM GBIC to connect to a Lucent Wavestar OLS 400G DWDM system?
Are there any limitations, or any documentation covering this scenario?
Please kindly help with suggestions.
thank you.
Jason

Cisco uses the ITU-compliant wavelength grid for its DWDM GBICs. This wavelength assignment is common to practically all telecommunications manufacturers. DWDM provides a completely transparent mechanism for concentrating signals over fiber: the only alteration DWDM performs on the input signal is to amplify the optical power; the actual data contained within the transmission is never altered. As a result, you can use the GBIC with another manufacturer's DWDM gear, with the following caveats:
1. Do not expect any power monitoring, in-band DCC, or network management feature sets of the DWDM equipment to function in a way consistent with using their own transceivers across the DWDM network. You will only be able to pass the traffic; any other functionality relating to the wavelength (GBIC) interaction with the DWDM equipment may or may not function correctly.
2. Power management may have to be performed manually. DWDM manufacturers tend to make their equipment friendly to the other product sets they offer, but they do not typically make any provisions for third-party gear to be interoperable at the physical layer, so you must closely control the power input(s) from the GBIC(s) to and from the DWDM coupler/filters.
I have tested the interoperability of Nortel/Fujitsu, Nortel/Lucent, and Nortel/Cisco in lab environments with much success, so I am fairly confident you can make a Cisco/Lucent configuration work. I would recommend that you trial the configuration in a lab before introducing it into a production environment or committing to a purchase from Lucent.
As far as documentation goes, you are most likely going to have to write your own. Every carrier I have worked for keeps that sort of documentation confidential, and the vendors are not likely to give it to you because they want you to buy their product.
Good Luck.
One last note: interoperability only exists when using a single vendor's DWDM gear with ITU-grid-compliant transceivers/transponders/GBICs/etc. from any vendor. It is not possible to mix and match DWDM gear (i.e. amplifiers, boosters, etc.) from different manufacturers within an optical link.
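To illustrate the manual power management in caveat 2, here is a minimal back-of-the-envelope link-budget check of the kind you end up doing by hand. This is only a sketch: every figure below is a made-up placeholder, not a Cisco or Lucent specification, so take the real numbers from the GBIC and OLS data sheets.

// Rough per-wavelength power check between a GBIC and a third-party DWDM mux (illustrative values only).
public class PowerBudgetCheck {
    public static void main(String[] args) {
        double gbicTxPowerDbm   = 0.0;   // placeholder GBIC launch power (dBm)
        double muxInsertionLoss = 4.0;   // placeholder coupler/filter insertion loss (dB)
        double patchLoss        = 1.0;   // placeholder connector/patch-panel loss (dB)
        double olsMinInputDbm   = -8.0;  // placeholder minimum input the DWDM system expects (dBm)
        double olsMaxInputDbm   = 0.0;   // placeholder maximum (overload) input (dBm)

        double powerAtOls = gbicTxPowerDbm - muxInsertionLoss - patchLoss;
        if (powerAtOls < olsMinInputDbm) {
            System.out.println("Too little power into the DWDM system: reduce loss or add gain.");
        } else if (powerAtOls > olsMaxInputDbm) {
            System.out.println("Too much power: add a fixed attenuator before the coupler/filter.");
        } else {
            System.out.println("Input power " + powerAtOls + " dBm is within the placeholder window.");
        }
    }
}

The same kind of check applies in the receive direction, from the DWDM demux back into the GBIC receiver.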

Similar Messages

  • Oracle 8.1.5 (6?) interoperability with Java/C++ clients and various ORBs

    Hi
    Machine/OS: SGI VWS 540, 256MB RAM, Win2000
    Oracle: Oracle 8i (8.1.6)/Enterprise Edition Release 2
    Problem:
    I want to bind to a CORBA object (pureCORBA bank example) from C++.
    I found in the FAQ an article about interoperability with C++. I downloaded an archive with all the necessary code. But...
    It won't compile. It has been tested on Visibroker 3.2, but now I get compilation errors (GIOP::ObjectKey to CORBA::OctetSequence conversion).
    Is there an update of this article that uses 8i 8.1.6 and Visibroker 3.4 (which ships with 8.1.6)? I compile it using MS VC++ 6.0 SP 4.
    I can make the current files compile, but then get a link error (login.lib uses stuff already defined in msvcrt.lib ...).
    Thank you
    Bart De Lathouwer


  • The difference between ONS-SC-2G and DWDM-SFP

    Our datacenter needs to aggregate 6 wavelengths from the sub-datacenters. All sub-DCs need to communicate with our DC. We have a 6509E, a 15216 MUX/DEMUX and a 15454-SA-HD=.
    Wavelength list:
    DWDM SFP 1535.04 nm SFP (100 GHz ITU grid)
    DWDM SFP 1534.25 nm SFP (100 GHz ITU grid)
    DWDM SFP 1532.68 nm SFP (100 GHz ITU grid)
    DWDM SFP 1531.90 nm SFP (100 GHz ITU grid)
    DWDM SFP 1531.12 nm SFP (100 GHz ITU grid)
    DWDM SFP 1530.33 nm SFP (100 GHz ITU grid)
    Which SFP module do we need to use in our datacenter?
    For example DWDM-SFP-3033= and ONS-SC-2G-30.3=. Which should we choose?
    Which board should they be inserted into and work on?

    Hi, 
    - Do you have a CTP design file for this?
    - Do you already have channels between the DCs?
    These modules have the same specification, but a different hard-coded part number inside.
    Spec for DWDM-SFP-XXXX=:
    http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/dwdm-transceiver-modules/product_data_sheet0900aecd80582763.html
    Table 2. Optical Parameters
    Spec for ONS-SC-2G-XX.X=:
    Table 29. CWDM SFP Modules: Optical Specifications
    Table 30. DWDM SFP Modules: Optical Specifications
    Table 31. DWDM SFP Modules: Optical Performance
    Now you can see that they have the same specifications.
    - If you use DWDM-SFP-XXXX= in an ONS chassis it will work, BUT in CTC you will see a "Provisioning mismatch" alarm.
    - If you use ONS-SC-2G-XX.X= in the ONS it will work fine.
    - If you use ONS-SC-2G-XX.X= in the Catalyst 6k it will work, but you should use DWDM-SFP-XXXX= in the Catalyst 6k and connect it directly to the 15216 MUX/DMX if the distance is short. The CTP design file is needed to check this.

  • Working on ETL tools interoperability using Common Warehouse Model (CWM)

    Hi All,
    It's just a piece of information and not a question.
    I have been working on proving ETL tool interoperability using the Common Warehouse Metamodel (CWM), an OMG standard. The whole concept is to take the metadata out of one ETL tool, say OWB, and put it into a CWM Metadata Repository; this metadata can then be used for building the same project in any other tool, say Informatica, or even in the same ETL tool.
    The main thing in this process is to map each ETL tool to the CWM concepts and then, using model-to-model transformations (technologies like Xtend), set up communication between the different ETL tools.
    Till now I have worked with OWB only. My team and I have extracted all information from an OWB project (of medium complexity: two Oracle modules (schemas) and a few tables, views and mappings with various operators), put it in the CWM repository, and extracted it back from the CWM MDR into OWB itself. We haven't worked with any other ETL tool because no other ETL tool is available to us. We will be working with Pentaho Kettle in the near future and will try to prove the whole process as two-way communication.
    The whole process can be described in the steps below:
    1. Creation of a manual OWB Ecore model (a model representation in the Eclipse Modelling Framework) which captures all dependencies and relationships of OWB objects like Project, OracleModule etc.
    2. Creation of a CWM Ecore model from the Rational Rose mdl which has been provided by OMG on their site.
    3. Generation of Java code (Gen Model) from the above-mentioned Ecore model (needed to create an object from OWB).
    4. Extraction of the project from OWB using the public views which have been exposed by OWB itself. You can refer to http://download.oracle.com/docs/cd/B31080_01/doc/owb.102/b28225/toc.htm for the OWB public views and other APIs.
    5. (Step 4 is actually a part of this step.) Writing Java code which uses a JDBC connection to access the OWB public views and the Ecore model as imported Java files (step 3 was done for this part only); a minimal JDBC sketch follows below. This Java code returns an OWB project object (an instance of the Ecore model) which is used in the further steps.
    6. Writing Xtend code to do a model-to-model transformation from OWB to CWM.
    7. Writing an openArchitectureWare workflow to combine all the steps into one: it takes the output of the Java code (step 5), puts it into the Xtend code (step 6), then takes the output of the Xtend code and gives it to the XMIWriter (an OAW component) to write an XMI which is actually a CWM Ecore model instance.
    8. Saving the above XMI (CWM model instance) to the CWM MDR using Hibernate and Teneo.
    In the same way we can extract metadata from the CWM MDR and put it into OWB. The only problem with OWB is that we cannot persist an OWB object into the OWB repositories directly, because the OWB tables are very cryptic and tough to understand. So for that we have used TCL scripts (OMB Plus scripts) to create a project in OWB from the OWB Ecore instance. You can refer to the above Oracle documentation link for the TCL scripts.
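    As a rough illustration of step 5, here is a minimal JDBC sketch that reads project metadata from the OWB public views. The view and column names (all_iv_projects, project_name) and the connection details are only assumptions for illustration; check the OWB API and Scripting Reference linked above for the exact names exposed by your OWB release.
       import java.sql.Connection;
       import java.sql.DriverManager;
       import java.sql.ResultSet;
       import java.sql.Statement;

       // Sketch only: reads OWB metadata over JDBC from an assumed public view.
       public class OwbPublicViewReader {
           public static void main(String[] args) throws Exception {
               Class.forName("oracle.jdbc.OracleDriver");          // Oracle thin driver on the classpath
               Connection conn = DriverManager.getConnection(
                       "jdbc:oracle:thin:@//owb-host:1521/orcl",   // placeholder connect string
                       "owb_rep_owner", "password");               // placeholder credentials
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT project_name FROM all_iv_projects");
               while (rs.next()) {
                   // Each row would be turned into an instance of the OWB Ecore model here.
                   System.out.println("OWB project: " + rs.getString(1));
               }
               rs.close();
               stmt.close();
               conn.close();
           }
       }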
    Let me know if I can assist you if you are working on the same.
    You can mail me for any queries. My email id is [email protected].
    Thanks,
    Deepak

    Hi
    1. Why do we need to install another standalone HTTP server in a separate home? Where do we use that server?
    DA: The separate HTTP server is for the Workflow Monitor, which is not necessary (though it has some use cases, mind you).
    2. To make the OWB work correctly while using ETL features, do we always need to run Workflow Configuration Assistant, because I wasn't able to generate code from OWB editor after building a mapping while Workflow Configuration Assistant wasn't running.
    DA: Not necessary. What error did you get? Mappings can be designed, deployed and executed without Workflow. Workflow can be used for orchestrating the mappings (i.e. running a bunch of them in a specific order along with other tasks).
    3. Whenever I try to save my work in OWB, I get an error message: Preference.properties (Access is denied). It does save my work, but I don't understand why I am getting this error. It looks like OWB is trying to access some property from the Preferences (Tools menu) but can't.
    DA: It sounds like the directory where you have installed OWB does not have permissions for the OS user you are executing it as. Is the install user different from the execution user? Either run as the installing user, or change the permissions of the directories (grant the executing user write permissions on all directories under owb).
    4. I also get an error while closing the Mapping Editor:
    DA: same issue as 3.
    Cheers
    David

  • Berkeley DB Java Edition (JE) and JRuby Interoperability

    I finally got around to doing a quick test of calling Berkeley DB Java Edition (JE) from JRuby (JRuby is a 100% pure-Java implementation of Ruby).
    Before we get to JE and JRuby you probably want to know the answer to this question: "Why would you want to run Ruby on a JVM?" The answer is threefold:
    1. Ruby Performance. A large amount of effort has been put into tuning contemporary JVMs (e.g. Hotspot, Java 6, etc.) and Ruby programmers (through JRuby) can benefit from these tuning efforts. The JRuby guys have set a goal to make JRuby the fastest Ruby implementation available and Sun is certainly throwing their weight behind that effort.
    2. Portability. JRuby is a Ruby interpreter that runs anywhere a Java 5 JVM runs. You download it as a single tar.gz and it will run pretty much anywhere.
    3. Legacy Code. JRuby makes legacy Java apps and libraries available to Ruby programmers (did you ever think you'd see the word "legacy" next to the word "Java"?).
    JE interoperability with JRuby is important because it means that Ruby programmers now have a simple, embeddable, ACID storage engine (JE) available to them.
    To test this interoperability, I cobbled together a simple Ruby test program which does the following:
    * Opens an Environment, Database, and Transaction
    * Creates 10 records with keys 1..10 and marshaled Ruby Time instances as the corresponding data. This uses the Ruby Marshal package for the data binding and the JE Integer binding on the key side. There's no reason why you couldn't use different marshaling packages or methods for keys and data.
    * Commits the transaction,
    * Performs a Cursor scan to read those 10 records and prints out the Time instances, and
    * Searches for and reads the record with key 5 (an arbitrary key) and prints out the Time instance that is the corresponding data
    By the way, hats off to the JRuby developers: all of this code "just worked", out of the box, and most of my two hour investment was spent learning enough basic Ruby to make it all work. If you already know Ruby and JE, then demonstrating this interoperability would take you all of about 10 minutes.
    This was all done at the "base API" level of JE and no modifications to JE were required. I used Transactions in my code, but there's no reason that you need to. Mark and I have been talking about how to integrate JE's Direct Persistence Layer (DPL) with JRuby and we think it can be done with some remodularization of some of the DPL code. This is exciting because it would provide POJO ACID persistence to Ruby programmers.
    Linda and I have been talking about whether it makes sense to possibly use Ruby as a scripting platform for JE in the future. Given how easy it was to bring up JE and JRuby, this certainly warrants some further thought.
    The Ruby code and corresponding output is shown below. By the way, if you see something that I didn't do "The Ruby Way", feel free to let me know.
    I'd love to hear about your experiences with JE and JRuby. Feel free to email me at charles.lamb at <theobviousdomain dot com>.
    require 'java'
    module JESimple
      require 'date'
      # Include all the Java and JE classes that we need.
      include_class 'java.io.File'
      include_class 'com.sleepycat.je.Cursor'
      include_class 'com.sleepycat.je.Database'
      include_class 'com.sleepycat.je.DatabaseConfig'
      include_class 'com.sleepycat.je.DatabaseEntry'
      include_class 'com.sleepycat.je.Environment'
      include_class 'com.sleepycat.je.EnvironmentConfig'
      include_class 'com.sleepycat.je.OperationStatus'
      include_class 'com.sleepycat.je.Transaction'
      include_class 'com.sleepycat.bind.tuple.IntegerBinding'
      include_class 'com.sleepycat.bind.tuple.StringBinding'
      # Create a JE Environment and Database.  Make them transactional.
      envConf = EnvironmentConfig.new()
      envConf.setAllowCreate(true)
      envConf.setTransactional(true)
      f = File.new('/export/home/cwl/work-jruby/JE')
      env = Environment.new(f, envConf);
      dbConf = DatabaseConfig.new()
      dbConf.setAllowCreate(true)
      dbConf.setSortedDuplicates(true)
      dbConf.setTransactional(true)
      db = env.openDatabase(nil, "fooDB", dbConf)
      # Create JE DatabaseEntry's for the key and data.
      key = DatabaseEntry.new()
      data = DatabaseEntry.new()
      # Begin a transaction
      txn = env.beginTransaction(nil, nil)
      # Write some simple marshaled strings to the database.  Use Ruby
      # Time just to demonstrate marshaling a random instance into JE.
      for i in (1..10)
        # For demonstration purposes, use JE's Binding for the key and
        # Ruby's Marshal package for the data.  There's no reason you
        # couldn't use JE's bindings for key and data or vice versa or
        # some other completely different binding.
        IntegerBinding.intToEntry(i, key)
        StringBinding.stringToEntry(Marshal.dump(Time.at(i * 3600 * 24)),
                                    data)
        status = db.put(txn, key, data)
        if (status != OperationStatus::SUCCESS)
          puts "Funky status on put #{status}"
        end
      end
      txn.commit()
      # Read back all of the records with a cursor scan.
      puts "Cursor Scan"
      c = db.openCursor(nil, nil)
      while (true) do
        status = c.getNext(key, data, nil)
        if (status != OperationStatus::SUCCESS)
          break
        end
        retKey = IntegerBinding.entryToInt(key)
        retData = Marshal.load(StringBinding.entryToString(data))
        puts "#{retKey} => #{retData.strftime('%a %b %d')}"
      end
      c.close()
      # Read back the record with key 5.
      puts "\nSingle Record Retrieval"
      IntegerBinding.intToEntry(5, key)
      status = db.get(nil, key, data, nil)
      if (status != OperationStatus::SUCCESS)
        puts "Funky status on get #{status}"
      end
      retData = Marshal.load(StringBinding.entryToString(data))
      puts "5 => #{retData.strftime('%a %b %d')}"
      db.close
      env.close
    end
    Cursor Scan
    1 => Fri Jan 02
    2 => Sat Jan 03
    3 => Sun Jan 04
    4 => Mon Jan 05
    5 => Tue Jan 06
    6 => Wed Jan 07
    7 => Thu Jan 08
    8 => Fri Jan 09
    9 => Sat Jan 10
    10 => Sun Jan 11
    Single Record Retrieval
    5 => Tue Jan 06

    In my previous post (Berkeley DB Java Edition in JRuby), I showed an example of calling JE's base API layer and mentioned that Mark and I had been thinking about how to use the DPL from JRuby. Our ideal is to be able to define classes in Ruby, annotate those class definitions with DPL-like annotations, and have the JE DPL store them. There are a number of technical hurdles to overcome before we can do this. For instance, Ruby classes defined in JRuby do not map directly to underlying Java classes; instead they all appear as generic RubyObjects to a Java method. Granted, it would be possible for the DPL to fish out all of the fields from these classes using reflection, but presently it's just not set up to do that (hence the modification to the DPL that I spoke about in my previous blog entry). Furthermore, unlike Java, Ruby allows classes to change on the fly (add/remove fields and methods), causing more heartburn for the DPL unless we required that only frozen Ruby classes could be stored persistently.
    On thinking about this some more, we realized that there may be a way to use the DPL from JRuby, albeit with some compromises. The key to this is that in JRuby, if a Java instance is passed back to the "Ruby side" (e.g. through a return value or by calling the constructor for a Java class), it remains a Java instance, even when passed around in JRuby (and eventually passed back into the "Java side"). So what if we require all persistent classes to be defined (i.e. annotated) on the Java side? That buys us the standard DPL annotations (effectively the DDL), freezes the classes that the DPL sees, and still lets us benefit from the POJO persistence of the DPL. All of this can be done without modification to JE or the DPL using the currently available release. I cooked up a quick example that builds on the standard "Person" example in the DPL doc and included the code below.
    require 'java'
    module DPL
      require 'date'
      # Include all the Java and JE classes that we need.
      include_class 'java.io.File'
      include_class 'com.sleepycat.je.Environment'
      include_class 'com.sleepycat.je.EnvironmentConfig'
      include_class 'com.sleepycat.persist.EntityCursor'
      include_class 'com.sleepycat.persist.EntityIndex'
      include_class 'com.sleepycat.persist.EntityStore'
      include_class 'com.sleepycat.persist.PrimaryIndex'
      include_class 'com.sleepycat.persist.SecondaryIndex'
      include_class 'com.sleepycat.persist.StoreConfig'
      include_class 'com.sleepycat.persist.model.Entity'
      include_class 'com.sleepycat.persist.model.Persistent'
      include_class 'com.sleepycat.persist.model.PrimaryKey'
      include_class 'com.sleepycat.persist.model.SecondaryKey'
      include_class 'com.sleepycat.persist.model.DeleteAction'
      include_class 'persist.Person'
      include_class 'persist.PersonExample'
      # Create a JE Environment and Database.  Make them transactional.
      envConf = EnvironmentConfig.new()
      envConf.setAllowCreate(true)
      envConf.setTransactional(true)
      f = File.new('/export/home/cwl/work-jruby/JE')
      env = Environment.new(f, envConf);
      # Open a transactional entity store.
      storeConfig = StoreConfig.new();
      storeConfig.setAllowCreate(true);
      storeConfig.setTransactional(true);
      store = EntityStore.new(env, "PersonStore", storeConfig);
      class PersonAccessor
        attr_accessor :personBySsn, :personByParentSsn
        def init(store)
          stringClass = java.lang.Class.forName('java.lang.String')
          personClass = java.lang.Class.forName('persist.Person')
          @personBySsn = store.getPrimaryIndex(stringClass, personClass)
          @personByParentSsn =
            store.getSecondaryIndex(@personBySsn, stringClass, "parentSsn");
        end
      end
      dao = PersonAccessor.new
      dao.init(store)
      personBySsn = dao.personBySsn
      person = Person.new('Bob Smith', '111-11-1111', nil)
      personBySsn.put(person);
      person = Person.new('Mary Smith', '333-33-3333', '111-11-1111')
      personBySsn.put(person);
      person = Person.new('Jack Smith', '222-22-2222', '111-11-1111')
      personBySsn.put(person);
      # Get Bob by primary key using the primary index.
      bob = personBySsn.get("111-11-1111")
      puts "Lookup of Bob => #{bob.name}, #{bob.ssn}"
      children = dao.personByParentSsn.subIndex(bob.ssn).entities()
      puts "\nRetrieving children of Bob"
      while (true) do
        child = children.next()
        break if child == nil
        puts "#{child.name}, #{child.ssn}"
      end
      children.close()
      store.close
      env.close
    end
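    For reference, the Java-side persistent class that this JRuby code binds to would look roughly like the standard "Person" example from the DPL documentation. The sketch below is an assumption reconstructed from the Ruby calls above (the constructor arguments, the ssn primary key and the parentSsn secondary key), not the author's exact source.
       package persist;

       import com.sleepycat.persist.model.DeleteAction;
       import com.sleepycat.persist.model.Entity;
       import com.sleepycat.persist.model.PrimaryKey;
       import com.sleepycat.persist.model.Relationship;
       import com.sleepycat.persist.model.SecondaryKey;

       // Sketch of the Java-side entity assumed by the JRuby DPL example above.
       @Entity
       public class Person {

           @PrimaryKey
           String ssn;

           String name;

           @SecondaryKey(relate = Relationship.MANY_TO_ONE,
                         relatedEntity = Person.class,
                         onRelatedEntityDelete = DeleteAction.NULLIFY)
           String parentSsn;

           public Person(String name, String ssn, String parentSsn) {
               this.name = name;
               this.ssn = ssn;
               this.parentSsn = parentSsn;
           }

           private Person() {}  // the DPL needs a default constructor for deserialization

           // Getters so that JRuby's bob.name / bob.ssn calls resolve to Java methods.
           public String getName() { return name; }
           public String getSsn() { return ssn; }
       }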

  • Mac/Windows interoperability

    Dear nice people
    I am attempting to optimise a small business environment (see specifications below) containing a mix of Macs and PCs, currently using Windows Server 2012 on a Gbit Ethernet (1000BASE-T) infrastructure. The Macs are not being backed up. I am considering locating a FireWire 800, RAID-based mass storage device, such as the G-Technology 4TB G-RAID Professional High-Performance Dual-Drive Hard Drive, centrally to the three Macs, in addition to the Windows Server, because:
    TimeMachine would backup Applications, folders, files and settings on each Mac allowing simple full restoration from a clean OS install
    The storage device is a plug-and-play device for these Macs and the staff are not IT literate
    TimeMachine can be configured easily on their Macs to such a device by the staff
    These Macs do not have Thunderbolt
    Connectivity is easy via Firewire 800
    If the staff just use the device for TimeMachine backups only, that is a good result. They may find that FW800 is faster than Gbit Ethernet on what appears to me to be a slow Windows server anyway, so staff may use the storage device to store Mac-critical business files instead of the Windows server: their choice
    If I assume that the above is reasonable, I have some questions arising. If you are able to answer one but not all questions, please clearly state which question you are addressing
    Do I need to be concerned if the staff create an FW800 ring instead of a daisy chain when connecting to each other and/or the storage device?
    Is an FW800 ring actually desirable - i.e. faster?
    How can I address the backup needs of the two isolated Macs (see specifications below) which would be more than 5m from the storage device? (My understanding is that FW800 cables are generally 2m and that the specification supports 3m max).
    (in 3 above) I realise I could specify an FW800 hub. Can you recommend such a hub for this scenario?
    Each Mac would have a Gbit Ethernet connection to Windows and an FW800 connection to the storage device. Are multiple network connections such as this supported on OS X (specifically 10.8.4)?
    Do I have any hardware interoperability issues given the age of some of the Macs? Specifically, is the FW800 specification the same on the 2008 and 2012 Macs?
    Thank you in advance
    Grytr
    Specifications
    Hardware & OS
    8 x PC
    1 x Mac Pro 5,1 (mid 2012) 16GB 10.8.4
    1 x Mac Pro 3,1 (early 2008) 10GB 10.8.4
    1 x iMac 8,1 (20 inch early 2008) 2GB 10.8.4
    The above three Macs are within 3m of each other and all Macs and PCs connected to Windows Server using 1000BASE-T
    2 x iMac 8,1 (24 inch early 2008) 2GB 10.8.4 approx 5m or more from the above Macs connected to Windows Server using 1000BASE-T.
    Applications
    A mix of native Mac applications such as MS Office, Sketchup Pro 2013, Adobe CS6 and VectorWorks 2013 (with very large models), plus Windows 7 Professional 64-bit applications running under Parallels Desktop 9 for Mac, such as SAGE 50 Accounts Professional and Rental Desk NX. At the moment, business-critical PC & Mac files are stored on the Windows Server, but the Macs are not backed up.

    All options are easy to remove, so don't worry about that. Also, just in case you are worried about viruses, spyware, etc. they will not spread to OS X if you get infected.
    You really have 3 options:
    Boot Camp
    The installer does all the work for you, installs the drivers when Windows is finished installing, and it's generally very easy. The nice thing about this option is you have a real version of Windows. When you reboot into Windows, you ARE running Windows, just like any other laptop running Windows. The downside is that you have to reboot every time you need a Windows app.
    VMware Fusion
    This is a great option as well. Also easy to install. If you just use Fusion, it creates an entire "Windows machine" as a single file within OS X. If it gets infected or there is an issue, you can just delete the file. You can also make a backup of the "machine" and restore it if there are problems. Also, you can use Fusion WITH Boot camp, and get the best of both worlds. This is what I do. That way, if you need to boot into Windows, you restart and do it. If you just need to run a few programs, you can use Fusion and run them from inside of OSX, all on the same installation of Windows. Fusion just uses your Boot camp partition as its machine.
    Parallels
    Pretty much the same as Fusion, and you can use the Boot camp partition for this one as well.
    I would really recommend using Boot camp and Fusion together, but if you don't see any need to actually boot into Windows, and you only need a few programs now and then, Fusion will work fine.
    Updates are still necessary, by the way. The windows install is just as vulnerable as any other Windows machine, unfortunately, but again, it won't spread to OSX.
    All three options are very easy to remove. Boot camp has an uninstall routine that will wipe out Windows and repartition the hard drive back to full size in about 2 or 3 minutes!!
    And yes, you need a full version of Windows.

  • Issue with Java - PHP interoperability

    Hi,
    There are some converters written in PHP that can take raw Wikipedia data and output good HTML. I wanted to make use of these in my Java code.
    So in my Java Web App, I wanted to run the PHP parser, get hold of the InputStream and push it to my ServletOutputStream.
    Code Snippet:
       String command = "php testparser.php Anarchism.wikimarkup";
       Process proc = Runtime.getRuntime().exec(command);
       InputStream in = proc.getInputStream();
       InputStreamReader isr = new InputStreamReader(in);
       BufferedReader br = new BufferedReader(isr);
       String line = null;
       while ((line = br.readLine()) != null) {
         System.out.println(line);
       }
    But the problem here is that the PHP process never stops and hence the buffer never ends. The program is waiting in an infinite loop in readLine().
    Please let me know if anyone has tried this, and what's a better way to handle interoperability between PHP and Java.
    Thanks,
    Phani

    Phanikumar_Bhamidipati wrote:
    > Yeah, I had a look at the document. But as per my understanding, the way the PHP engine runs is different from normal execs.
    I don't see how it can 'run differently', and in my experience it doesn't. PHP sends output to stdout and stderr and reads from stdin. When PHP terminates it will close stdout and stderr and, if you have followed the recommendations in the reference, your readLine() will return 'null'.
    > Because the same code ran fine when I automated unzipping a set of files using the "bunzip2" command.
    If you read the article, it explains a possible reason for this, BUT until you implement the recommendations you will not know what is wrong.
    > I tried using the Process.waitFor() method as well, but the result is the same (infinite loop).
    This almost certainly has nothing to do with Process.waitFor() and probably everything to do with buffers filling (probably stderr).
    Until you post the code with the recommendations implemented that exhibits the same blocking problem, it is a waste of time anyone responding further.
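    For what it's worth, here is a minimal sketch of the approach those recommendations boil down to: drain stdout and stderr on separate threads so the PHP process can never block on a full pipe buffer. The script and file names are just the placeholders from the original question.
       import java.io.BufferedReader;
       import java.io.IOException;
       import java.io.InputStream;
       import java.io.InputStreamReader;

       public class PhpExecSketch {
           // Drains one stream to exhaustion on its own thread.
           static Thread drain(final InputStream in, final StringBuilder sink) {
               Thread t = new Thread(new Runnable() {
                   public void run() {
                       try {
                           BufferedReader br = new BufferedReader(new InputStreamReader(in));
                           String line;
                           while ((line = br.readLine()) != null) {
                               sink.append(line).append('\n');
                           }
                       } catch (IOException e) {
                           e.printStackTrace();
                       }
                   }
               });
               t.start();
               return t;
           }

           public static void main(String[] args) throws Exception {
               Process proc = Runtime.getRuntime().exec(
                       new String[] {"php", "testparser.php", "Anarchism.wikimarkup"});
               StringBuilder out = new StringBuilder();
               StringBuilder err = new StringBuilder();
               Thread outThread = drain(proc.getInputStream(), out);  // PHP's stdout
               Thread errThread = drain(proc.getErrorStream(), err);  // PHP's stderr
               int exitCode = proc.waitFor();  // safe now: both pipes are being emptied
               outThread.join();
               errThread.join();
               System.out.println("exit=" + exitCode);
               System.out.print(out);          // the HTML produced by the PHP parser
           }
       }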

  • Shared Calendar Interoperability (2013 / 2007)

    Hello,
    We have two Exchange servers in our Organization, a 2007 where most mailboxes reside, and a 2013 where we are slowly doing a one off user migration here and there to test the waters so to speak.
    We have users that constantly have/had problems with Outlook freezing on send, or other connectivity issues. These are our first target people to move to the new server. I have one user who I moved yesterday who is trying to share a calendar she calls "Managers Schedules" with other managers. She created this calendar and, keep in mind, she is on the 2013 server now. She shared it with reviewer permissions to 4 other employees, but those 4 others are still on the 2007 server.
    The other people are getting an email with links to view the calendar, but they say that they cannot open it. The person sending this is also getting a message that they should publish the calendar. When they click yes to that, IE opens up to Outlook Web Access???? I said let's just see what happens if you log into OWA. She did, and it gave her some options to make the calendar public or private, and it gave her two links to copy. Now I was able to open the one link... it downloaded an ical file, however I am on the 2013 server. No one else can open the links.
    So long story short, is there any kind of interoperability between shared calendars on Exchange 2007 and 2013 (and vice versa)? Or am I going to have to move these 4 other users to the new server, which in turn will likely cause a chain reaction since they are managers and have access to their employees' email inboxes as additional mailboxes in Outlook.

    Hi,
    We can't directly share a calendar which is not the default calendar for a user. Generally, we would share a custom calendar with the following steps:
    Right-click the custom calendar, then click Share > E-mail Calendar. The calendar is then added as an attachment to a message.
    When the recipients receive the message and click Open this Calendar, a window is prompted asking whether to add this Internet Calendar to Outlook. You should only open calendars from sources you know and trust.
    Please e-mail the calendar in Outlook Online mode using the steps above, then check whether the issue persists.
    Regards,
    Winnie Liang
    TechNet Community Support

  • What Gear do i need to enable WDM (DWDM) on a Dark Fiber?

    We have dark fiber between 2 DCs and would like to enable WDM on it. What Cisco optical gear do I need for this? We would also like to enable encryption on this link as well. TIA

    You can do "simple" CWDM with the Cisco WDM series of CWDM Passive devices. Those can support 4 or 8 channels.
    They depend on you "feeding" them signals from special transceiver models that modulate the light on a wavelength compliant with the ITU-50GHz channels listed in the DWDM SFP+ data sheet Leo mentioned and linked to.
    The compatibility matrix tells you what Cisco gear is compatible with those transceivers. If you're using other 3rd party gear, they have equivalent support matrices. The bottom line is you need to match wavelengths to what the mux is expecting.
    If you move up to the ONS 15454 line you have more flexibility - transponder cards etc that will transform standard SFP+ (1550 nm 10 Gbps signalling on fiber) to the ITU channels etc. You're talking serious money there though - hundreds of thousands of US$. Cisco will be happy to send a sales engineer over and build a nice detailed bill of materials if that's in your budget. 

  • MST PVSTP interoperation

    Hello,
    I've read the "understanding MSTP article" from cisco's website and I have several uncertainties.
    These have deepened even more after performing several experiments.
    I've setup a test scenario with the following configuration:
    C1 Cisco PVST ---- C2 MST ---- C3 MST non-cisco switch (doesn't know PVSTP)
    C1 has vlans 1-10 for which it must necessarily be root bridge
    C2 must be root bridge for all other vlans
    C3 will transport 1-10 + the other vlans
    C1 can not be migrated to MST while C2 if possible should interoperate with the non-cisco MST enabled switch.
    What I have done:
    Setup 1:
    C2 mst root bridge for all vlans
    C3 learnt of C2 being the root bridge
    C1 the PVST also learnt this (as far as I've read all communication with the PVST is done via the IST-CST instance)
    Although this worked just fine, unfortunately it wasn't what I was searching for.
    Setup 2:
    C1 -- C2
    C1 lower priority for vlans 1-10 (disabled spanning tree on the other vlans / or removed them from the trunk to C2 if not required to be present there)
    C2 reported:
    SPANTREE-2-ROOTGUARD_BLOCK: Root guard blocking port ...
    And the port was shown as blocked in both the IST0 and the other MSTIs.
    I've also tried the alternate configuration (not recommended) from:
    http://www.cisco.com/warp/public/473/147.html#alternate_configuration
    without any luck.
    Disabling PVSTP on the C1 interface to C2 of course made the C2 port to be removed from blocking as expected.
    I have several questions in regard to this:
    a) Why are both IST0 and the boundary ports for the MSTIs placed in Blocking ?
    b) According to that article shouldn't there be a way to have the PVST be root bridge for all the instances present on it ?
    c) What alternate setup could there be possible to achieve the redundancy desired while maintaining C1 root bridge for vlans 1-10
    C2 root bridge for the others
    C3 interoperability with C2 (C3 only knows MST and RSTP)
    Any advice would be greatly appreciated.
    Thanks,
    Mihai

    Hi Mihai,
    The code was designed especially to prevent what you are trying to do:-( The problem is that C2 is only running one instance at the boundary to C1, the CIST. So for each of its ports leading to C1 it can only block all the vlans or forward all the vlans.
    If C1 is root for certain vlans, C2 will have to block one of its ports to C1 for those vlans. This means that C2 can only block ALL its vlans to C1, considering the rule stated above.
    On the other hand, if C2 is the root for the CIST, it will need to put both of its ports to C1 in forwarding, which means putting ALL the vlans in forwarding on both ports.
    You clearly see the contradiction, and that's what the inconsistency you are getting is trying to show.
    Why do you need C1 to be root for some vlans? Is that for some load balancing issues? Because you can achieve load balancing without having C1 being the root.
    The only solution to your problem seems to have C2 run PVST considering your constraints. It would be much better if you could move C1 to MST and have it participate in the same region as C2 of course...
    Regards,
    Francois

  • IP over OTN or IP over DWDM?

    These are the two main trends in IP core network construction.
    The core difference between them is how to handle TDM services or other small-granularity services in the electrical layer: in the router, or in the OTN?
    What is your opinion? I see many experts and communication enthusiasts here, so I beg your answers.

    The core difference between them is how to handle TDM services or other small-granularity services in the electrical layer: in the router, or in the OTN?
    As per my knowledge, the difference between IP over DWDM and IP over OTN is which method can carry all types of traffic (voice, data, video, etc.) in an efficient way, not just TDM services or small granularity (a router cannot beat SONET/SDH boxes in handling TDM traffic).
    Most of the traffic nowadays is IP, not TDM, whereas OTNs are good for TDM.
    The question is what can carry packets efficiently without too many layers in between: whether to add IP features to the OTN or transport features to the router.
    I have no answer as to who will win, IP in the router or the OTN, but IP over DWDM (in the router) will not be the best solution. This method is good for point-to-point high-speed connections; things change if you have a lot of traffic to cross-connect.
    In another post you asked how you can separate traffic onto different wavelengths. What do you think: will your router act as a cross-connect to segregate the different traffic? And what if you need only 5 Gig to point A and 5 Gig to point B, will the router groom that traffic?

  • Cost of  interoperations times

    hi expert,
    My situation is this: I create one routing for FG and enter a standard queue time for each operation. When I create a production order with this routing and calculate the cost, I think the time I entered should be included in costing, but the system does not include the standard queue time in the process time or in the cost of each activity.
    (e.g. my machine speed is 100 pcs per minute and the standard queue time is 90 minutes. When I create a production order with a quantity of 100 pcs and calculate the cost, the system shows the process time as 1 minute, not 91 minutes.)
    As above, I want to know how to get the standard queue time or other interoperation times into costing.
    regards.
    kittisak.

    Hi,
    Queue times defined this way are used only for scheduling and lead-time calculation, not for costing.
    If you want to make these times relevant to costing, then you need to define them either as a standard value in a processing operation with its proper formula, or as an operation by itself, depending on what best suits the situation in the shop.
    Regards,
    Mario

  • Have you heard about the latest addition to our SAP Microsoft Interoperability Suite?

    As you have seen from the coverage we did at the Microsoft SharePoint Conference earlier this month, we introduced a new interoperability solution named Power BI Connectivity to SAP BusinessObjects BI.
    This is a great solution that makes it possible for business users to continue to work in their familiar environment, such as Microsoft Excel, and access trusted enterprise data through an SAP BusinessObjects universe. Business users can access data coming from a variety of data sources, including SAP systems.
    Read Deepa Sankar's great blog on SCN introducing Power BI and "be in the know".
    Enjoy! and let us know what you think.

    Yes, read Deepa Sankar's blog on SCN and learn all about it.

  • Interoperability of Fusion Middleware Products.

    Hi all,
    I have following scenario where I am looking for some good suggestions.
    I have a cluster of 4 WebLogic Server 10.3.5 servers where I need to install Oracle Identity Manager 11.1.2 (11gR2) along with OBIEE 11.1.1.6 (11gR1) under the same Middleware home.
    Following are the points to be considered for this installation.
    1- As per the documentation provided by oracle for interoperability
    http://docs.oracle.com/cd/E27559_01/doc.1112/e29569/interop_11g.htm#BCEJEFAF
    Section 3.3.1, “When installing Oracle Fusion Middleware products, be sure that each Middleware home you create contains only products that are at the same version or patch set.”
    The reason is given as *“Each product has its own maintenance schedule and it is possible that future interoperability issues could result.”*
    Further in section 3.4.2 it is mentioned *“When you configure a domain, ensure that all products configured within the domain are at the same patch set.* For example, do not configure Oracle Identity and Access Management 11g Release 1 (11.1.1.5.0) in the same domain with Oracle SOA Suite 11g Release 1 (11.1.1.6.0).”
    2- Since SOA is still having version 11.1.1.6.0(11gR1) and is required for the IDM it is mentioned that “One exception to this rule is the installation of Oracle SOA Suite 11g Release 1 (11.1.1.6.0) in the same Middleware home as Oracle Identity Manager 11g Release 2 (11.1.2). Oracle Identity Manager is one of the Oracle Identity and Access Management products. It requires Oracle SOA Suite.”
    3- Please also see section 3.4.3 of the same document, which states: "Oracle often releases Oracle Identity Management and Oracle Identity and Access Management products on a schedule different from the schedule for the other Oracle Fusion Middleware products. As a result, it is common to use a different release or patch set of an Oracle Identity Management or Oracle Identity and Access Management product with your Oracle Fusion Middleware products, as long as they are not configured within the same domain. For example, you can use Oracle Identity and Access Management 11g Release 1 (11.1.1.5.0) products with your Oracle SOA Suite 11g Release 1 (11.1.1.6.0) products, if they are in separate domains. In these scenarios, the Oracle Identity and Access Management products are typically installed on a separate host and in a separate Middleware home."
    4- OBIEE 11.1.1.6.0 also requires components from the SOA 11.1.1.6.0 e.g OWSM Policy Manager.
    Now, for the time being, as per the document it is possible for me to install OBIEE 11.1.1.6.0 (11gR1), SOA 11.1.1.6.0 (11gR1) and IDM 11.1.2 (11gR2) under the same Middleware home in separate domains. The issue is how to avoid compatibility issues in the future, since OBIEE, SOA and IDM each have an independent upgrade cycle and patch release schedule. If any patch/upgrade is released and required for any of the above products, it might put the remaining products in an incompatible state, rendering them dysfunctional.

    Will the 521 APs work with the 2112 controller?
    No.
    Are there any cheaper APs than the 1140AG that will work with the 2112 controller?
    You could try the 1130.  1140 supports 802.11n while 1130 is a/b/g only.
    Is the 1240AG a good choice for a non-climate controlled warehouse environment?
    The 1240, like the 1130, supports a/b/g only while the 1250 supports a/b/g/n.  The newer 1260 is controller-based only.  The 1240, 1250 and 1260 use external antennae that's an OPTION.

  • Windows Communcation Foundation - JSR 172 Web Stub - interoperability

    Hi! I am just playing with WCF (Windows Communication Foundation) and the JSR 172 web stub generation utility of WTK 2.5beta. Is there a known problem with the import expression in XML?
    WCF generates:
    ?wsdl -> a web service description that has import references to
    <?xml version="1.0" encoding="utf-8" ?>
    <wsdl:definitions name="EchoService" targetNamespace="http://tempuri.org/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="http://tempuri.org/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:i0="http://schemas.microsoft.com/ws/2005/02/mex/bindings" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex">
      <wsdl:import namespace="http://schemas.microsoft.com/ws/2005/02/mex/bindings" location="http://localhost:8080/echo?wsdl=wsdl0" />
      <wsdl:types>
        <xsd:schema targetNamespace="http://tempuri.org/Imports">
          <xsd:import schemaLocation="http://localhost:8080/echo?xsd=xsd0" namespace="http://tempuri.org/" />
          <xsd:import schemaLocation="http://localhost:8080/echo?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/" />
        </xsd:schema>
      </wsdl:types>
      <wsdl:message name="IEchoService_Echo_InputMessage">
        <wsdl:part name="parameters" element="tns:Echo" />
      </wsdl:message>
      <wsdl:message name="IEchoService_Echo_OutputMessage">
        <wsdl:part name="parameters" element="tns:EchoResponse" />
      </wsdl:message>
      <wsdl:portType name="IEchoService">
        <wsdl:operation name="Echo">
          <wsdl:input wsaw:Action="http://tempuri.org/IEchoService/Echo" message="tns:IEchoService_Echo_InputMessage" />
          <wsdl:output wsaw:Action="http://tempuri.org/IEchoService/EchoResponse" message="tns:IEchoService_Echo_OutputMessage" />
        </wsdl:operation>
      </wsdl:portType>
      <wsdl:service name="EchoService">
        <wsdl:port name="MetadataExchangeHttpBinding_IEchoService" binding="i0:MetadataExchangeHttpBinding_IEchoService">
          <soap12:address location="http://localhost:8080/echo" />
          <wsa10:EndpointReference>
            <wsa10:Address>http://localhost:8080/echo</wsa10:Address>
          </wsa10:EndpointReference>
        </wsdl:port>
      </wsdl:service>
    </wsdl:definitions>
    together with the separately retrieved documents at ?wsdl=wsdl0, ?xsd=xsd0 and ?xsd=xsd1. Has anyone experienced the same problem and knows a solution?
    Henning

    I have gotten a step further! The emulator had to be configured to run in the "secure" domain (as mentioned by some other people here), otherwise an HTTP response "400 bad request (invalid header name)" was produced. The J2ME web service is interoperable with WCF basicHttpBinding (without debugging, because debugging inserts unparseable SOAP code into the HTTP response).
