JClient and 2 or 3 tiers

Hello,
We are planning a J2EE project and have decided to use JClient, because its look and feel is close to Oracle Forms.
My question is:
When I create a JClient application and run it, the iAS does not seem to be needed - the business components seem to live inside the JClient application itself. We completed several tutorials, but none of them said whether we can use a 3-tier architecture: 1) database, 2) iAS with the business components, and 3) JClient.
Can anyone help me?
Thanks
Lutz

Your ADF Business Components can be deployed locally or remotely (into a J2EE container), depending on your configuration. You will find all the necessary information in the Oracle9i JDeveloper Handbook, in Oracle JDeveloper 10g: Empowering J2EE Development, and on OTN as well.

Similar Messages

  • JClient and JBoss

    Hi all
    Can anybody tell me what parameters I must put in my bc4j.xcfg file to connect a standalone JClient form to a BC4J application module successfully deployed in a JBoss server?
    JDeveloper version 9.0.3.1
    JBoss version 3.0.6

    Your ADF Business Components can be deployed locally or remotely (into a J2EE container), depending on your configuration. You will find all the necessary information in the Oracle9i JDeveloper Handbook, in Oracle JDeveloper 10g: Empowering J2EE Development, and on OTN as well.
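For what it's worth, a standalone JClient connects to a remotely deployed application module through the connection properties in bc4j.xcfg. The fragment below is only a sketch: the element names follow the JDeveloper 9.0.3-era format as I recall it, and the JNDI factory/URL values are the standard JBoss 3.x ones; adjust module names, package paths, host and port to your own deployment.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<BC4JConfig>
  <AppModuleConfigBag ApplicationName="mypackage.MyAppModule">
    <!-- Remote configuration: the client looks the module up over JNDI -->
    <AppModuleConfig name="MyAppModuleJBoss">
      <DeployPlatform>EJB</DeployPlatform>
      <AppModuleJndiName>mypackage.MyAppModule</AppModuleJndiName>
      <java.naming.factory.initial>org.jnp.interfaces.NamingContextFactory</java.naming.factory.initial>
      <java.naming.provider.url>jnp://yourserver:1099</java.naming.provider.url>
    </AppModuleConfig>
  </AppModuleConfigBag>
</BC4JConfig>
```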

  • Jclient and ias

    I made a local JClient application with BC4J (no EJB), which works fine. Now I want to deploy the BC4J (EJB) parts to iAS; this works too.
    But now, every time I try to compile the JClient project, JDeveloper builds some JAR files of the BC4J components (this takes some minutes). Is this necessary?
    When building the iAS deployment, JDeveloper also generates an application module client file (I have some client methods). If I try to use this client file in my JClient project, it throws an exception. How do I call these client methods from my JClient project?
    The behavior of the JClient application is totally different in local mode and iAS mode. I am getting errors like "record modified by another user". Is there a whitepaper on this topic?
    Do I need an iAS for JClient apps, or can I run them on a separate machine without an iAS/OC4J, the way they run inside JDeveloper?
    thanks,
    edwin

    > I made a local JClient application with BC4J (no EJB), which works fine. Now I want to deploy the BC4J (EJB) parts to iAS; this works too. But now, every time I try to compile the JClient project, JDeveloper builds some JAR files of the BC4J components (this takes some minutes). Is this necessary?

    When you build a client project for an application in 3-tier mode, by default a dependency is set up between the client project and the common profile of the middle-tier project. The common profile is used to generate the JAR containing the files required by the client. Every time you compile the client project, this JAR is regenerated. As an alternative, you can manage this dependency yourself: in the client project settings dialog, uncheck the common profile dependency. Instead, add a new library to the client project that points to the JAR produced by the common profile.

    > When building the iAS deployment, JDeveloper also generates an application module client file (I have some client methods). If I try to use this client file in my JClient project, it throws an exception. How do I call these client methods from my JClient project?

    The client file is a proxy that implements the interface containing the exported application module methods. The class is instantiated internally by the framework. You should be able to call your methods by casting the application module reference to the custom application module interface. See the OnlineOrders\BatchClient sample.

    > The behavior of the JClient application is totally different in local mode and iAS mode. I am getting errors like "record modified by another user". Is there a whitepaper on this topic?

    What kind of exceptions? Can you elaborate?

    > Do I need an iAS for JClient apps, or can I run them on a separate machine without an iAS/OC4J, the way they run inside JDeveloper?

    You don't need to deploy the EJB (actually the .ear file containing the EJB) to an iAS instance. You can use either the embedded OC4J instance or an external standalone instance. Make sure to use the right configuration in the JClient application. I don't have the help system handy, but I believe these topics are covered in the documentation.
    Dhiraj
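To illustrate the casting pattern Dhiraj describes, here is a self-contained sketch. Note this is not the real oracle.jbo API: the interface and factory names below are hypothetical stand-ins for the framework-generated types.

```java
// Stand-ins for the framework types; in real BC4J code these would be
// oracle.jbo.ApplicationModule and your exported custom interface.
interface ApplicationModule { }

interface OrdersModule extends ApplicationModule {
    double computeOrderTotal(int orderId); // an exported client method
}

// The framework-generated client proxy implements the custom interface.
class OrdersModuleProxy implements OrdersModule {
    public double computeOrderTotal(int orderId) { return orderId * 10.0; }
}

public class CastingDemo {
    // The framework hands back the base type...
    public static ApplicationModule createRootApplicationModule() {
        return new OrdersModuleProxy();
    }

    public static void main(String[] args) {
        ApplicationModule am = createRootApplicationModule();
        // ...so cast the reference to the custom interface to call your methods.
        OrdersModule orders = (OrdersModule) am;
        System.out.println(orders.computeOrderTotal(3)); // prints 30.0
    }
}
```

The key point is only the cast in main: the object you receive already implements your custom interface, even though the factory's declared return type does not expose it.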

  • JClient and SCROLLABLE problem.

    I'm running my JClient application against a large database.
    Starting the application takes a long time.
    I set my range page size to a limited value and expected SCROLLABLE (which is the default) to work fast, but it's very slow.
    Only setting the access mode of the view objects to RANGE_PAGING makes it run considerably faster.
    Is this a bug in the SCROLLABLE access mode?
    Am I forced to use RANGE_PAGING?
    Tks
    Tullio

    Tullio,
    can you file it as a bug so we can have development look at it? You don't provide a JDeveloper version number, which makes it hard to say whether it is a bug or not. Of course, setting the correct fetch size on the model helps the initial screen startup performance.
    Frank
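As background on why the access modes behave so differently: scrollable mode keeps every row fetched so far in the client-side cache, while range paging materializes only the rows of the current page. A toy sketch of the paging idea in plain Java (this is deliberately not the BC4J API; the class and names are illustrative):

```java
import java.util.Arrays;
import java.util.List;

// Toy illustration of range paging: only the current page of rows is
// materialized, instead of scrolling through (and caching) all rows.
public class RangePager {
    private final List<String> backingRows; // stands in for the query result set
    private final int pageSize;

    public RangePager(List<String> backingRows, int pageSize) {
        this.backingRows = backingRows;
        this.pageSize = pageSize;
    }

    /** Fetch only the rows of page n (0-based); nothing else is cached. */
    public List<String> fetchPage(int n) {
        int from = Math.min(n * pageSize, backingRows.size());
        int to = Math.min(from + pageSize, backingRows.size());
        return backingRows.subList(from, to);
    }

    public static void main(String[] args) {
        RangePager pager = new RangePager(
            Arrays.asList("r0", "r1", "r2", "r3", "r4"), 2);
        System.out.println(pager.fetchPage(1)); // prints [r2, r3]
    }
}
```

With a large table, the difference between caching everything and fetching one page at a time is exactly the slowdown described above.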

  • JClient and Toplink

    I created a simple JClient form with a Table and a Navigation Bar. Business service is being managed by TopLink.
    It works well with the small demo table (EMP), but when I changed to a table that contains thousands of rows, the table and navigation bar are not displayed. (I think it tries to retrieve all the rows.)
    Any workaround? Or can JClient only be implemented with BC4J? (Is "model/business service independence" just a marketing term?)
    Thank you
    Aamir

    Muhammad,
    > guess you are talking about JDeveloper 10g, are you?
    Yes.
    > Is the table shown once all the records are retrieved?
    Yes.
    > You may be right that JClient tries to fetch all rows at once, since it is the business service that determines the number of rows fetched at a time. I'll follow up on this. However, it may take some time because I'll be out on vacation for two weeks.
    Isn't there anyone else who can help?
    Aamir

  • VSM-BC4J IN JDEV9042:compile error at JClient and run error at JSP

    compile error at JClient:
    C:\VSM-BC4J\src\oracle\otnsamples\vsm\client\admin\profilePanal.java
    --! Warning(424,18): method setNextFocusableComponent(java.awt.Component) in class javax.swing.JComponent has been deprecated
    --! Warning(425,20): method setNextFocusableComponent(java.awt.Component) in class javax.swing.JComponent has been deprecated
    --! Warning(428,21): method setNextFocusableComponent(java.awt.Component) in class javax.swing.JComponent has been deprecated
    --! Warning(430,13): method setNextFocusableComponent(java.awt.Component) in class javax.swing.JComponent has been deprecated
    run error at JSP:
    503 Service Unavailable
    Servlet error: Parsing error processing resource path

    Solution: don't use a runtime expression for the id attribute.
    The id attribute is used to declare
    - an attribute in scope
    - a scriptlet variable on the page
    These are only available for the duration of the <logic:iterate> tag
    The "id" is only used in your programming. You don't need to make it dynamic.
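Assuming the tag in question is the Struts <logic:iterate> mentioned above, a literal id (rather than a runtime expression) would look like this; the bean name and property are placeholders:

```jsp
<%-- Declare the id as a plain literal; it names the scripting variable --%>
<logic:iterate id="row" name="employeeList">
  <bean:write name="row" property="ename"/>
</logic:iterate>
```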

  • JClient and Table Handling?

    In a scenario where you have a table and you want to process based upon the user's selection of a row, what's the best way to accomplish this? I've been looking at the events, and there doesn't seem to be one for a row selection. Does this mean that I'm in the wrong place to be looking for this, or do I have to code my own listener to handle when a row is selected? What's best practice for this? And, regardless of where/how you handle it, how do you get information about the database row selected?

    Forgive my ignorance, but this is my first foray into the Swing side of JDev.
    I don't see a double click event. What I've done is clicked on the jTable object to focus it in the property inspector, then clicked on the Events tab on the property inspector. I see a mouseClicked event, but nothing about a double click. Am I doing this in the wrong place? Or is the event called something I don't recognize?
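In plain Swing (which is what a JClient table ultimately is), row selection is reported by the table's ListSelectionModel rather than by a dedicated event, and a double click is detected by checking getClickCount() in a mouse listener. A minimal self-contained sketch with made-up column data (in a real JClient form you would map the selected row index back through the table's model/binding to the view object row):

```java
import javax.swing.*;
import javax.swing.event.ListSelectionEvent;
import javax.swing.table.DefaultTableModel;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

public class TableSelectionDemo {
    public static String selectedName = null;

    public static JTable buildTable() {
        DefaultTableModel model = new DefaultTableModel(
            new Object[][] { {1, "KING"}, {2, "SCOTT"} },
            new Object[] { "EMPNO", "ENAME" });
        JTable table = new JTable(model);
        // Row selection: listen on the selection model, not on mouse events.
        table.getSelectionModel().addListSelectionListener((ListSelectionEvent e) -> {
            if (!e.getValueIsAdjusting() && table.getSelectedRow() >= 0) {
                selectedName = (String) table.getValueAt(table.getSelectedRow(), 1);
            }
        });
        // Double click: there is no separate event; check getClickCount().
        table.addMouseListener(new MouseAdapter() {
            @Override public void mouseClicked(MouseEvent e) {
                if (e.getClickCount() == 2) {
                    int row = table.rowAtPoint(e.getPoint());
                    System.out.println("double-clicked row " + row);
                }
            }
        });
        return table;
    }

    public static void main(String[] args) {
        JTable t = buildTable();
        t.setRowSelectionInterval(1, 1); // programmatic selection fires the listener
        System.out.println(selectedName); // prints SCOTT
    }
}
```

So the mouseClicked event in the property inspector is the right place for the double click; the selection listener has to be added in code.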

  • Document about BO Performance Management Architecture and its tiers

    Where could I see a schema or chart of the PM XI architecture, with its tiers, integrated into the BOXIr2 structure? Can you indicate or send me a document where I can see the PM services (AA...) and the BOXIr2 tiers they belong to? Note: I looked in the BOXIr2 Administrator's Guide and saw that only BO processes are present in the architecture schema. In the PM Administrator's Guide I did not find any architecture schema.

    Hi,
    I didn't fully understand your environment. You have 2 virtual nodes, right? Let's say node1 and node2. Now, where is your NFS storage and what do you plan it will contain?
    Regarding Oracle documentation, there are 2 NFS "vendors", one is standard NFS (exported by any linux/unix machine), the other one is vendor supported NFS (such as Netapp, EMC, etc.).
    Oracle RAC files (OCR, voting and database) are supported on the second one only. It is not supported to use standard NFS for database files. However, if you wish, you can put 1 voting on a standard NFS (this is for cases where you have 2 locations for the RAC and you want to have another voting in a different site).
    In any case, if I understand correctly you have node1 exporting NFS to node2 for the RAC. If this is true it is not supported, and generally a bad idea because you lose your entire high availability (if node1 fails, you got nothing).
    HTH
    Liron

  • Jclient application and Webstart

    Can I use Web Start to run a JClient application?
    Will I meet any performance problems?
    Thanks.

    Yes you can - and in 9.0.3 we have made this easier.
    As to whether there will be a performance issue - well, it depends what you mean. Web Start is only a way of deploying and accessing your application. Once it is running, it should not behave any differently whether it was launched using Web Start or deployed manually.
    There is some information in the online help - search for "JClient and Web Start".
    Regards
    Grant Ronald
    Product Management
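For reference, a Web Start launch descriptor is a small .jnlp file served alongside the application JARs. The sketch below uses placeholder names and paths; the exact JAR list depends on how you package the JClient runtime libraries, and JClient applications generally need signed JARs to request full permissions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://yourserver/jclient" href="myapp.jnlp">
  <information>
    <title>My JClient Application</title>
    <vendor>MyCompany</vendor>
  </information>
  <security>
    <all-permissions/> <!-- requires all JARs to be signed -->
  </security>
  <resources>
    <j2se version="1.4+"/>
    <jar href="myapp.jar"/>
    <jar href="bc4jlibs.jar"/>
  </resources>
  <application-desc main-class="mypackage.MyForm"/>
</jnlp>
```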

  • Upgrading a 3-node Hyper-V clusters storage for £10k and getting the most bang for our money.

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x Clustered Hyper-V Servers running Server 2008 R2 (72GB RAM, dual CPU etc...)
    1x Dell MD3220i iSCSI with dual 1GB connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 Expansion Array with 12x 2TB 7.2K drives in RAID 10 - Tier 2 storage, large vm's, files etc...
    ~25 VM's running all manner of workloads, SQL, Exchange, WSUS, Linux web servers etc....
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125MB/s over the dual 1GB iSCSI connections to each physical server (tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6tb of 12tb Raid 10 space)
    Migrating to 10GB server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual port 10GB NIC team in each server and virtualize cluster, live migration, vm and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities.  Use our existing 2TB drives for capacity and purchase sufficient SSD's to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use storage spaces directly connected to a Hyper-V cluster?  I have seen that for our setup we are on the verge of requiring a separate SOFS for storage but the extra costs and complexity are out of our reach. (RDMA, extra 10GB NIC's
    etc...)
    When using a storage space in a cluster, I have seen various articles suggesting that each csv will be active/passive within the cluster.  Causing redirected IO for all cluster nodes not currently active?
    If CSV's are active/passive its suggested that you should have a csv for each node in your cluster?  How in production do you balance vm's accross 3 CSV's without manually moving them to keep 1/3 of load on each csv?  Ideally I would like just
    a single CSV active/active for all vm's to sit on.  (ease of management etc...)
    If the CSV is active/active am I correct in assuming that DPM will backup vm's without causing any re-directed IO?
    Will DPM backups of VM's be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance for anyone who can be bothered to read through all that and help me out!  I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can use a direct SAS connection with a 3-node cluster, of course (4-node, 5-node, etc.). It would also be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes are local, travelling down the SAS fabric; with an SoFS layer added, you have the same amount of I/O targeting SAS, plus Ethernet (with its huge latency compared to SAS) sitting between the requestor and your data on the SAS spindles, with I/O wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS layer. The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favour of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you'd have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure).
    2) With 2012 R2, all I/O from the multiple hypervisor nodes goes through the storage fabric (in your case, SAS); only metadata updates go through the coordinator node, over Ethernet. Redirected I/O is used in two cases only: a) no SAS connectivity from the hypervisor node (but Ethernet is still present), and b) broken-by-design backup software keeping access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for reference:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases where you'd still do this. For example, if you had both all-flash and combined spindle/flash LUNs and you knew for sure you wanted some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". Another case is a many-node cluster: there, multiple nodes fight for a single LUN and a lot of time is wasted on SCSI reservation conflict resolution (ODX has no reservation offload like VAAI has, so even where ODX is present it's not going to help). Again, this is where SoFS "helps": having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts for the two SoFS nodes only instead of every node in the hypervisor cluster. One more good example is when you mix local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create a concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (feeding what is basically DAS to Hyper-V and SoFS, to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with VSAN on vSphere. But again, that's NOT your case, so it does NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster into redirected mode only for a very short period of time, if at all; Microsoft says never. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so data transfer would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • Why is it so hard to change my plan and upgrade my phone?

    My Droid 2 has not been working properly for the last month (the send button for texts broke), so I thought I would upgrade my phone to the Droid 4 and go with a new plan (my 2 years are way up)... Wow, I'm getting treated like crap. I have been with Verizon over 10 years and this has been the worst experience ever. They used to be a really customer-service-friendly company that made me feel like I was with the best carrier, but that is not the experience they leave you with now. I don't use that much data, so I don't really care about the unlimited data, but it's been a big issue with Verizon and they seem hell-bent on getting me to pay for data on two phones, even though the flip top I never used did not have data...
    I have a flip phone that I added almost two years ago and pay about $15 a month for, even though I have never used it. If I upgrade my plan, which is over, they want me to pay an additional $30 for data access on a line I don't use? The first rep I talked to was pretty cool and gave me some options to think about, but he was also trying to get me to upgrade my second line, which I really just want canceled. I have talked to three reps now and have been told three different things. Car salesmen like to confuse customers too; it's an old but effective tool, but I really am losing patience with it. The last rep I talked to was not interested in helping me understand anything but was willing to help cancel my account. A store rep told me to stomp on my phone so it doesn't work at all and claim insurance so I won't have to pay the upgrade fee. How is that ethical? I really just wanted to upgrade to the Droid 4 and buy a car charger. I know I can't be the only one going through this. Any ideas?

    Why did you get the "flip top" if you never used it? Seems like kind of a waste of money to pay $15/month for a phone you do not use.
    Did you happen to get this line so you could get an upgrade for your other line when it wasn't yet eligible for a discounted upgrade? Sounds as if this may be the case. If so, when is the contract over with this "flip top" line? If you just want to cancel the line but you have not yet fulfilled the contract, you would have an ETF which I assume would be for a smartphone.
    My idea would be to cancel the line you do not use if you want. If it is not under contract and you don't use the line, why would you keep it?  If it is under contract, you would have to weigh the cost of the ETF vs the cost of phone service over the remainder of the contract period.
    If you want to upgrade the line you DO use to a Droid 4, you have a couple of choices to make. If you currently have unlimited data, you could keep your current calling plan and switch to tiered data since you do not need that much data. The lowest tier of data available is 2 GB at a cost of $30/month, approximately what you currently pay for unlimited.
    Another option would be to switch to the Share Everything plan. The minimum cost for this with 1 smartphone would be $90/month($50/month for 1 GB of data and a $40/month line access fee). You haven't mentioned what you currently pay for service or what your current plan is, so you would have to compare this new charge with what you currently pay.

  • Install Windows Server 2012 R2 VM on Storage Spaces with Storage Tiers

    Hey guys
    In my small/medium-sized company we will soon update to Windows Server 2012 R2. I would like to implement virtual servers using Hyper-V. I didn't find a lot of information about Hyper-V in combination with storage spaces and automated storage tiers,
    and this is very confusing to me, as it seems that this would be the best practice: it is the most cost-efficient and most elegant solution.
    My ideal scenario:
    With Hyper-V I virtualize two Windows Server 2012 R2 instances. So two separate virtual machines.
    I use the following disk setup:
    1x cheap 40GB HDD for Hyper-V Server 2012 R2 Core
    2x SSD 200GB (enterprise-grade)
    2x HDD 4TB (7.2k, enterprise-grade)
    Step 1:
    I will install Hyper-V Server 2012 R2 Core on the 40GB HDD. Via the command line, I will create a storage pool with automated storage tiering, using the SSDs and the HDDs in mirrored mode, as follows:
    With tiered storage, I create a storage pool containing the SSDs and the HDDs. Then I create storage space A (1TB) and storage space B (3.2TB), with the SSDs in a mirrored setup and the HDDs in a mirrored setup - the SSDs for the "hot" files and the HDDs for the "cold" files.
    Step 2:
    On top of storage space A I want to install the first Windows Server 2012 R2 instance, with Active Directory. On storage space B I want to install the second Windows Server 2012 R2 instance, for a business application to run on.
    Conclusion:
    The SSDs are mirrored and therefore one SSD can fail.
    The 4TB HDDs are mirrored and therefore one HDD can fail.
    I have a fast and easy scalable environment.
    But on the Internet I found a lot of information saying that it's not possible to install an operating system onto a storage tier.
    Question 1:
    Is this setup possible?
    Question 2:
    If this setup is possible, why is not everyone doing it?
    Question 3:
    Is it possible to do Step 1 over a GUI from a remote machine?
    Question 4:
    If the creation of storage tiers in Hyper-V Server 2012 R2 is not possible, would it work to use Windows Server 2012 R2 as a parent system on the 40GB HDD to do Step 1?
    I would gladly get some feedback of people knowing Storage Tiers well.
    Thanks a lot!
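For Step 1, the tiered pool can be sketched with the Windows Server 2012 R2 storage cmdlets. This is only an outline: the pool, tier and space names and the per-tier sizes are illustrative, and it assumes the command-line-only environment of Hyper-V Server Core.

```powershell
# Pool all eligible disks (the SSDs and the 4TB HDDs; the boot disk is excluded)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Define the SSD ("hot") and HDD ("cold") tiers within the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Storage space A: mirrored, tiered (per-tier sizes are a guess at the layout above)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "SpaceA" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB `
    -ResiliencySettingName Mirror
```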

    I would absolutely prefer a GUI. But a Windows Server 2012 R2 Standard licence allows you to run two virtual machines.
    It also grants you a physical installation ("POSE" in the licensing documents). You can buy one copy of WS2012R2 Standard, install it on the hardware, enable Hyper-V, and then operate two virtual machines with WS2012R2 Standard ("VOSE"
    in the licensing documents). The only restriction is that the management operating system (POSE) can only run services and applications meant to manage the virtual machines and/or the management operating system. The Hyper-V Server license is the same way
    so it's not really any different.
    In short, given the benefits of the GUI at your stage of learning, you have no solid reason not to install the full system and take advantage of it. You can disable the GUI later once you get your footing. Or not. Whatever suits you. However, in response
    to your Question 3, you can do this all remotely. Once you get WS2012R2 installed in a guest, you can use it to manage the management operating system if you want. There are many options.
    But then I would also need to have redundancy on the 40GB HDD, as if this HDD breaks, all the others break as well?
    Yes, you're going to want some redundancy for the management operating system. But, you've listed 5 drives in your original layout. You don't really have a 5-bay system, do you? Is there an empty sixth bay? Could you not get two 40 GB drives instead of one
    and use hardware RAID-1?
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • An epic proposal for Adobe: We need an AdobeOS and AdobeTower

    Open letter to Adobe.
    Dear Adobe,
    Since May of the year 2000 I have been using your programs, first with a much appreciated student discount, later as a working professional upgrading to a full license. In the last 13 years I personally have spent at least $20,000 on your software, from student licenses all the way to Master Collection and now "Creative Cloud". Since that time I must also admit that 75% or more of my income has been a direct result of using your software, for which I am grateful to your software developers. I fondly remember customizing the neat bouncing circle icons on my Mac, I remember when you bought Flash from Macromedia, I remember the joy of receiving my first boxed set of programs, and I remember the first time a color picker popped up in DWR on my PC when hand coding CSS.
    I want you to know that today I lost 10+ hours of my life, never to return, dealing with the CC and CC2014 problem created by your failure to live up to the promise, hype and allure of "Creative Cloud". As a direct result and for the first time in my life as a multimedia professional I truly considered switching back to CS6, never updating ever again and perhaps finding alternative programs to use.
    After my momentary lapse of reason reality set in.
    Rather than troll your forum in utter negativity:
    I would like to propose the following (which are based on observations I suspect your developers and many Adobe customers will agree with):
    Develop your own hardware and OS, supplied complete with every piece of software that creative professionals need to produce digital work.
    Yes, that is what I am proposing: start making computers that are specifically built to run your software with the right RAM, Video cards, Monitors, and drivers. Of course, aptly named something like 'Adobe-IvoryTower' and 'AdobeWindow', on your own proprietary OS, most likely called AdobeOS [AOS]... and ship them to customers without them being loaded with garbage like 'Mcafee virus scan' and absurd things like the 'Vaio media browser' or 'iMovie'...
    When you do this put together a panel made up of actual customers who have used your products for at least 5 years.
    Listen to that panel and allow the democratic process to shape your product to be what it should be, what we all need it to be: proper, efficient, streamlined, and effective.
    Expand into the realm of smartphones, tablets, and mobile devices using a "paired device" structure whereby the software and hardware are always compatible and able to sync. Yes, I would buy an AdobePhone that can record conversations with clients and developers, link them to a project folder and update my task list log in AdobeTimeTracker, all with one tap.
    Compete with SkyDrive and Google Apps by releasing similar products. Yes, we want AdobeWrite, where pure UTF-8 text pastes over properly between file types when writers send over copy, because cleaning incompatible formatting should be extinct.
    Cheer loudly as you watch your net worth quadruple when other software manufacturers begin to produce AdobeOS versions of programs like Rhino, SolidWorks, and Pro Engineer.
    Accept praise from your customers and finally forever close down the support forums because yearly updates simply auto-download onto systems which you know 100% are running properly.
    Most would agree that there are 3 primary types of devices, these are:
    Entertainment toys (computers and devices used in average homes simply to entertain; they come bundled with a bunch of software that every professional spends hours uninstalling). I only know about this because every time I buy a computer I spend too much time turning it into a workstation, valuable time I will want back on my deathbed.
    Business workstations (anyone who has Master Collection installed and uses at least 4 pieces of Adobe software on a daily basis, 7 during a month, and nearly all during a year). I know a lot about this; this is what we need you to build.
    Government systems (I suspect the CIA has enjoyed Photoshop, but I see no place for a separate system here). I know nothing about these besides the fact that they exist.
    As such the need for a complete system should be evident to you, clearly it is evident to at least one customer.
    For anyone reading this letter imagine the joy of the following:
    Complete Hardware and software compatibility, finally.
    A robust system engineered by and for creative professionals.
    A unified file standard which is truly useful to everyone.
    Affordable upgrades and systems, appropriately tiered to usage needs and production standards.
    Not having to read through obscure install error codes or three-month-old unresolved forum topics.
    The convenience of having a master system that really does include everything CC was supposed to.
    An OS with truly useful software and productivity tools, configured, pre-installed, and always working.
    Updates which we can actually click "yes" on and then go to sleep, trusting that nothing will go wrong.
    Even if Adobe did not break into a completely proprietary OS and hardware setup, it would still make great sense to have an "official subsidiary/branch" that custom builds and delivers preconfigured, pre-bundled Mac and Windows systems to customers' desktops and laps...
    Thank you for your consideration in this matter, sincerely,
    Christian Žagarskas
    - A grateful Adobe customer

    Adobe doesn't have bottomless pockets and infinite resources.

  • Design and professional opinion

    I am looking for some professional opinions on how to go about setting up secure connectivity to multiple remote sites. I figured the professional forum is probably the best place to go to see what everyone thinks.
    Here are the requirements.   
           1.  400 to 500 remote sites (some larger and some quite small)
           2.  Must be secure: AES-256 or above, FIPS compliant
           3.  Needs to be Hub-Spoke type connection.   All spokes need to come back to Headquarters for information. 
           4.  Need to be able to manage the connection by way of some sort of NPM
           5.  The call center will need restricted abilities in order to troubleshoot in off-hours (for instance, with LAN-to-LAN, the call center can't have full ASA admin access)
           6.  Traffic would need to be initiated from both remote and HQ side, bidirectional.
           7.  Remote site networks are managed by their separate agencies. We can place equipment there for them to route to.
    These are the most important requirements I can think of at this moment. Most likely, there will be some sort of broadband connection to each remote site, as we are trying to move away from costly, slower dedicated circuits from the telco.
    We have toyed with putting an ASA 5505 at each remote site and creating LAN-to-LAN tunnels back to HQ. The problem with this is that our call center would require full access to the ASA to reset tunnels. In addition, monitoring LAN-to-LAN tunnels with an NPM has proven to be quite a chore, and to the best of my knowledge there's no great way to do this without finding the IP of something else to ping.
    I am looking forward to hearing your personal opinions as to what the best option would be regarding something of this nature. Again, this is an enterprise type setup and will need something that works and works well.    You guys are super smart and I know you will steer me in the right direction.    Thank you for your help and time in offering your solutions.

    You could use DMVPN, or probably a better solution would be to use MPLS plus DMVPN and create a tiered design (e.g. sites -> regions -> corporate). There are a million factors in a design like this and in your applications, so we can only give you ideas. You can work with your local PSE to come up with a good design.
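    For a rough idea of what the DMVPN suggestion looks like in practice, here is a minimal hub-side sketch. Everything in it is a placeholder, not taken from this thread: the pre-shared key, interface names, and tunnel addressing are hypothetical, and AES-256 is chosen to line up with requirement 2. Spokes would run a matching tunnel with `ip nhrp nhs` pointing at the hub.

    ```
    ! Hypothetical DMVPN hub sketch -- keys, names, and addresses are examples only
    crypto isakmp policy 10
     encryption aes 256
     authentication pre-share
    crypto isakmp key EXAMPLE-KEY address 0.0.0.0
    !
    crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
     mode transport
    crypto ipsec profile DMVPN-PROF
     set transform-set DMVPN-TS
    !
    interface Tunnel0
     ip address 10.0.0.1 255.255.255.0
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint
     tunnel protection ipsec profile DMVPN-PROF
    ```

    One design upside versus per-spoke LAN-to-LAN tunnels: spokes register with the hub dynamically over NHRP, so the call center could monitor and bounce a single tunnel interface per site rather than holding full ASA admin access.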

  • Cannot select parity in new disk when choosing tiered storage

    Hello
    I am running a small lab experiment. I want to add 3 HDDs and 1 SSD to a pool and create a tiered parity vDisk. But I cannot do this; the option is just not there. I figured it was because I only had 1 SSD, so I added another; the option was still not there. So I added a third. The parity option is still not there. I'm running this experiment in a virtual environment in Hyper-V, and the disks are added through Hyper-V. But I have set the disks up correctly, like this:
    Get-StoragePool SATA_Pool | Get-PhysicalDisk | ? FriendlyName -eq "PhysicalDisk4" | Set-PhysicalDisk -MediaType SSD
    Get-StoragePool SATA_Pool | Get-PhysicalDisk | ? FriendlyName -eq "PhysicalDisk1" | Set-PhysicalDisk -MediaType HDD
    Get-StoragePool SATA_Pool | Get-PhysicalDisk | ? FriendlyName -eq "PhysicalDisk2" | Set-PhysicalDisk -MediaType HDD
    Get-StoragePool SATA_Pool | Get-PhysicalDisk | ? FriendlyName -eq "PhysicalDisk3" | Set-PhysicalDisk -MediaType HDD
    Is this a bug or what is the requirement to get this working?
    Thanks

    You cannot use parity or dual parity when using tiered storage. See:
    Automated Tiered Storage with Windows Server 2012 R2
    http://blogs.technet.com/b/keithmayer/archive/2013/09/13/step-by-step-build-an-automated-storage-tiers-lab-with-windows-server-2012-r2-and-powershell.aspx
    Note: When using Storage Tiers, Parity layouts are not supported.
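    Since tiered spaces only support the Simple and Mirror layouts, the lab could be reworked along these lines instead. This is a hypothetical sketch, not from the thread: the pool name SATA_Pool comes from the question, but the tier names, sizes, and vDisk name are made up, and a two-way Mirror tier needs at least two disks of each media type (so two SSDs and two HDDs in the pool).

    ```powershell
    # Hypothetical tiered Mirror space on Windows Server 2012 R2.
    # SATA_Pool is the pool from the thread; tier/vDisk names and sizes are examples.

    # Define one tier per media type in the pool
    New-StorageTier -StoragePoolFriendlyName SATA_Pool -FriendlyName SSD_Tier -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName SATA_Pool -FriendlyName HDD_Tier -MediaType HDD

    $ssd = Get-StorageTier -FriendlyName SSD_Tier
    $hdd = Get-StorageTier -FriendlyName HDD_Tier

    # Create a tiered vDisk -- Mirror works with tiers, Parity does not
    New-VirtualDisk -StoragePoolFriendlyName SATA_Pool -FriendlyName TieredSpace `
        -StorageTiers $ssd, $hdd -StorageTierSizes 8GB, 32GB `
        -ResiliencySettingName Mirror -WriteCacheSize 1GB
    ```

    If parity resiliency is a hard requirement, the vDisk has to be created without `-StorageTiers`, giving up tiering for that space.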
