ELOM Remote Console Scriptability and Remote Storage Questions

Hello,
I have a few questions regarding the scriptability of the Java Web Start eLOM Remote Console and the use of this application for redirecting remote storage.
Setup:
- x6250s in a 6048 chassis
- x6250 Bios version 1ADPI040 & SP version 4.0.52
- Sun eLOM Remote Console version 2.53.05
- CentOS 5.2
Questions:
- I'd like to write a wrapper script that would allow me to start a remote console on the Linux command line. Then, a command line like "myjavaconsole bladexyz" would give me a java remote KVM without me having to click through the web interface. Is something like this possible? Hints?
- The [Sun Blade X6250 Server Module Embedded Lights Out Manager Administration Guide|http://docs.sun.com/source/820-1253-14/remote_con.html#0_66586] says that you can use the eLOM Remote Console GUI to redirect storage devices including CD/DVD drives, Flash, DVD-ROM or diskette drives, hard drives, or NFS. These seem like very interesting options, but I've only been able to successfully redirect an ISO image. Are these other options really possible?
- Is it possible to script the mounting/unmounting of remote ISO images or other storage? I would love to be able to control blade boot processes by having this functionality.
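For the first question, my rough idea so far is a wrapper that fetches the Java Web Start descriptor and hands it to javaws. This is an untested sketch: the JNLP URL path below is a guess that would need to be confirmed by watching what the SP's web interface actually serves (e.g. with browser network tools), and the auth scheme is assumed.

```shell
#!/bin/sh
# myjavaconsole BLADE -- rough, untested idea for launching the eLOM
# remote KVM from the CLI. The JNLP path below is a GUESS; capture the
# real URL from the SP web interface before relying on this.

console_url() {
    # Build the assumed Java Web Start descriptor URL for a blade SP.
    printf 'https://%s/Java/jnlp/console.jnlp' "$1"
}

launch_console() {
    blade="$1"
    jnlp="$(mktemp)" || return 1
    # -k: SPs typically serve self-signed certs; -u: prompt for eLOM login.
    curl -ks -u "${ELOM_USER:-root}" -o "$jnlp" "$(console_url "$blade")" \
        || return 1
    javaws "$jnlp"      # hand the descriptor to Java Web Start
}

[ $# -ge 1 ] && launch_console "$1"
```

With something like this, `myjavaconsole bladexyz` would skip the click-through, assuming the SP exposes a stable JNLP URL at all.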
Thank you,
-Matthew

It seems the problem is somehow related to the setup of my Windows box. I tried a couple of other Windows boxes with the same result, but everything worked perfectly when using a Linux/Ubuntu system to run the remote console. The blade saw the CD, booted from it, and is now happily running ESX Server.
The weird thing is, the Ubuntu system was running inside VMware Workstation on the same Windows PC that has the problems, and was accessing the same physical CD drive. Sometimes you have to think outside the box, or in this case into a box inside the box :-)
I guess if these things were all straightforward I'd be out of a job, so I shouldn't complain!
Steve.

Similar Messages

  • Seed_pool and general storage question

    I am trying to deploy EBS 12.1.3 Prod and Apps VM templates on a VM Server using just internal disk and I am running out of space.
    So do I need the seed_pool files after I have successfully imported the templates as Virtual Machines?
    I was thinking I could also maybe create an NFS share from another server and mount the /OVS/seed_pool directory that way if needed.
    Yes/No?
    Edited by: user6445925 on May 13, 2011 9:33 PM

    I don't know if there is a better way to manage this, but what we are doing is NFS-mounting multiple file systems from a NetApp, moving the files we want onto the file systems we want, and connecting it all back to the file system that /OVS points to with symbolic links.
    e.g.
    10.53.252.2:/vol/cos1_tier03_rms_testdev_os_ovm_nfs
    158G 87G 72G 55% /var/ovs/mount/1A57047A2ABE4210B37DEEF62C33CF1F
    10.53.252.2:/vol/cos1_tier03_rms_testdev_asm_ovm_nfs
    2.0T 356G 1.7T 18% /var/ovs/mount/0465D5417DAB4F9F9D4EFDB141940AE6
    10.53.252.2:/vol/cos1_tier03_rms_testdev_app_ovm_nfs
    450G 417G 34G 93% /var/ovs/mount/7B543EC95C0B40648377E762F2F04713
    [root@ovs-tst-01 /]# ls -l /OVS
    lrwxrwxrwx 1 root root 47 Feb 9 16:00 /OVS -> /var/ovs/mount/1A57047A2ABE4210B37DEEF62C33CF1F
    So: copy the files you want into the 2 TB file system, then link them back to the same location in the 158 GB file system where the /OVS link points. This is the way we have figured out to put different storage needs onto different volumes on the NetApp - specifically, moving +ASM to a different storage pool on the NetApp.
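    The move-and-link step can be sketched as a small helper. The function name and example file are hypothetical; in real use the two directories would be the `/var/ovs/mount/<UUID>` file systems shown above.

```shell
# move_and_link FILE DESTDIR -- move FILE onto the big volume and leave
# a symlink behind at the original path, so the /OVS tree still sees it.
move_and_link() {
    src="$1"; destdir="$2"
    mkdir -p "$destdir" || return 1
    mv "$src" "$destdir/" && ln -s "$destdir/$(basename "$src")" "$src"
}
```

    Anything reading through the symlink (the VM config, for instance) keeps working, while the bytes live on the larger volume.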
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     

  • Macbook Pro and iCloud storage questions/help

    Hi all, I'm needing help here as my MacBook Pro has become extremely slow and sluggish. Is this because there is too much stored on it? Shouldn't having iCloud help free up extra room on the MacBook?
    Just general help and advice regarding these matters would be appreciated.

    iCloud is a syncing facility, not a separate storage one, and any data on iCloud is duplicated on your Mac. You would do better to buy an external hard disk and use it to store bulky data such as media files.

  • IPhoto and Images Storage Question

    I've started the task of scanning in a bunch of old negative strips. The strips are not organized, so I never know what's really on them until I've scanned them. What I have been doing is scanning all the negatives into a scan folder on the desktop. When I've scanned them all in, I create other folders on the desktop that are named by event. After I put all the pics into the appropriate folders, I drag each folder from my desktop into iPhoto. Normally this works fine, but when I try to drag a single pic or a set of pics into an existing event in iPhoto, it creates a new "unnamed event". How can I move these pics over to an existing event? Also, can I put the folders that I created on the desktop into the trash once the pics have been imported into iPhoto, so I don't have two copies lying around?
    Thanks,
    Jim

    Jim
    You cannot import to a specific event. You can however move pics between events in the iPhoto Window to consolidate them - drag and drop will do it.
    Assuming that you’re using iPhoto in the default mode, then the pics are copied into the iPhoto Library, therefore yes, you can delete the versions on the desktop.
    Regards
    TD

  • Firefox freezes if I try to initiate a remote console session with the Netware 6.5 server I am connected to. I can force quit, re-initiate Firefox, re-authenticate and I'm able to successfully start a remote console session and acquire the server console.


    Hi Mac Attack,
    My computer will not disconnect from the internet. It seems to find a clone router and continues even when I shut down and unplug my own home iy
    Your main question was 'chopped' in the title. Please reply in the body of a reply box with the full question and anything you have tried. And no, the long report was not helpful.
    If the same website is opening each time you launch a browser (Safari?) hold down the shift key as you launch to prevent previous pages from opening.
    Have a look at your settings in Safari > Preferences. Especially General and Privacy.
    Reset Safari to remove cookies and other stored data.
    System Preferences > General
    Have a look at your settings in System Preferences >  Security & Privacy.
    Call back with more questions.
    Regards,
    Ian

  • Can not deploy  a remote storage node

    Hello,
    I have successfully started two SNAs on two nodes, and they can ping each other. But when I use the admin console to deploy the first remote storage node, it always shows the error "Connection refused to host: localhost". When I use the command line to deploy a remote storage node, there is the same problem.
    Does someone know how to solve it? Thanks in advance.
    Here is the error log:
    in createPlan: Oracle NoSQL DB 11gR2.1.2.123 oracle.kv.impl.fault.OperationFaultException: Plan 16[Plan-16] finished in state ERROR. Problem during plan execution: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused
    caused by:
    oracle.kv.impl.admin.AdminFaultException
    Oracle NoSQL DB 11gR2.1.2.123 oracle.kv.impl.fault.OperationFaultException: Plan 16[Plan-16] finished in state ERROR. Problem during plan execution: Connection refused to host: localhost; nested exception is:
         java.net.ConnectException: Connection refused
    Oracle NoSQL DB 11gR2.1.2.123 oracle.kv.impl.fault.OperationFaultException: Plan 16[Plan-16] finished in state ERROR. Problem during plan execution: Connection refused to host: localhost; nested exception is:
         java.net.ConnectException: Connection refused oracle.kv.impl.fault.OperationFaultException: Plan 16[Plan-16] finished in state ERROR. Problem during plan execution: Connection refused to host: localhost; nested exception is:
         java.net.ConnectException: Connection refused
         at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
         at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
         at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
         at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
         at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178)
         at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132)
         at $Proxy8.getSerialVersion(Unknown Source)
         at oracle.kv.impl.util.registry.RemoteAPI.<init>(RemoteAPI.java:32)
         at oracle.kv.impl.sna.StorageNodeAgentAPI.<init>(StorageNodeAgentAPI.java:51)
         at oracle.kv.impl.sna.StorageNodeAgentAPI.wrap(StorageNodeAgentAPI.java:58)
         at oracle.kv.impl.util.registry.RegistryUtils.getStorageNodeAgent(RegistryUtils.java:243)
         at oracle.kv.impl.admin.plan.task.DeploySN.call(DeploySN.java:94)
         at oracle.kv.impl.admin.plan.task.DeploySN.call(DeploySN.java:35)
         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
         java.net.ConnectException: Connection refused
         ... 18 more
    Caused by: java.net.ConnectException: Connection refused
         at java.net.PlainSocketImpl.socketConnect(Native Method)
         at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
         at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
         at java.net.Socket.connect(Socket.java:529)
         at java.net.Socket.connect(Socket.java:478)
         at java.net.Socket.<init>(Socket.java:375)
         at java.net.Socket.<init>(Socket.java:189)
         at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
         at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
         at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
         ... 17 more
         at oracle.kv.impl.admin.AdminServiceFaultHandler.getThrowException(AdminServiceFaultHandler.java:84)
         at oracle.kv.impl.fault.ProcessFaultHandler.rethrow(ProcessFaultHandler.java:205)
         at oracle.kv.impl.fault.ProcessFaultHandler.execute(ProcessFaultHandler.java:144)
         at oracle.kv.impl.admin.CommandServiceImpl.executePlan(CommandServiceImpl.java:234)
         at oracle.kv.impl.admin.CommandServiceAPI.executePlan(CommandServiceAPI.java:162)
         at oracle.kv.impl.admin.webapp.server.AdminUIServiceImpl.createPlan(AdminUIServiceImpl.java:554)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:569)
         at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:208)
         at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:248)
         at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:538)
         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:476)
         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:517)
         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:934)
         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:404)
         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:869)
         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:114)
         at org.eclipse.jetty.server.Server.handle(Server.java:341)
         at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:589)
         at org.eclipse.jetty.server.HttpConnection$RequestHandler.content(HttpConnection.java:1065)
         at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:823)
         at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:220)
         at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:411)
         at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:515)
         at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:40)
         at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:529)
         at java.lang.Thread.run(Thread.java:662)
    (from Plans.PlanCallback)

    920601 wrote:
    Thanks, Charles. I used the network host name (or network host IP address) instead of localhost, and that resolved my former problem: I can deploy the remote SN. But there is a new problem when I deploy the store. It looks like the Replication Nodes at different SNs cannot communicate with each other, because of a mismatched IP address format.
    Note that the node "community01/10.193.128.223" is the first deployed node, which holds the administration process, and "mi-desktop/127.0.1.1:5011" is another node. I don't understand why the second node uses its loopback address. Can you help me find a solution?
    Here is the original error log.
    "mi-desktop/127.0.1.1:5011 the address associated with this node, is a loopback address. It conflicts with an existing use, by a different node of the address:community01/10.193.128.223:5011 which is not a loopback address. Such mixing of addresses within a group is not allowed, since the nodes will not be able to communicate with each other."I'm not sure I have any good advice, but I would check your /etc/hosts to see if mi-desktop maps to the loopback address (127.0.0.1). If it does, then you need to use a name that does not map to the loopback.
    Charles Lamb
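    The /etc/hosts check Charles suggests can be sketched from a shell. Debian/Ubuntu's default `127.0.1.1 myhost` entry is a common cause: it makes the host name resolve to a loopback address, which is exactly what the kvstore error complains about. `getent` assumes a glibc/Linux box; the helper names are mine.

```shell
# classify_addr ADDR: "loopback" for any 127.x.x.x address, "ok" otherwise.
classify_addr() {
    case "$1" in
        127.*) echo loopback ;;
        *)     echo ok ;;
    esac
}

# Report what the local host name resolves to, and whether it is usable
# for a node that must talk to non-loopback peers.
check_hostname() {
    addr="$(getent hosts "$(hostname)" | awk '{print $1; exit}')"
    echo "$(hostname) resolves to $addr ($(classify_addr "$addr"))"
}
```

    If `check_hostname` reports `loopback`, point the host name at the real interface address in /etc/hosts (or deploy with a name that already resolves there).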

  • Remote Storage Solution - PC as network drive?

    Has anyone out there tried a remote storage solution for consolidating very large libraries?
    I am currently having difficulties as I have over 2TB of iTunes content now, stored in multiple libraries (using iTunes Library Manager) across several remote hard drives (USB). My problem is that if I don't access one of the libraries for a while, often the software updates break links to the files, and I have various other problems organizing the disparate libraries (especially when I outgrow a USB hard drive).
    I'm still expanding my storage needs rapidly as I encode more home video and import old records. I'm not entirely sold on the idea of an ever-increasing number of remote libraries and dozens of USB drives humming away. Any suggestions on a good high-capacity (and hopefully expandable) solution to consolidate my whole iTunes library in one place?
    I have contemplated building a network PC and putting a bunch of 500GB/1TB drives in it, but would my iMac/iTunes still see those each as separate drives? Is there any solution out there to give me >3TB of space (affordably - eg, without having to buy a high-end multi-TB HDD) whilst still allowing me to "keep iTunes library organized"?

    Never answered

  • MDM Console - security and views

    Hi,
    I access MDM console remotely on the Windows 2003 server that hosts the MDM system and locally on my own laptop. Both machines run the same version of MDM Console 5.5 SP6 (5.5.63.87) and I have Admin access.
    When I view the repository on the Windows 2003 server, I can see detailed information (individual fields and detail); when I view the repository from my laptop, I can only see the detail.
    If I use PORTS as an example: on the Windows 2003 server I can see a list of the ports, and when I click on them I can see the port definition. On my local machine, all I can see is the port detail (e.g., the table definition).
    As I have the same version of MDM Console installed on both machines (I have even checked the executable size and date), I'm at a loss to understand why the repository looks different when accessed from the server and from my machine.
    I have checked the MDM Console Guide and that doesn't offer an insight. I am still searching SDN without any luck.
    Your advice is appreciated.

    Rob
    This might sound silly but are you sure the detail panel in MDM Console has not been pulled up to the top?
    Can you attach a screenshot with the Port example?
    -Varun

  • Workflow on-board and external storage

    Workflow on-board and external storage
    My photography is expanding rapidly, such that I have quickly run out of space on my hard drive, although my MacBook Pro is under a year old and was over-dimensioned when I bought it.
    I'm looking for an updated storage / availability / backup solution.
    I have recently purchased Aperture but I still use iPhoto in parallel. I import all photos into iPhoto and mostly sort them there, because I find it easier, not yet having had the time to discover whether Aperture can do it better. I use Aperture primarily for editing.
    I also use iPhoto Library Manager, to organize a dozen mostly unimportant libraries on an external drive, available when I'm in the office.
    My dilemma is that I am traveling extensively and have even been buying small 1 TB drives on the road to keep up with my need to back up my SDHC cards.
    I like the convenience of having everything "with me", but with the exponential need for more space I might have to change my thinking. Bandwidth constraints on-the-road and also the "waste of time aspect" suggest that small portable drives are possibly the best solution.
    When back in the office, however, I would also like to have "everything with me", daisy-chained or however, but I'm primarily looking for a professional workflow solution which will also accommodate my travel requirements. Exchangeable drives which can be used in an "ice-box" type solution are OK for project-based assignments, which often can be archived "forever". However, the (occasional*) need for access in remote countries creates problems, and I'm not a friend of Cloud solutions. *Often when I'm away I have more peace and quiet for archiving and sorting old photo shoots than when I'm in the office.
    Aperture will possibly become my preferred editor, but I am open.
    Sorry if this is long, but I tried to cover everything which occurred to me to avoid excess "ping-pong."
    Many thanks in advance!

    Another topic has recently been addressing this issue.  See:
    http://discussions.apple.com/message/23287496#23287496
    Ernie

  • Difference between console port and dedicated management port

    Can someone explain what the difference is between the console port and the dedicated management port (Fa0) on a Cisco 2960S switch?
    Thank You.

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    A console port is special for two reasons.  First, it's "known" to the system as its console port which means the system will send status information to it, and often treat it special when accepting input.  Second, the console port is generally wired as a serial port.  (It also normally doesn't have an IP address.)
    The console port was intended for where the system operator controls the system from, usually nearby (physically).  (Console ports are/were used for any computer based system.)
    Management ports are generally for remote management using an Ethernet port.  On older switches/routers, a device might be configured to use an ordinary Ethernet port for this purpose; on newer switches/routers, a dedicated Ethernet port is provided.  For these, the device may actually use different hardware for the port and might treat it differently internally.  For example, often the Ethernet management port is only FE, it may not have ASIC support for high-speed switching, and it might be in its own predefined VRF.  Generally, a management port will have an IP address, but one from a different address space than those used by other hosts.
    Out-of-the-box, a console port will allow you to configure the device, but a management port will often require some additional configuration.
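    One practical way to see the difference is how you reach each port from a Linux admin box: the console port is a serial line, the management port is an IP host. A hypothetical sketch (the device path and address are placeholders, not real gear):

```shell
# connect_cmd PORT-TYPE: print the command you'd use to reach the device.
# /dev/ttyS0, 9600 baud, and 192.0.2.1 are placeholder examples.
connect_cmd() {
    case "$1" in
        console)    echo "screen /dev/ttyS0 9600" ;;   # serial line, no IP
        management) echo "ssh admin@192.0.2.1" ;;      # Ethernet/IP
        *)          return 1 ;;
    esac
}
```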

  • Adding a RAID card to help speed up export (and other drive question) in Premiere Pro CC

    First of all, I have read Tweakers Page exporting section because that is where my primary concern is. First my questions, then background and my current and proposed configurations:
    Question 1: Will adding a hardware RAID controller, such as an LSI MegaRAID remove enough burden from the CPU managing parity on my software RAID 5 that the CPU will jump for joy and export faster?
    Question 2: If true to above, then compare thoughts on adding more smaller SSDs for either a one volume RAID 0 or smaller two volume RAID 0 to complement existing HDD RAID 5. That is, I'm thinking of buying four Samsung 850 Pro 128 GB SSDs to put in a four disk volume to handle everything (media/projects, media cache, previews, exports), or split it up into two volumes of two disks each and split the duties, or keep the four disk volume idea and put the previews & exports on my HDD RAID 5 array.
    The 850s are rated at SEQ read/write 550/470 MB/s, thus I could get around 2000/1500 MB/s read/write in a four-disk RAID 0, or half that if I split into two volumes to minimize volumes reading/writing at the same time - if that really matters with these SSDs?
    The Tweakers Page made a few comments. One is that splitting duties among different disks, rather than one large efficient RAID, may actually slow things down. Since the SSDs are much faster than a single HDD, I'm thinking that is no longer accurate, thus I'm leaning toward the four-disk configuration: OS & programs on the C drive, media & projects on D (HDD RAID 5), pagefile & media cache on one SSD (2-disk RAID 0), and previews & exports on a 2nd SSD RAID 0 (or combine the two RAID 0s and their duties).
    Just trying to get a perspective here, since I haven't purchased anything yet. Any experience/stories, I would appreciate.
    My current drive configuration:
    My D drive is software RAID 5 consisting of four 1 TB Western Digital RE4 (RED) 7200 RPM HDDs with a CrystalDiskMark SEQ Read/Write of 339/252 MB/s.
    The C drive is SSD 500 GB (Samsung 840 (not Pro) and does 531/330 MB/s. My OS, Program Files and Page File are on C, and data/media files/project, etc all are on the RAID drive.
    Problem:
    Current setup allows for smooth editing; only the exporting seems slow, often taking between two and two-and-a-half times the video length to export. Thus a 10-minute video takes 20-30 minutes to export, and a 15-minute video can take 30-40 minutes. The first 10% of the two-pass export takes under a minute (seems fast), but it gets slower, and the final 10-20% can hang for many minutes like my system is running out of steam. So where is the waste?
    I have enabled hardware acceleration (did the GPU hack since my GPU isn't listed) and it may spike at 25% usage a few times and eat up 600 MB of VRAM (I have 2 GB of VRAM), otherwise it is idle the whole export. The CPU may spike at 50% but it doesn't seem overly busy either.
    Our timeline is simple with two video streams and two audio streams (a little music and mostly voice) with simple transitions (jump cuts or cross dissolves). We sometimes fast color correct, so that might use the GPU? Also, since we film in 1080 60P and export 1080 29.97 frames/sec, I think that is scaling and uses the GPU. I know without the GPU, it does take a lot longer. I have ruled out buying a faster GPU since it doesn't appear to be breaking a sweat. I just need to know if my system is bottlenecked at the hard drive level because I'm using software RAID and my disks are slow and will hardware RAID significantly reduce the CPU load so it can export faster.
    Our files are not huge in nature. Most our clips are several MBs each. Total project files are between 5 GBs and 10 GBs for each video with Windows Media File export being 500 MB to 1.2 GB on average. We shoot using Panasonic camcorders so the original files are AVCHD, I believe (.MTS files?).
    Considerations:
    1. I'm thinking of buying (and future proofing) an LSI Logic MegaRAID 9361-8i that is 12Gb/s SAS and SATA (because some current SSDs can exceed the 6Gb/s standard).
    2. I'm not replacing my current RAID 5 HDDs because not in my budget to upgrade to 6 or more large SSDs. These drives are more important to me for temporary storage because I remove the files once backed up. I don't mind a few inexpensive smaller SSDs if they can make a significant difference for editing and exporting.
    I can only guess that my HDD RAID is slow and the CPU is burdened with parity. I would imagine running RAID 10 would not help much.
    My setup:
    my setup:
    CPU - i7-3930K CPU @4.5 GHz
    RAM - G.SKILL Ripjaws Z Series 32GB (4 x 8GB) DDR3 2133 @2000
    Motherboard - ASUS P9X79 WS LGA 2011
    GPU - Gigabyte GeForce GTX 660 OC 2GB (performed the compatibility list hack to enable hardware acceleration).
    C drive - 500 GB Samsung 840 SSD (Windows 7 Pro 64 bit and programs).
    D drive - four 1 TB WD RE4 Enterprise HDDs 7200 RPMs in software RAID 5
    Case - Cooler Master HAF X
    CPU Fan - Cooler Master Hyper 212 EVO with 120 mm fan
    Power Supply - Corsair Pro Series AX 850 Watt 80 Plus Gold
    Optical Drive - Pioneer BDR - 208DBK
    thanks in advance,
    Eric

    ........software RAID 5 off the motherboard ??????......NOT a good idea, from what I have read here on this forum from experts like Harm Millard and others. They have mentioned a LARGE overhead on the CPU doing this....causing sub-par performance. RAID 0 off the motherboard will NOT do this, however.....RAID 0 would provide optimum speed, but, with the risk of total data loss if ANY drive fails. You may wish to reconfigure your RAID to be RAID 0...BUT...you would need to DILIGENTLY back up its entire volume onto perhaps a quality 4TB drive very frequently.
         A lot depends on the nature of your current and FUTURE codecs you plan to edit. You may not want to sink a lot of money into an older setup that may have trouble with more demanding future codecs. For now, in the 1080p realm, your rig should be OK....the read/write performance on your CURRENT RAID 5 setup is not great, and a definite drag on the performance. The rest of your components appear to be fine.....the Samsung SSD, though not ideal, is OK.....it's write speed is WAY lower than the Pro model,but, the drive is used mainly for reading operations. Since you have Windows 7 Pro, and NOT Windows 8.......you CAN put the entire windows page file onto the RAID 0 you might create.....this will take that frequent read/write load OFF the SSD. Read the "tweakers Page" to see how to best TUNE your machine. To use your current setup most efficiently, without investing much money, you would :a. create the RAID 0 off the motherboard, ( putting all media and project files on it )  b. install a quality 7200rpm 4TB HDD to serve as a BACKUP of the RAID array. Then, install a Crucial M550 256GB or larger SSD, ( close in performance to Samsung 850 Pro...much cheaper), to put all previews, cache , and media cache files on....AND to use as " global performance cache" for After Effects...if you use that program. Exporting can be done to ANOTHER Crucial M550 for best speed...or, just to the either the FIRST Crucial or, the 4TB drive. Your current GPU will accelerate exports on any video containing scaling and any GPU accelerated effects. Your CPU is STILL important in SERVING the data to and from the GPU AND for decoding and encoding non-GPU handled video....your high CPU clock speed helps performance there ! You may want to check out possibly overclocking your video card, using MSI Afterburner.or, similar free program. Increasing the "memory clock speed" can RAISE performance and cut export times on GPU effects loaded timelines,or, scaling operations. 
On my laptop, I export 25% faster doing this. With my NEW i7-4700HQ laptop, I export in the range of your CURRENT machine: about 2 to 3 times the length of the original video. PROPERLY SET UP, your desktop machine should BLOW THIS AWAY!
        Visit the PPBM7 website and test your current setup to identify bottlenecks or performance issues. THEN, RE-TEST after making improvements to your machine to see how it does. Be aware that new codecs are coming (H.265/HEVC, etc.) which may demand more computer horsepower to edit, as they are even MORE compressed and engineered for "streaming" high quality at lower bandwidth on the internet. The new Haswell-E, with its quad-channel memory, 8-core option, and large number of PCIe gen 3 lanes, goes further in being prepared for 4K and more. Testing by Eric Bowen has shown the newer PPro versions provide MUCH better processing of 4K than older versions.
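    The RAID 0 versus RAID 5 trade-off described above can be sketched numerically. A minimal Python illustration (the drive count and 1 TB-per-drive size are assumptions matching the 4-drive array discussed in this thread):

```python
# Rough comparison of RAID 0 vs RAID 5 for n identical drives.
# Capacities use the decimal (marketing) TB size; OS-reported TiB is lower.

def raid0_capacity_tb(n_drives: int, size_tb: float) -> float:
    """RAID 0 stripes across all drives: full capacity, no redundancy."""
    return n_drives * size_tb

def raid5_capacity_tb(n_drives: int, size_tb: float) -> float:
    """RAID 5 spends one drive's worth of space on parity."""
    return (n_drives - 1) * size_tb

def survives_one_failure(level: int) -> bool:
    """RAID 0 loses everything on any single failure; RAID 5 survives one."""
    return level == 5

print(raid0_capacity_tb(4, 1.0))  # 4.0 TB usable, zero fault tolerance
print(raid5_capacity_tb(4, 1.0))  # 3.0 TB usable (~2.72 TiB), tolerates one failure
```

    This is why a RAID 0 of the same four drives is faster and larger, but only safe if the whole volume is backed up elsewhere.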

  • Coherence and iptable firewall Question

    We have a Coherence deployment on 3 Linux virtual servers running behind a firewall. The deployment is as follows:
    Server 1 - 2 WKA nodes (cache servers) and 7 storage-disabled application nodes
    Server 2 - 1 storage-disabled application node
    Server 3 - 2 WKA nodes (cache servers) and 1 storage-disabled application node
    Now the question is: do we need to open up the firewall for all the local ports? Is there a way to avoid opening up this many ports?
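    One common way to avoid opening a wide range is to pin each node's unicast listener port with the `tangosol.coherence.localport` and `tangosol.coherence.localport.adjust` system properties, then open only those fixed ports. The sketch below is a hypothetical illustration in Python, not a tested deployment script: the base port, server names, and the choice to open both TCP and UDP are assumptions, not values from your environment. It just generates the matching JVM flag and iptables rule per node:

```python
# Hypothetical sketch: pin each Coherence node's unicast port and emit the
# firewall rule that matches it. Adjust ports/protocols for your setup.

BASE_PORT = 8088  # assumed; pick any free range on your servers

def jvm_flags(port: int) -> str:
    # localport.adjust=false stops Coherence from sliding to the next free
    # port, so the firewall rule and the JVM stay in agreement.
    return (f"-Dtangosol.coherence.localport={port} "
            f"-Dtangosol.coherence.localport.adjust=false")

def iptables_rule(port: int) -> str:
    # Opening both TCP and UDP on the pinned port is a conservative default.
    return (f"iptables -A INPUT -p tcp --dport {port} -j ACCEPT ; "
            f"iptables -A INPUT -p udp --dport {port} -j ACCEPT")

def plan(nodes_per_server: dict) -> dict:
    """For each server, one (jvm_flags, iptables_rule) pair per local node.
    Ports may repeat across servers since each host has its own firewall."""
    result = {}
    for server, count in nodes_per_server.items():
        result[server] = [(jvm_flags(BASE_PORT + i), iptables_rule(BASE_PORT + i))
                          for i in range(count)]
    return result

# Node counts taken from the deployment described above.
layout = plan({"server1": 9, "server2": 1, "server3": 3})
```

    With every node's port fixed, the firewall only needs those specific ports (plus the well-known WKA cluster ports) rather than a whole ephemeral range.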

    My take on this one: if the router is working fine, don't upgrade the firmware, because whenever you upgrade a router's firmware there is an itty-bitty chance of bricking it, and since you told me it is about 3 years old, it's already out of warranty. If you do want to upgrade, you can get the firmware at linksys.com/download. If you are just using the router for basic internet access and are not changing any advanced configuration, I say stick with your current firmware, especially if you are not having problems with the router.
    "Love your job but never love your company. Because you never know when your company stops loving you"

  • What is the definition of cookies, cache, and local storage?

    Hello all, my first post but I've been Mac'd for about two years now. 
    My question is: how do cookies, cache, and local storage function, and, other than where they're stored, what makes them different?

    Thanks Sig, actually I brushed up on the definitions prior to posting the question. I was hoping for more of a dialog on the subject, as it seems there is some confusion; at least my conversation with Support would suggest so. So I'm curious what other members of the Apple community have to say.
    I've read that Local Storage is flash cookies, yet I was told not to be concerned, as I am blocking cookies just fine. So are we talking hot dogs or franks?...  
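    To make the distinction concrete: a cookie is a small key/value pair the browser sends back to the server on every matching request (with attributes like domain and expiry), the cache holds local copies of downloaded resources purely for speed, and local storage is a larger key/value store that stays on the client and is never sent over the wire. A small Python sketch of the cookie side using the standard library (the name and values are made up for illustration):

```python
from http.cookies import SimpleCookie

# A cookie as a server would set it: a name=value pair plus metadata.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"                # made-up value
cookie["session_id"]["domain"] = "example.com"
cookie["session_id"]["max-age"] = 3600         # expires after an hour

# This header text travels to the browser, which then echoes the pair back
# on every later request to example.com -- unlike local storage, which
# stays on the client and is only readable by page scripts.
header = cookie["session_id"].OutputString()
print(header)
```

    The "flash cookies" the poster mentions are a separate mechanism again (Adobe Flash's own local store), which regular cookie-blocking settings historically did not cover.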

  • Is "sp_purge_data" available only to datawarehouses or can it be used with both a normal database and Azure storage as well ?

    Is "sp_purge_data" available only to data warehouses, or can it be used with both a normal database and Azure storage as well?

    Thank you for the reply, Qiuyun; the article was really helpful!
    I do have a couple of other questions for you:
    How do we execute our SQL queries on Windows Azure tables and create horizontal partitions? (I know we have SQL Server Management Studio to execute normal queries on a SQL database; do we have a similar platform for Azure, or do we have to get a local copy of the database, execute our queries, and then publish everything back to Azure?) I am looking to partition data on one of our databases and would like to know if it can be done in Azure directly, or if we have to bring a local copy down, write the partition function and partition scheme (or create a partition key and a row key), do the needful, and publish it back to Azure.
    Also, how do I create a partition key and row key in Windows Azure?
    I am in the process of designing data archiving strategy for my team and would like to know more about the questions I just mentioned.
    Hoping to hear back from you soon.
    Thanks in advance for all the help!
    -Lalitha.
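    On the partition key / row key part of the question: in Azure Table storage every entity carries a `PartitionKey` and a `RowKey`, and the pair must be unique within the table; there is no SQL-style partition function to write, you simply choose the two values when inserting each entity, and the service groups and distributes entities by `PartitionKey`. A hedged, in-memory Python sketch of that data model (the keys and fields are made up, and no real Azure SDK is used):

```python
# Illustrative in-memory model of Azure Table storage partitioning.
# Real code would go through an Azure SDK; only the field names
# (PartitionKey, RowKey) mirror the actual service.

from collections import defaultdict

table = defaultdict(dict)  # PartitionKey -> {RowKey: entity}

def insert_entity(entity: dict) -> None:
    """Entities are grouped by PartitionKey; (PartitionKey, RowKey) is unique."""
    pk, rk = entity["PartitionKey"], entity["RowKey"]
    if rk in table[pk]:
        raise ValueError("duplicate (PartitionKey, RowKey)")
    table[pk][rk] = entity

# A common archiving pattern: partition by month, row key = record id.
insert_entity({"PartitionKey": "2014-06", "RowKey": "order-001", "Total": 25})
insert_entity({"PartitionKey": "2014-06", "RowKey": "order-002", "Total": 40})
insert_entity({"PartitionKey": "2014-07", "RowKey": "order-001", "Total": 13})

# A point lookup addresses one partition, then one row -- the fast path.
print(table["2014-06"]["order-002"]["Total"])  # 40
```

    For an archiving strategy, choosing a time-based PartitionKey like this lets whole months be queried or aged out partition by partition.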

  • Scratch disk and other setup questions

    Hello, I am a long time Premiere user but have never been involved with the setup. I have decided to start doing personal video editing on my gaming computer and have run into some questions.
    The biggest one I have is with the scratch disk setup. I've read the help doc about 10 times and googled every combination of terms I can think of but am still unsure how I should set these for my setup.
    I have Premiere Pro CS5 installed on my C:, which is a 128 GB SSD
    For storage I have 4 1TB drives in RAID 5, so 2.72 TB total storage.
    How do I want to set up my scratch disks for best possible performance?
    Question 2: This might be related to scratch disk setup, I'm not sure, but something I have always wondered. Which disk should I save my projects to in this configuration? Does it matter? Then, do I want to render my final video to that same folder or to another disk?
    Question 3: I currently have a GTX 460 graphics card and am enjoying the benefits of CUDA acceleration. I have been looking at adding a second card in SLI for gaming purposes, but have read that the Mercury Playback Engine is not compatible with SLI. Does this mean that it will still just work with one card and I will see no benefit in Premiere, or that all CUDA acceleration will cease?
    Thanks guys, I really appreciate the help.

    Alright, so I have the HDDs in the case and am setting up where to place everything, but each of my drives is different, so I am hoping someone can suggest best practice on which drives should hold what. I can rearrange any drive to suit any purpose you guys would suggest, but here is what I currently have set up:
    C: - 128 GB SATA III SSD
    D: - 150 GB 10k VelociRaptor
    E: - 250 GB Caviar Black
    F: - 4x 1 TB Spinpoint F4 7200 RPM in RAID 5
    Here is how I am currently planning on using these drives:
    C: - OS, Programs
    D: - Media, Projects
    E: - Previews, Exports
    F: - Pagefile, Media Cache
    When I am finished with a project I move the exported file onto my media server, so that is why I set the export onto the smaller drive and media cache onto the RAID.
    Does this look good or would you guys suggest a different orientation for better performance?
    And one other question, this time regarding RAM. I currently have 6 GB of DDR3, 3 x 2 GB. I have been reading all about how moving up to 24 GB would be very beneficial, and since it really isn't all that expensive, I am planning on doing so. My question is: I have watched the task manager while rendering and exporting my videos, and I've only ever seen RAM usage hit 1.8 GB. I wondered why it didn't get close to using all 6, but didn't worry about it too much. Now that I am hearing 24 would be helpful, I am wondering if I have something set up wrong that is stopping Premiere from using all my RAM?
    Thank you guys so much for the help.
