Poor performance with Archived projects on Time Capsule

I work in an educational environment where multiple students share four iMacs (all 2008 models or newer, with 2 or 4 GB of RAM in each machine). When a student is done working on a project for the day, they save it as an Archive to a Time Capsule that is shared by every computer on the network, so that they can access it on any of the computers in the building. The problem is that often, even on projects with only 4 or 6 tracks, the load time from the Time Capsule is extremely slow (10 or more minutes for an 11 MB project yesterday), and once loaded, the same projects frequently give the "too many tracks" error and will not play, making them useless. I tried temporarily moving the project to the HD of the computer in use, but this also took a long time and didn't really solve the problem.
Is it possible we're having bandwidth issues? I could see how this would slow the load time, but not playback once loaded. Our ISP plan is rather slow... probably no more than 5 Mbps, and it's shared between 6 or 7 computers at a time.
Any other thoughts or solutions? It's slowed some of our students' progress to a halt.

1 megabyte per minute is extremely slow in terms of transferring or opening a file.
It is, but assuming you have more than one machine connected to it, the bandwidth is being split between every machine that's accessing it. As well, it's not designed for what you seem to have purposed it for... it's a backup drive, not a working drive (which also means you have no backups).
Once the Archived project is loaded, shouldn't it no longer be referring wirelessly back to the TC?
I've never studied how much disk access GarageBand uses, but OS X in general does a lot. The latency alone across that network would have to be huge, and to add to the problem, I doubt there's a 7200 RPM drive in the TC.
Looking at it another way, we always recommend FireWire drives due to the bandwidth problems associated with the file sizes of audio or video data (even USB 2.0 does not have the sustained throughput that FireWire does). As a quick off-the-top-of-my-head calculation, FireWire 400 offers around 50 MB/s of throughput. 802.11n, the fastest of the current wireless specs, offers around 6.75 MB/s (theoretically up to around 15 MB/s, but I think that requires multi-channel usage). That's a HUGE difference, and when you're moving lots of data, it's very relevant.
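To put those throughput figures side by side with the load time described above, here's a quick back-of-the-envelope sketch (all numbers are the rough figures quoted in this thread, not measurements):

```python
# Rough transfer-time comparison for an 11 MB project file at the
# approximate sustained throughputs discussed above. All figures are
# ballpark numbers quoted in the thread, not measurements.
throughputs_mb_per_s = {
    "FireWire 400 (~50 MB/s)": 50.0,
    "802.11n, single channel (~6.75 MB/s)": 6.75,
    "observed Time Capsule load (~1 MB/min)": 1.0 / 60.0,
}

project_mb = 11.0
for label, rate in throughputs_mb_per_s.items():
    seconds = project_mb / rate
    print(f"{label}: {seconds:.1f} s")
```

Even the pessimistic single-channel 802.11n figure should move an 11 MB project in a couple of seconds, so a 10-minute load suggests latency, contention, or the TC's disk rather than raw wireless bandwidth alone.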

Similar Messages

  • Performance with Referenced Master on Time Capsule vs. Attached Hard Drive

    My managed master library on my MBP is getting too big and I am running out of hard drive space.  I am considering going to a "hybrid" situation, where all projects that have already been edited and stored in folders are relocated and become referenced.
    Considering my options, I am wondering how Aperture will perform with referenced masters stored on the networked Time Capsule drive, as opposed to having to plug in a USB3 or Thunderbolt hard drive every time I want to re-edit those old photos (which is not very often).
    I came across this article, which says to use a locally mounted drive:
    http://support.apple.com/kb/TS3252
    However, I am not sure exactly what that means.  When I connect to the Time Capsule, it does show up as a mounted drive with a "data" folder that I can interact with and store things on.  Not sure if this counts as "local", but from my experience it seems to move pretty quickly.
    Does anyone have any experience storing referenced masters on a Time Capsule?  My plan would then be to periodically back up that one Time Capsule folder, where all masters will be placed, to a hard drive stored off-site, and also set CrashPlan up to back up that one folder.  Thanks for any input... otherwise, I guess I will test it and see how it goes, since I rarely/never go back and re-edit images.

    It can be done, but you are "saving" the cost of an inexpensive external drive by accepting the cost of a convoluted administrative set-up.  IME, for one person or any organization small enough to _not_ have a dedicated IT person/staff, that is a false economy.  Typically, one of the admin tasks won't get done or won't get done right (you might need more space for back-ups on your Time Capsule, or you might forget exactly how you set up the TC drive to hold your Originals and put off moving more to it).
    Drives are inexpensive, and bargains.  For rarely loaded referenced Originals you don't _need_ anything faster than USB 2 (FW400 or any faster connection rec'd).  Additionally, if you have room on your Time Capsule for your referenced Originals, then you have room for their backup.  Put the referenced Originals on a new external drive, and back it up to Time Capsule.
    And then you don't have to worry about pulling your Originals through a network.  They will be locally mounted.
    My 2¢.
    --Kirby.

  • Kernel panic with wifi backup to Time Capsule

    I have had no problem with wireless backup to Time Capsule until recently when I would consistently get a kernel panic during the backup. Is it a corrupt file? How would I isolate it? Or could it be a directory problem that DiskWarrior would fix? I can mount the backup image on my desktop OK.
    I suppose I could erase my backup image on the Time Capsule, but I don't want to lose all that data. And if I did, what is the best way to archive that Time Capsule backup image?
    Thanks for any help.

    Arthur Simonsen wrote:
    I have had no problem with wireless backup to Time Capsule until recently when I would consistently get a kernel panic during the backup. Is it a corrupt file? How would I isolate it? Or could it be a directory problem that DiskWarrior would fix? I can mount the backup image on my desktop OK.
    First, repair your backups per #A5 in the Time Machine Troubleshooting User Tip, also at the top of the Time Machine forum.
    If that doesn't help, here's a bunch of kernel panic info:
    Apple Support - About kernel panic messages
    Mac OS X Kernel Panic FAQ
    The X Lab - Resolving Kernel Panics
    Apple Developer - Technical Note TN2063: Understanding and Debugging Kernel Panics
    Tutorial: Avoiding and eliminating Kernel panics
    I suppose I could erase my backup image on the Time Capsule, but I don't want to lose all that data. And if I did, what is the best way to archive that Time Capsule backup image?
    If your backups are ok, erasing the drive probably won't help. But to "archive" the backups, see this Apple article: http://support.apple.com/kb/HT1281

  • Non-Jdriver poor performance with Oracle cluster

    Hi,
    we decided to implement batch input and went from WebLogic Jdriver to Oracle Thin 9.2.0.6.
    Our system is a WebLogic 6.1 cluster and an Oracle 8.1.7 cluster.
    The problem is that with the new Oracle drivers, our actions on the webapp take twice as long as with Jdriver. We also tried OCI... same problem. We switched to a single Oracle 8.1.7 database... and it worked again with all thick or thin drivers.
    So... the new Oracle drivers with an Oracle cluster result in bad performance, but with Jdriver it works perfectly. Does anybody see a connection?
    I mean... it works with Jdriver... so it can't be the database, huh? But we really tried every JDBC possibility! In fact... we need batch input. Advice is very much appreciated =].
    Thanks for the help!!
    Message was edited by mindchild at Jan 27, 2005 10:50 AM
    Message was edited by mindchild at Jan 27, 2005 10:51 AM

    Thanks for the quick replies. I forgot to mention... we also tried 10g v10.1.0.3 from Instant Client yesterday.
    I have to agree with Joe. It was really fast on the single-machine database... but we had the same poor performance with the cluster DB. It is frustrating, especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
    OK... here is our scenario, with our app page CustomerOverview (intensive DB loading) (sorry... no real profiling, times were taken by hand) (Oracle is 8.1.7 OPS patch level 1)...
    WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
    WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
    Customers are giving us a rough time because they cannot mass-order via batch input. Any suggestions on how to solve this issue are very much appreciated.
    TIA
    Markus Schaeffer wrote:
    Hi,
    we decided to implement batch input and went from WebLogic Jdriver to Oracle Thin 9.2.0.6. Our system is a WebLogic 6.1 cluster and an Oracle 8.1.7 cluster.
    The problem is that with the new Oracle drivers, our actions on the webapp take twice as long as with Jdriver. We also tried OCI... same problem. We switched to a single Oracle 8.1.7 database... and it worked again with all thick or thin drivers.
    So... new Oracle drivers with an Oracle cluster result in bad performance, but with Jdriver it works perfectly. Does anybody see a connection? I mean... it works with Jdriver... so it can't be the database, huh? But we really tried every JDBC possibility!
    Thanks for the help!!

    Odd. The jDriver is OCI-based, so it's something else. I would try the latest 10g driver if it will work with your DBMS version. It's much faster than any 9.X thin driver.
    Joe

  • Poor Performance with Fairpoint DSL

    I started using Verizon DSL for my internet connection and had no problems. When Fairpoint Communications purchased Verizon (this is in Vermont), they took over the DSL (about May 2009). Since then, I have had very poor performance with all applications as soon as I start a browser. The performance problems occur regardless of the browser; I've tried Firefox (3.5.4), Safari (4.0.3) and Opera (10.0). I've been around and around with Fairpoint for 6 months with no resolution. I have not changed any software or hardware on my Mac during that time, except for updating the browsers and applying Apple updates to the OS, iTunes, etc. The performance problems continued right through these updates. I've run tests to check my internet speed and get 2.76 Mbps (download) and 0.58 Mbps (upload), which are within the specified limits for the DSL service. My Mac is a 2 GHz PowerPC G5 running OS X 10.4.11. It has 512 MB of DDR SDRAM. I use a Westell Model 6100 modem for the DSL, provided by Verizon.
    Some of the specific problems I see are:
    1. very long waits of more than a minute after a click on an item in the menu bar
    2. very long waits of more than two minutes after a click on an item on a browser page
    3. frequent pinwheels in response to a click on a menu item/browser page item
    4. frequent pinwheels if I just move the mouse without a click
    5. frequent messages for stopped/unresponsive scripts
    6. videos (like YouTube) stop frequently for no reason; after several minutes, I'll get a little audio but no new video; eventually after several more minutes it will get going again (both video and audio)
    7. response in non-browser applications is also very slow
    8. sometimes will get no response at all to a mouse click
    9. trying to run more than one browser at a time will bring the Mac to its knees
    10. browser pages frequently take several minutes to load
    These are just some of the problems I have.
    These problems all go away and everything runs fine as soon as I quit the browser. If I start the browser, they immediately surface again. I've tried clearing the cache, etc., with no improvement.
    What I would like to do is find a way to determine if the problem is in my Mac or with the Fairpoint service. Since I had no problems with Verizon and have made no changes to my Mac, I really suspect the problem lies with Fairpoint. Can anyone help me out? Thanks.

    1) Another thing that you could try is deleting the preference files for networking. Mac OS X will regenerate these files. You would then need to reconfigure your network settings.
    The list of files comes from Mac OS X 10.4.
    http://discussions.apple.com/message.jspa?messageID=8185915#8185915
    http://discussions.apple.com/message.jspa?messageID=10718694#10718694
    2) I think it is time to do a clean install of your system.
    3) It's either the software or an intermittent hardware problem.
    If money isn't an issue, I suggest an external hard drive for re-installing Mac OS.
    You need an external FireWire drive to boot a PowerPC Mac.
    I recommend you do a Google search on any external hard drive you are looking at.
    I bought a low-cost external drive enclosure. When I started having trouble with it, I did a Google search and found a lot of complaints about the drive enclosure. I ended up buying a new one. The second time around, I decided to buy a drive enclosure with a good history of working with Macs. The chipset seems to be the key ingredient. The Oxford line of chips seems to be good. I got the Oxford 911.
    The latest hard drive enclosures support the newer Serial ATA drives. The drive and enclosure that I list support only the older Parallel ATA.
    "Has everything" interface:
    FireWire 800/400 + USB 2.0 + eSATA ("Quad Interface")
    "Save a little money" interface:
    FireWire 400 + USB 2.0
    This web page lists both external harddrive types. You may need to scroll to the right to see both.
    http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATAFW800_FW400USB
    Here is an external hd enclosure.
    http://eshop.macsales.com/item/Other%20World%20Computing/MEFW91UAL1K/
    Here is what one contributor recommended:
    http://discussions.apple.com/message.jspa?messageID=10452917#10452917
    Folks in these Mac forums recommend LaCie, OWC or G-Tech.
    Here is a list of recommended drives:
    http://discussions.apple.com/thread.jspa?messageID=5564509#5564509
    FireWire compared to USB: you will find that FireWire 400 is faster than USB 2.0 when used for an external hard drive connection.
    http://en.wikipedia.org/wiki/Universal_Serial_Bus#USB_compared_to_FireWire
    http://www23.tomshardware.com/storageexternal.html

  • Can you add a new AirPort Express Base Station (simultaneous dual-band 802.11n) to a first-generation Time Capsule network?

    Hi
    Has anyone tried using the AirPort Express Base Station with a first-generation Time Capsule to extend their network, while still being able to use simultaneous dual-band 802.11n on their network?
    Thanks in advance

    The first generation Time Capsule was not a simultaneous dual band device.
    If you add the "new" AirPort Express to the network and configure it to "Extend a wireless network", it will extend the one single band signal that it is receiving from the Time Capsule.
    In other words, the AirPort Express will not extend simultaneous dual bands in this type of setup...since it can only "extend" what it receives. You would need a simultaneous dual band Time Capsule if you want that type of functionality.

  • Will Mountain Lion work with old models of Time Capsule?

    I was using an online backup service, but it slowed down my machine, so I want to switch to an external drive backup. I have upgraded to Mountain Lion and I am thinking of purchasing an older model of Time Capsule for my backups. My question is: will Mountain Lion work with older models of Time Capsule? Thanks so much.

    Yes. I have a first generation Time Capsule and it works fine.
    Just make sure you buy at least 3 times the GB you are trying to back up. More is better.
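    As a quick sanity check on that rule of thumb, here's a toy calculation (the example data size is made up; the 3x multiplier is just the guideline above):

```python
# Toy sizing check for the "buy at least 3x the data you back up"
# rule of thumb. The 250 GB figure is a made-up example.
def min_capsule_size_gb(data_gb, multiplier=3):
    """Suggested minimum Time Capsule capacity in GB."""
    return data_gb * multiplier

print(min_capsule_size_gb(250))  # 750
```

    So backing up 250 GB of data would suggest at least a 750 GB Time Capsule; more headroom lets Time Machine keep history for longer before it starts pruning old backups.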

  • I have replaced my airport extreme with a new airport time capsule and want to know how I can set up my extreme to extend/strengthen my Wi-Fi?

    I have replaced my airport extreme with a new airport time capsule and want to know how I can set up my extreme to extend/strengthen my Wi-Fi?

    Start with a "hard reset" (see below) then use AirPort Utility to configure it.
    When asked what you want to do with it, indicate that you want it to add to an existing network, similar to the following screenshot.
    Follow the prompts that will eventually lead you to the words "wirelessly extend..." as shown.
    To "hard reset" an AirPort Base Station: make sure it's powered up, then press and hold its tiny reset button and keep it depressed for five to ten seconds, long enough for its LED to flash amber rapidly. Release the reset button. Then, the LED will glow amber steadily for about a minute. Then, it will flash amber, slowly, about once every second or two, waiting for you to configure it with AirPort Utility.

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as the backend and WebI and Xcelsius as the frontend. As part of this we are experiencing very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we see OK performance during selection of data and traditional WebI filtering; however, when using the BW hierarchy for navigation within WebI, response times increase significantly.
    The general solution setup is as follows:
    1) Business Content version of the personnel administration InfoProvider, 0PA_C01. The InfoProvider contains 30,000 records.
    2) MultiProvider to act as a semantic Data Mart layer in BW.
    3) BEx query to act as the Data Mart query and metadata exchange for BOE. All key figure restrictions and calculations are done in this Data Mart query.
    4) Traditional BO OLAP universe mapped 1:1 to the BEx Data Mart query. No calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have created several case scenarios with different dataset sizes, various filter criteria and modeling techniques in BW.
    Furthermore, we have tried to apply various traditional BW performance tuning techniques, including aggregates, physical partitioning and pre-calculation, all without any luck (pre-calculation doesn't seem to work at all, as WebI apparently isn't using the BW OLAP cache).
    In general, the best result we can get is with a completely stripped WebI report without any variables etc. and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get each navigational step (when using drill-down on the Organizational Unit hierarchy, 0ORGUNIT) to perform faster than 15-20 seconds per step.
    That is, each navigational step takes 15-20 seconds with only 1,000 records in the WebI cache when using drill-down on the org unit hierarchy!
    Running the same BEx query from BEx Analyzer with a full dataset of 30,000 records at the lowest level of detail gives response times of 1-2 seconds per navigational step, ruling out a BW modeling issue.
    As our productive scenario obviously involves a far larger dataset, as well as separate data from CATS and PT InfoProviders, we are very worried about whether we will ever be able to use hierarchy drill-down from WebI.
    The question is therefore whether there are any known performance issues related to the use of BW hierarchy drill-down from WebI, and if so, whether there are any ways to get around them.
    As an alternative, we are currently considering changing our reporting strategy by creating several more highly aggregated reports to avoid hierarchy navigation altogether. However, we still need to support a specific division and their need to navigate the WebI dataset without limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions:
    Suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    Tick "use structure elements" in RSRT: Done.
    Enable query stripping in WebI: Done.
    Upgrade your BW to SP09: Does SP09 include improvements related to this point?
    Use more runtime query filters: Not possible. Very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes / Permanent Cache BLOB)
    Uncheck preliminary hierarchy presentation in the query; only selected.
    Check "Use query drill" in WebI properties.
    Sorry for the mixed message, but while I was answering I tried what you suggested about suppressing unassigned nodes, and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Backup a dual boot iMac with OSX + Vista to Time capsule and USB hard drive

    I have just bought a Time Capsule which is now working as my WiFi network base station and backs up an iMac and a MacBook. The iMac has both OS X and Vista (using Boot Camp) and I was using a USB hard drive with 2 partitions (HFS+ for the Mac; NTFS for Vista) to back up the iMac. Rather than leave this attached to the iMac just to back up the Vista partition, I would like to move it to the Time Capsule's USB port and use it as NAS. The idea is that it becomes a NAS with one partition used to back up the Windows data when I run the iMac with Vista, and the other partition to archive the Time Machine backups from both the iMac and MacBook. I tried to connect it as it was but got an error, and reading the forum it seems that it won't work unless both partitions are HFS+ or FAT. If I reformat the NTFS partition to HFS+, will I be able to use the built-in backup function of Vista to back up the Vista partition to the USB hard drive? I read somewhere else that Time Capsule takes care of the formatting for an external NAS drive and Windows will be able to read and write to it even though it's not NTFS. Any ideas or comments appreciated.

    This is crude, but if you need a workaround, just plug the USB drive directly into the iMac, copy the files to it, then plug it into the TC.
    You also do not need to use the TC as an intermediary between your iMac and MacBook; just turn on sharing on the computers so you can directly copy files from one to the other. Macs have a Public directory preconfigured for each user account.
    As far as mounting the TC:
    In the Finder use Go > Connect to Server... and type:
    afp://TCname or afp://TCipaddress
    where TCname is the actual name of the TC. I strongly recommend you follow SMB network naming rules; i.e., if your TC has a name like
    Fred Blog's Time Capsule 2445566
    it is too long, it has spaces, and it has non-alphanumeric characters.
    Shorten it to FredTC.
    No spaces, and no characters that are not alphanumeric.
    TCipaddress is simply the standard IP address.
    You can also use cifs://TCname, which forces the connection back to SMB1 rather than SMB2, which as usual is broken.
    But I would definitely use AFP if possible. I cannot understand the decision to move to SMB as standard.
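    A quick way to sanity-check a proposed Time Capsule name against those naming rules (the 15-character cap reflects the traditional NetBIOS name limit; `smb_safe_name` is a hypothetical helper for illustration, not an Apple API):

```python
# Check a proposed Time Capsule name against conservative SMB naming
# rules: ASCII letters and digits only, no spaces, and at most 15
# characters (the traditional NetBIOS name limit). This helper is
# illustrative only, not an Apple API.
def smb_safe_name(name: str) -> bool:
    return name.isascii() and name.isalnum() and len(name) <= 15

print(smb_safe_name("FredTC"))                            # True
print(smb_safe_name("Fred Blog's Time Capsule 2445566"))  # False
```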

  • Problems with Full restore using Time Capsule backup???

    For some reason my MacBook Pro recently crapped out completely, and I am trying to do a complete restore of the OS via a full backup I made with using Time Machine onto my Time Capsule disk, but I am running into some problems doing so and I was hoping someone might have some insight as to how I should deal with things at this point...
    Here's where things get messy...
    - I insert my Mac OS X 10.5 Installation DVD into my Macbook and boot from the drive, and I immediately choose the menu option to 'Restore from Time Capsule Backup'...
    - After logging into my Time Capsule and then choosing the proper .sparseimage file to restore from, I click 'continue' and it brings me to the screen where I choose where I want to restore to (which in my case, is the one and only internal HD inside my Macbook Pro)
    - The bottom of this window tells me that it's "Calculating space required to restore data...", but it simply hangs at that point with a spinning wheel indicating it's still calculating; it never actually presents me with the space required, nor does it allow me to click Continue to perform the restore. I've allowed it over an hour to calculate the space required and it still just spins its status wheel...
    This is where I am stuck, and I have no idea how to get it to proceed to the next step so I can perform the restore.
    Any suggestions would be greatly appreciated, as I will have no choice but to do a complete reinstall (including all apps and data) if I can't get the backup restored... Thanks!

    I had the same problem when installing a new HDD in my Macbook. The solution I found was to reinstall Leopard onto the new HDD from the install CD, then reboot off the install CD once I had done this. I was then able to choose the 'Restore from Time Capsule' option and get my HDD to show up in the window.
    A word of warning: once you have reinstalled Leopard and it starts asking you for all your info, as if setting up your computer for the first time, there is an option at the end of that process to restore your data from a Time Capsule backup. While this did restore all my files and so forth, it did not update any of the 'Apple' apps, i.e. iTunes, iPhoto, QuickTime, or the operating system itself, suggesting a 500 MB download via Software Update to update the system. It was faster for me to reboot off the install CD and restore via the process outlined above. That way you get all your system updates included.
    Not sure if this is the 'approved' method, but it worked for me! The restore process took about 75 minutes for about 100 GB of data from my external backup drive connected to my MacBook via a FireWire cable.
    Hope this helps!
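    For reference, the restore speed quoted above works out to roughly 23 MB/s, which is in a plausible range for a sustained FireWire 400 transfer (both inputs are the approximate figures from the post):

```python
# Effective throughput for the restore described above:
# ~100 GB in ~75 minutes over FireWire (rough figures from the post).
gb_restored = 100
minutes = 75
mb_per_s = gb_restored * 1024 / (minutes * 60)
print(f"{mb_per_s:.1f} MB/s")  # 22.8 MB/s
```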

  • Poor performance with Oracle Spatial when spatial query invoked remotely

    Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
    Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
    Thank you in advance for any thoughts you might share.

    OK, that's clearer.
    Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and running it in SQL*Plus with:
    set autotrace on
    set timing on
    SELECT ...
    If the plans and performance are the same, then it may be something inside the procedure itself.
    Have you profiled the procedure? Here is an example of how to do it:
    Prompt Firstly, create PL/SQL profiler table
    @$ORACLE_HOME/rdbms/admin/proftab.sql
    Prompt Secondly, use the profiler to gather stats on execution characteristics
    DECLARE
      l_run_num PLS_INTEGER := 1;
      l_max_num PLS_INTEGER := 1;
      v_geom    mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
    BEGIN
      dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE'));  -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
      v_geom := Parallel(v_geom,10,0.05,1);  -- Put your procedure call here
      dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
    END;
    /
    SHOW ERRORS
    Prompt Finally, report activity
    COLUMN runid FORMAT 99999
    COLUMN run_comment FORMAT A40
    SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
      FROM plsql_profiler_runs
      ORDER BY runid;
    COLUMN runid       FORMAT 99999
    COLUMN unit_number FORMAT 99999
    COLUMN unit_type   FORMAT A20
    COLUMN unit_owner  FORMAT A20
    COLUMN text        FORMAT A100
    compute sum label 'Total_Time' of total_time on runid
    break on runid skip 1
    set linesize 200
    SELECT u.runid || ',' ||
           u.unit_name,
           d.line#,
           d.total_occur,
           d.total_time,
           text
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d ON u.runid = d.runid
                                         AND
                                         u.unit_number = d.unit_number
           JOIN all_source als ON ( als.owner = 'CODESYS'
                                   AND als.type = u.unit_type
                                   AND als.name = u.unit_name
                                AND als.line = d.line# )
    WHERE  u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
    ORDER BY d.total_time desc;
    Run the profiler in both environments and see if you can see where the slowdown exists.
    regards
    Simon

  • Poor Performance with Converged Fabrics

    Hi Guys,
    I'm having some serious performance issues with converged fabrics in my Windows Server 2012 R2 lab. I'm planning on creating a Hyper-V cluster with 3 nodes. I've built the first node; building and installing/configuring the OS and Hyper-V was pretty straightforward.
    My issue is with converged fabrics: I'm getting very slow performance when managing the OS, and Remote Desktop connections take very long and eventually time out. The server is unable to find a writable domain controller due to the slow performance.
    If I remove the converged fabric everything is awesome and works as expected. Please note that the cluster hasn't even been built yet and I'm already experiencing this poor performance.
    Here is my server configuration:
    OS: Windows Server 2012 R2
    RAM: 64GB
    Processor: Intel I7 Gen 3
    NICS: 2 X Intel I350-T2 Adapters, supporting SRIOV/VMQ
    Updates: All the latest updates applied
    Storage:
    Windows Server 2012 R2 Storage Spaces
    Synology DS1813+
    Updates: All the latest updates applied
    Below is the script I've written to automate the entire process.
    # Script: Configure Hyper-V
    # Version: 1.0.2
    # Description: Configures the Hyper-V Virtual Switch and
    #              Creates a Converged Fabric
    # Version 1.0.0: Initial Script
    # Version 1.0.1: Added the creation of SrIOV based VM Switches
    # Version 1.0.2: Added parameters to give the NLB a name, as well as the Hyper-V Switch
    param
    (
        [Parameter(Mandatory=$true)]
        [string]$TeamAdapterName,
        [Parameter(Mandatory=$true)]
        [string]$SwitchName,
        [Parameter(Mandatory=$true)]
        [bool]$SrIOV
    )
    #Variables
    $CurrentDate = Get-Date -Format d
    $LogPath = "C:\CreateConvergedNetworkLog.txt"
    $ManagmentOSIPv4 = "10.150.250.5"
    $ManagmentOS2IPv4 = "10.250.251.5"
    #$CommanGatewayIPv4 = "10.10.11.254"
    $ManagmentDNS1 = "10.150.250.1"
    $ManagmentDNS2 = "10.150.250.3"
    $ManagmentDNS3 = "10.250.251.1"
    $ManagmentDNS4 = "10.250.251.3"
    $ClusterIPv4 = "10.253.251.1"
    $LiveMigrationIPv4 = "10.253.250.1"
    $CSVIPv4 = "10.100.250.1"
    $CSV2IPv4 = "10.250.100.1"
    #Set Execution Policy
    Write-Host "Setting policy settings..."
    Set-ExecutionPolicy Unrestricted
    try
    {
        # Get existing network adapters that are online
        if ($SrIOV)
        {
            $adapters = Get-NetAdapterSriov | ? Status -eq Up | % Name # Get SR-IOV-capable adapters
            Enable-NetAdapterSriov $adapters # Enable SR-IOV on the adapters
        }
        else
        {
            # Non-SR-IOV path: use all online adapters
            $adapters = Get-NetAdapter | ? Status -eq Up | % Name
            #$adapters = Get-NetAdapterSriov | % Name
        }
        # Create NIC team
        if ($adapters.length -gt 1)
        {
            Write-Host "$CurrentDate --> Creating NIC team $TeamAdapterName..."
            Write-Output "$CurrentDate --> Creating NIC team $TeamAdapterName..." | Add-Content $LogPath
            New-NetLbfoTeam -Name $TeamAdapterName -TeamMembers $adapters -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false | Add-Content $LogPath
        }
        else
        {
            Write-Host "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming"
            "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming" | Add-Content $LogPath
            throw "$CurrentDate --> Check to ensure that at least 2 NICs are available for teaming"
        }
        # Wait 60 seconds for the team to come online
        Start-Sleep -s 60
        if ((Get-NetLbfoTeam).Status -ne "Up")
        {
            Write-Host "$CurrentDate --> The $TeamAdapterName NIC team is not online. Troubleshooting required"
            "$CurrentDate --> The $TeamAdapterName NIC team is not online. Troubleshooting required" | Add-Content $LogPath
            throw "$CurrentDate --> The $TeamAdapterName NIC team is not online. Troubleshooting required"
        }
        # Create a new virtual switch
        if ($SrIOV) #SR-IOV-based VM switch
        {
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName with SR-IOV..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName with SR-IOV..." | Add-Content $LogPath
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -EnableIov $true -AllowManagementOS $false
            $CreatedSwitch = $true
        }
        else #Standard VM switch
        {
            Write-Host "$CurrentDate --> Configuring converged fabric $SwitchName..."
            Write-Output "$CurrentDate --> Configuring converged fabric $SwitchName..." | Add-Content $LogPath
            New-VMSwitch $SwitchName -MinimumBandwidthMode Weight -NetAdapterName $TeamAdapterName -AllowManagementOS $false
            $CreatedSwitch = $true
        }
        if ($CreatedSwitch)
        {
            #Set default QoS
            Write-Host "$CurrentDate --> Setting default QoS policy on $SwitchName..."
            Write-Output "$CurrentDate --> Setting default QoS policy on $SwitchName..." | Add-Content $LogPath
            Set-VMSwitch $SwitchName -DefaultFlowMinimumBandwidthWeight 20
            #Creating Management OS adapter (SYD-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "SYD-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SYD-MGMT" -Access -VlanId 0
            #Creating Management OS adapter (MEL-MGMT)
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Management OS"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Management OS" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "MEL-MGMT" -MinimumBandwidthWeight 30 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MEL-MGMT" -Access -VlanId 0
            #Creating Cluster adapter
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for Cluster"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for Cluster" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-Cluster" -MinimumBandwidthWeight 20 -VmqWeight 80
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-Cluster" -Access -VlanId 0
            #Creating LiveMigration adapter
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for LiveMigration" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "HV-MIG" -MinimumBandwidthWeight 40 -VmqWeight 90
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-MIG" -Access -VlanId 0
            #Creating iSCSI-A adapter
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-A" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-A" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 0
            #Creating iSCSI-B adapter
            Write-Host "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B"
            Write-Output "$CurrentDate --> Creating and configuring virtual NIC for iSCSI-B" | Add-Content $LogPath
            Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -SwitchName $SwitchName
            Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-B" -MinimumBandwidthWeight 40 -VmqWeight 100
            Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 0
            Write-Host "Waiting 40 seconds for virtual devices to initialise"
            Start-Sleep -Seconds 40
            #Configure the IPs for the virtual adapters
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            #New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24 -DefaultGateway $CommanGatewayIPv4
            New-NetIPAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -IPAddress $ManagmentOSIPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (SYD-MGMT)" -ServerAddresses ($ManagmentDNS1, $ManagmentDNS2)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Management OS virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -IPAddress $ManagmentOS2IPv4 -PrefixLength 24
            Set-DnsClientServerAddress -InterfaceAlias "vEthernet (MEL-MGMT)" -ServerAddresses ($ManagmentDNS3, $ManagmentDNS4)
            Write-Host "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the Cluster virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-Cluster)" -IPAddress $ClusterIPv4 -PrefixLength 24
            Write-Host "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the LiveMigration virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (HV-MIG)" -IPAddress $LiveMigrationIPv4 -PrefixLength 24
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-A virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-A)" -IPAddress $CSVIPv4 -PrefixLength 24
            Write-Host "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC"
            Write-Output "$CurrentDate --> Configuring IPv4 address for the iSCSI-B virtual NIC" | Add-Content $LogPath
            New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI-B)" -IPAddress $CSV2IPv4 -PrefixLength 24
            Write-Host "$CurrentDate --> Hyper-V Configuration is Complete"
            Write-Output "$CurrentDate --> Hyper-V Configuration is Complete" | Add-Content $LogPath
        }
    }
    catch [Exception]
    {
        # Log the error, then rethrow it
        "$_" | Add-Content $LogPath
        throw
    }
    I would really like to know why I'm getting such poor performance. Any help on this would be most appreciated.

    I didn't parse the entire script, but a few things stand out.
    SR-IOV and teaming don't mix. The purpose of SR-IOV is to go straight from the virtual machine into the physical adapter and back, completely bypassing the entire Hyper-V virtual switch and everything that goes with it. Team or SR-IOV.
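    For illustration, a minimal sketch of the SR-IOV route without teaming: bind the virtual switch directly to a physical adapter. The adapter selection and the switch name ("SriovSwitch") are placeholders, not names from the original script.

    ```powershell
    # Sketch: SR-IOV only works when the VM switch sits on a physical NIC,
    # so skip LBFO teaming entirely on the NIC that will carry SR-IOV traffic.
    $nic = Get-NetAdapterSriov | Where-Object Status -eq 'Up' | Select-Object -First 1
    Enable-NetAdapterSriov -Name $nic.Name

    # -EnableIov has no effect on a team interface such as "ConvergedNetTeam";
    # it must target the physical adapter itself.
    New-VMSwitch -Name 'SriovSwitch' -NetAdapterName $nic.Name -EnableIov $true -AllowManagementOS $false

    # Confirm IOV actually came up on the new switch.
    (Get-VMSwitch -Name 'SriovSwitch').IovEnabled
    ```

    If you need converged networking as well, a common compromise is one teamed switch for the management/cluster/migration vNICs and a separate SR-IOV switch on a dedicated physical NIC.
    
    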
    You're adding DNS servers to adapters that don't need them. Inbound traffic is going to be confused, to say the least. The only adapter that should have DNS addresses is the management adapter. For all the others, you should run Set-DnsClient -RegisterThisConnectionsAddress $false.
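    As a sketch of that advice, assuming the non-management vNIC names from the script above (HV-Cluster, HV-MIG, iSCSI-A, iSCSI-B):

    ```powershell
    # Stop the non-management vNICs from registering their addresses in DNS,
    # so only the management adapter's address is published.
    $nonMgmtNics = 'HV-Cluster', 'HV-MIG', 'iSCSI-A', 'iSCSI-B'
    foreach ($name in $nonMgmtNics) {
        Set-DnsClient -InterfaceAlias "vEthernet ($name)" -RegisterThisConnectionsAddress $false
    }
    ```
    
    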
    I don't know that I'm reading your script correctly, but it appears you have multiple adapters set up for management. That won't end well.
    It also looks like you have QoS weights that total over 100. That also won't end well.
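    One way to sanity-check the weights is to add the switch's default flow weight to every management-OS vNIC weight on that switch; the total should not exceed 100. A rough sketch, assuming the switch name from the script:

    ```powershell
    # Sum the default flow weight plus each vNIC's minimum bandwidth weight.
    # In the posted script this comes to 20 + 30 + 30 + 20 + 40 + 40 + 40 = 220,
    # well over the 100 that weight-mode QoS expects.
    $switch  = Get-VMSwitch -Name 'ConvergedNetSwitch'   # switch name is an assumption
    $default = $switch.DefaultFlowMinimumBandwidthWeight
    $vnics   = Get-VMNetworkAdapter -ManagementOS | Where-Object SwitchName -eq $switch.Name
    $total   = $default + ($vnics.BandwidthSetting.MinimumBandwidthWeight | Measure-Object -Sum).Sum
    "Total weight on $($switch.Name): $total (default flow: $default)"
    ```
    
    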
    I don't know that these explain the poor performance you're describing, though. It could just be that you're the victim of network adapters/drivers with poor VMQ support. Bad VMQ is worse than no VMQ. But VMQ + teaming + SR-IOV sounds like a recipe for heartache to me, so I'd start with that.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Setting up a Time Capsule as an archive for a Time Capsule?

    Hello,
    I have a Time Capsule that I use as an external drive where I save and share photo/video files across multiple Macs. It's great for moving files off of the Macs since we have thousands of photos/files and they are not small files. I'm on a MacBook Pro running OS X Mountain Lion 10.8.5.
    Since the primary Time Capsule (1.9 TB) has data I don't want to lose, I want to archive the Time Capsule content to another Time Capsule (2TB).
    I have AirPort Utility Version 6.3.1. I've connected the archive Time Capsule to the primary Time Capsule via a USB connector, which is per the AirPort instructions.
    In AirPort Utility, I go into the primary Time Capsule and select Edit. I select Disks within the Editor. I then click on the Archive Disks… button. "Destination", however, says "No AirPort disks available." So, I'm not able to select the secondary Time Capsule as the archive drive. I've read some pages about disk formatting and the disk needing to be unencrypted, but I connected the secondary Time Capsule to my Mac and did not find a way to identify or change the disk format or encryption option.
    What do I need to do to get the second Time Capsule set up to be the archive drive for the primary Time Capsule that has the photos and other files?
    Thanks for your assistance.

    I have AirPort Utility Version 6.3.1. I've connected the archive Time Capsule to the primary Time Capsule via a USB connector, which is per the AirPort instructions.
    Sorry but no.. the AirPort instructions are for a USB drive.. the TC is not now, nor has it ever been, accessible over USB.. it is a network drive.. you can plug a plain USB drive into it.. nothing more.
    What do I need to do to get the second Time Capsule set up to be the archive drive for the primary Time Capsule that has the photos and other files?
    You cannot change anything in the drive of a TC.. it is a network drive.. it is fully controlled by the firmware of the TC.
    The only way to back up the TC is to use software on the computer.. e.g. Carbon Copy Cloner will do it.
    Have everything plugged in by Ethernet.. mount both TCs and you can make a backup from one TC to the other.
    Archive will not work. That is only for a USB drive plugged directly into the TC.

  • I BEG of you:  How do I replace an Airport Extreme base station with a 4th-generation Time Capsule as the base station?

    I'll be damned if I can figure this out.  I have a new late-2012 iMac (OS 10.8.3) connecting to my LAN via Airport wi-fi.  (It will be Ethernet-connected, as soon as a hole is drilled into my new work table, but right now the Ethernet cable doesn't reach.)  My wife uses a 2011 MacBook Air.
    The LAN consists of a flying-saucer-looking Airport Extreme connected by Ethernet to the modem, plus 2 old Airport Expresses.  I have an Ethernet switch too, through which I've got a TV connected and a printer connected. 
    I bought a 4th-generation Time Capsule about a year ago, but couldn't figure out for the life of me how to replace the Airport Extreme with it.  I became so incredibly frustrated that I hid it away in a closet until now.  I  just tried again, with the same results.  I've got both Airport Utility 5.6 (to use with the old Extreme and Expresses) and Airport Utility 6.2.  I was using version 6.2 to set up the Time Capsule... or to try to.  Once again, I cannot for the life of me figure out how to do it!  I tried so many different Ethernet wiring connections that I now cannot remember them well enough to describe them here.
    I distinctly remember that, when I tried this a year ago with my wife's MacBook Air and Airport Utility 6.2, I was presented at one point with the question whether I wanted to replace an existing base station with the Time Capsule I was trying to set up, but this time I never was presented with that option.
    Is there anyone who can, in essence, hold my hand and walk me through this process step-by-step?  At this point I even need to be instructed what to wire to what.  I feel like a total incompetent.
    My MOST sincere thanks in advance for help!
    (I hope Time Capsule is the correct forum for this question.  I considered the Airport forum as well, but I don't want to cross-post.)
    (And, just to complicate matters, after I get this done-- assuming I ever do-- I want to use the old Airport Extreme on the outside of the house to extend the range of my LAN to the back yard... but that's for another day entirely.)

    apikoros wrote:
    The Utility transferred all of the AE's settings, so I still have to change the password, which leaves me with only 2 other questions, I think:
    1)  I assume it's just a matter of using the Utility, entering a stronger password and checking for it to be remembered in Keychain Access.  But do I have to  change the password for each individual unit-- the TC, the Extreme and both Expresses-- or will changing it just for the TC alone work for the entire network?
    Resetting the password you will need to do for each device.. the utility cannot even see those old units.
    So you will have to do it for each one. Think it through, because as you change passwords the others will lose connection.. so start from the Expresses, which are wirelessly extending.. change those first.. and work back up the chain. As each one changes it will drop off the network, until you reach the Extreme and change that. Then you might need to reboot the whole network to get everything talking again. If something goes wrong, just pluck that one out of the mix, plug in Ethernet, reset and redo the setup. That is my preferred method anyway.. do everything in isolation, one by one, by Ethernet, and then nothing goes wrong.
    2)  Who's the treasonous SOB who spilled the beans to you about the ICBM in my back yard?!?
    N.Korean hackers.
    [Edit] Whoops-- one more question:  I want to partition the TC's disk, but Disk Utility doesn't see it.  What do I need to do?
    You cannot partition a network disk, and Apple provided no tools for it in the TC itself. You can pull the disk out and partition it, but that voids your warranty (although done with care, who is to know).
    Look at Q3 here.
    http://pondini.org/TM/Time_Capsule.html
    Mixing TM and data on the TC is worth planning carefully. They don't necessarily sit happily together.
