Poor *First Launch* Performance with App-V 5

I'm having an issue where the first time I launch a rather large App-V 5 application, performance is very slow. Subsequent launches, however, are near native. We reboot our Citrix servers daily, so this is causing problems for us.
I have noticed that the applications that launch more slowly tend to be the larger applications we sequence. Performance is slow regardless of whether the package is fully mounted or streamed.
I've detailed it here with some animated gifs showing the performance slowness.
http://trentent.blogspot.ca/2014/11/appv-5-first-launch-application.html
The short of it:
When App-V is "launching" the application for the first time after a net stop/net start of the client service, AppVClient.exe consumes memory and CPU for the entire 196 seconds of the launch, peaking at nearly 600MB RAM and 50% CPU (though most of the time it sits at around 25% CPU).
The App-V debug logs do not give much information about what AppVClient.exe is doing during this time. Most of the logs show the application "start" as its components are set up, and then the point at which the application has launched. Almost all of the logs cover only the first second or two of the launch and the last second or two before the GUI appears.
When the application is launched after a reboot, complete with Add-AppvClientPackage/Publish-AppvClientPackage (global), the first launch takes ~40s.
Subsequent launches of the application are consistently between 11-15s (near native).
Does anyone have any thoughts about what I can do to make it so the application will launch at (near) native speed for the first launch?  I am fresh out of ideas.

Let me share my test results with the test package you provided:
Test Case 1
Default Package (without any modifications):
- Add-Package: ~23.5s
- Publish-Package: ~2.0s
- First start registry.exe: Nearly instant, but Registry staging takes 35s with high CPU usage (>=30%)
- Second start registry.exe: Instant
Test Case 2
Removed all ActiveX extensions, COM object extensions, and file type associations in AppxManifest.xml & renamed HKLM\Classes to HKLM\Classes_TEST
(I developed a tool that allows me to modify the .appv package directly, including all files in it):
- Add-Package: ~3s
- Publish-Package: ~1.6s
- First start registry.exe: Instant, but registry staging takes 33s with high CPU usage (30%)
- Second start registry.exe: Instant
Test Case 3
Removed all COM object extensions in AppxManifest.xml:
- Add-Package: ~2.6s
- Publish-Package: ~1s
- First start registry.exe: Not completely instant, takes 6s to start and Registry staging takes 35s with high CPU usage (>=30%)
- Second start registry.exe: Instant
These are my findings:
1) The long time needed to add the package is due to the number of COM object extensions. The manifest file is 71MB, which is really big. When I remove the COM objects from the manifest, the file size drops to 209KB.
2) The high CPU usage is caused by registry staging. The registry file is ~58MB. On first start, registry staging begins: the .dat hive is loaded and its content is copied to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Packages\<Package-GUID>\Versions\<Version-GUID>\REGISTRY.
In the same tree as REGISTRY, you will find a key named "RegistryStagingFinished" once staging has finished. I observed that the high CPU load drops as soon as registry staging completes (i.e., when RegistryStagingFinished appears). So staging took ~35s on my test system for a 58MB registry hive.
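For reference, staging completion can be watched from PowerShell. This is a minimal sketch, assuming an App-V 5 client; the package name "BigApp" is a placeholder:

```powershell
# Poll for the RegistryStagingFinished marker key that the App-V client
# writes once registry staging for the package version completes.
$pkg  = Get-AppvClientPackage -Name "BigApp"
$base = "HKLM:\SOFTWARE\Microsoft\AppV\Client\Packages\" +
        "$($pkg.PackageId)\Versions\$($pkg.VersionId)"

while (-not (Test-Path "$base\RegistryStagingFinished")) {
    Start-Sleep -Seconds 2
}
Write-Host "Registry staging finished."
```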
And now the interesting part: if you close the virtualized app / the app you start in the VE (in your case registry.exe) before registry staging is finished (i.e., before RegistryStagingFinished is set), the client will restart registry staging from the beginning the next time you start the app. It does not appear to resume from the point where staging stopped. This might be something that could be improved in a future release, so that it continues registry staging from where it previously stopped.
What this means for you:
If you need all the integration points (COM, ActiveX, etc.), you won't be able to reduce the add-package time. If you can remove them from the package, the add-package time can be reduced to <5s.
Your registry is also quite big (58MB). Loading it into the registry takes time; there is no way around that. However, when deploying the package globally, you could start a process within the VE of your application right after publishing it, so that registry staging runs once and is already finished when the first user starts the app.
Given the large registry and the number of integration points in your package, I don't believe there are many options to improve the performance you're seeing other than reducing both.
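The global-publish pre-warm workaround described above could be scripted roughly like this in a startup script. This is a hedged sketch: the package path and the use of cmd.exe as the throw-away process are placeholders/assumptions, not your actual deployment:

```powershell
# Add and globally publish the package at boot, then start a throw-away
# process inside the virtual environment so registry staging runs once
# before the first user launches the app.
$pkg = Add-AppvClientPackage -Path "\\server\share\BigApp.appv"
Publish-AppvClientPackage -PackageId $pkg.PackageId `
                          -VersionId $pkg.VersionId -Global

# Any executable started in the package's VE triggers staging;
# "cmd.exe /c exit" returns immediately once launched.
Start-AppvVirtualProcess -AppvClientObject $pkg -FilePath cmd.exe `
                         -ArgumentList "/c exit"
```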
Regards, Michael - http://blog.notmyfault.ch -

Similar Messages

  • Poor play/pause performance with iPhone mic button clicker

    I am experiencing poor performance with my headset button on my 3G iPhone. The pause / play / skip track functionality works very intermittently for me. I have tried all different kinds of click speeds with no improvement.
    It seems to be worst when the iPhone is locked and I click once to resume playing.
    I am running 2.1 software and doing a hard restart does not help.
    Faulty headphones?

    The iPhone clicker wears out really quickly...
    You might want to invest in some non-Apple headphones, or perhaps just a new microphone and clicker, like the Shure one that's just come out...
    http://www.iphoneuserguide.com/apple/2008/09/19/iphone3g/convert-your-headphones-to-a-mobile-headset-with-the-shure-music-phone-adapter-for-iphone/

  • CS4 poor dialog window performance with huge screen dual 30 inch

    Hi, I am running dual 30 inch monitors each one at 2560x1600 resolution, on windows xp sp3 on an nvidia quadro FX 3700M (dell precision M6400 laptop)
    If I use CS4 from the single laptop screen, everything is fine. However, whenever I'm on the huge dual-screen desktop, any dialog box activity has very poor performance. For example, I select File > New and the dialog window takes two to three seconds to draw on the screen... it might not seem like a big problem, but it is very annoying since I am used to the snappy response of the user interface from years of PS use. It really affects my workflow since I work with many files at the same time.
    Also, the bigger the dialog window the slower it takes to appear. If I select File > Save for Web and Devices, that dialog window will take up to five seconds to appear on the screen.
    The problem seems to get worse the longer the OS has been working, ie. if the PC is freshly booted the problem seems a bit lighter. So it might be related to video card memory or something.
    Here's what I have tried so far to solve this:
    Since what I'm using is a laptop, the driver choices are limited, however I've been able to force install the 181.20 version of the nvidia driver, but it made no difference. All the official driver versions (176 series) from Dell also have the problem.
    The problem doesn't seem related to Photoshop's use of the GPU, as it remains with GPU acceleration disabled.
    This problem is also present in PS CS3 as well as CS2, CS1, and version 7, which are all I am able to test.
    I was able to reproduce this problem on a desktop computer with the same setup on an nvidia quadro FX 5500 card. The dialog box window performance is really terrible.
    I did not notice this problem in any other application that spans the two screens, even high end 3d ones.
    Ok well it seems from my testing that Photoshop is simply bogged down by having such a huge desktop area to work with. It is really disappointing since PS was my only reason to spend so much money on this dual 30 inch screen setup.
    Has anyone else noticed this? Are there any other suggestions? Any adobe support on this?
    Thank you

    I have tried with both open gl enabled and disabled (edit > preferences > performance)
    The problem is present even in Photoshop version 7; I think that one didn't even notice the presence of a 3D accelerator chip.
    Both video cards tested have 1GB of video ram.
    Thanks.

  • Help!  Poor network (LAN) performance with T2000, Solaris 10, GB

    Hi-
    I'm having a problem that I'm hoping someone can help me with.
    We've recently purchased a few of the T2000 boxes, with the ipge (GB) cards in them.
    The problem we're having is that we can't seem to get more than 500 mbit/s performance out of them, at least the way we're measuring them.
    So, I have 2 questions... (a) how can we validate that we are in fact getting GB performance for file downloads, etc. and (b) is there something that we have to do in order to accomplish this if our readings are correct?
    Background:
    What we've done so far is as follows:
    Validate that the connection is indeed GB, full duplex.
    root@conn1 # dladm show-dev
    ipge0 link: unknown speed: 1000 Mbps duplex: full
    ipge1 link: unknown speed: 0 Mbps duplex: unknown
    ipge2 link: unknown speed: 0 Mbps duplex: unknown
    ipge3 link: unknown speed: 0 Mbps duplex: unknown
    root@conn1 #
    We've run a crossover cable between 2 of them to eliminate any "switch" issues from the equation.
    We then ran iperf, as a server on one of them, as a client on the other.
    Server Output:
    root@conn2 # iperf -s
    Server listening on TCP port 5001
    TCP window size: 48.0 KByte (default)
    [  4] local 192.168.1.215 port 5001 connected with 192.168.1.71 port 32795
    [ ID] Interval Transfer Bandwidth
    [  4] 0.0-10.0 sec 598 MBytes 502 Mbits/sec
    Client output:
    root@conn1 # iperf -c 192.168.1.215 -i 1
    Client connecting to 192.168.1.215, TCP port 5001
    TCP window size: 48.0 KByte (default)
    [  3] local 192.168.1.71 port 32795 connected with 192.168.1.215 port 5001
    [ ID] Interval Transfer Bandwidth
    [  3] 0.0- 1.0 sec 57.8 MBytes 485 Mbits/sec
    [  3] 1.0- 2.0 sec 59.9 MBytes 502 Mbits/sec
    [  3] 2.0- 3.0 sec 60.1 MBytes 504 Mbits/sec
    [  3] 3.0- 4.0 sec 59.7 MBytes 501 Mbits/sec
    [  3] 4.0- 5.0 sec 59.9 MBytes 502 Mbits/sec
    [  3] 5.0- 6.0 sec 60.7 MBytes 509 Mbits/sec
    [  3] 6.0- 7.0 sec 60.2 MBytes 505 Mbits/sec
    [  3] 7.0- 8.0 sec 59.9 MBytes 503 Mbits/sec
    [  3] 8.0- 9.0 sec 59.9 MBytes 503 Mbits/sec
    [  3] 9.0-10.0 sec 59.5 MBytes 499 Mbits/sec
    [  3] 0.0-10.0 sec 598 MBytes 501 Mbits/sec
    root@conn1 #
    Thoughts? Help!?
    ...jeff

    Hi,
    We are having issues with our T2000s as well. Same setup. t2000s back to back with a crossover connection. Best we are getting with a single iperf test is..
    # iperf -c 192.168.1.2 -i 1
    Client connecting to 192.168.1.2, TCP port 5001
    TCP window size: 2.00 MByte (default)
    [  3] local 192.168.1.1 port 52643 connected with 192.168.1.2 port 5001
    [ ID] Interval Transfer Bandwidth
    [  3] 0.0- 1.0 sec 27.7 MBytes 232 Mbits/sec
    [  3] 1.0- 2.0 sec 28.8 MBytes 242 Mbits/sec
    [  3] 2.0- 3.0 sec 30.3 MBytes 254 Mbits/sec
    [  3] 3.0- 4.0 sec 26.5 MBytes 222 Mbits/sec
    [  3] 4.0- 5.0 sec 24.3 MBytes 204 Mbits/sec
    [  3] 5.0- 6.0 sec 28.9 MBytes 242 Mbits/sec
    [  3] 6.0- 7.0 sec 28.7 MBytes 241 Mbits/sec
    [  3] 7.0- 8.0 sec 30.6 MBytes 257 Mbits/sec
    [  3] 8.0- 9.0 sec 30.8 MBytes 259 Mbits/sec
    [  3] 9.0-10.0 sec 26.8 MBytes 225 Mbits/sec
    [  3] 0.0-10.0 sec 283 MBytes 237 Mbits/sec
    [  4] local 192.168.1.2 port 5001 connected with 192.168.1.1 port 52643
    [ ID] Interval Transfer Bandwidth
    [  4] 0.0-10.0 sec 283 MBytes 238 Mbits/sec
    Did you ever get an answer to your questions below? Would you be willing to share what you found?
    What settings do you have to get the 500 Mbits/sec? Our servers are fairly well loaded so we think it's just fighting for CPU with everything else. If you run many streams your performance improves a great deal. We've gotten ~800Mbits/sec running 5 or more iperf runs at the same time.
    If you run "intrstat" while you do the single or multiple iperf runs, what does your CPU peg out at? We see about 11% CPU usage from a single iperf run and 35% across 2 CPUs when we run many iperf instances at the same time.
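    Running multiple parallel streams, as described above, can be sketched like this (a hedged example; the server address, stream count, and duration are placeholders):

```shell
# Launch five parallel iperf client streams against the server
# and wait for all of them to finish, to see whether aggregate
# throughput scales past a single stream's ceiling.
HOST=192.168.1.215
for i in 1 2 3 4 5; do
    iperf -c "$HOST" -t 10 &
done
wait
# Newer iperf builds can do the same in one command: iperf -c $HOST -P 5
```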
    Thanks,
    -Matt

  • Incredibly poor iPod sync performance with music library on NAS

    I have recently purchased a 300 GB Buffalo LinkStation NAS with the express intention of storing my 23k MP3s on it and then using iTunes to listen to them from either my main PC or the Media Centre.
    This has all worked out just fine.
    What has not worked is syncing my iPod(s), which is now unbelievably slow. When the MP3s were on the local hard drive, syncing the iPod would entail i) 30 seconds or so of "Updating iPod" with no sign of syncing activity, then ii) a constantly updated "x of y files" count as they synced, and finally iii) a few seconds of "Updating iPod" before everything is done.
    Now that the MP3s are on the NAS, step i) above takes ... 70 minutes. That does not include copying time ... just "evaluating" what is to be synced. I obviously expected it to be slower than before, but this is unusable.
    I have done some investigation and in these 70 minutes there is pretty constant 20-40 KB/s traffic going to and from the NAS. In an attempt to find out what this consisted of, I ran filemon from sysinternals (http://www.sysinternals.com/) only to find out that every single file in the iTunes library is accessed during this phase!
    Over the 70 minutes the traffic from the NAS is around 180MB and to it is about 120MB, however the problem seems to be with the number of files that need accessing rather than the absolute size.
    Some other background that may be useful:
    * the MP3s themselves are on the NAS but the library files (.itl and .xml) are on the local hard drive (I moved them back there to rule it out as the problem)
    * routine copy or write operations to the NAS are carried out at 2+ MB/s (so it is not a bandwidth limitation)
    * I have tried syncing with my 40GB 3G, 4GB Mini and 4GB Nano ... same problem with all 3
    Has anyone else come up against this? I have searched some of the obvious NAS posts on this forum and no-one seems to have brought this up. This is surprising as I do not really see how it can be a problem specific to this model of NAS.
    Any help greatly received.
      Windows XP  

    Were you ever able to solve this problem? If so,
    how? If not, did you see the same slowness copying
    the files in Windows Explorer from the NAS to the
    iPod?
    No I wasn't. To be honest I have essentially given up as even if I can fix it, disk space is cheaper than life is long!
    Using the NAS in general is no problem. In fact, once iTunes got beyond the initial working out of what needed to be removed from and copied to the iPod, having all my MP3s on the NAS and copying direct to the iPod was no issue.
    Following some sage (and frankly scary!) advice at ilounge (http://forums.ilounge.com/showthread.php?t=163611), I've decided to keep two copies of my MP3s. The fact that my fiendish NAS plan has failed is hurting more than the not-all-that-traumatic reality of duplicating 140GB of files if truth be told!

  • I am using a new laptop with bridge and when i try to edit it tells me that i need to first launch a qualifying product?

    When I try to edit with a new version of Bridge on a new laptop, I get an error message that I need to first launch a qualifying product. Does anyone know what this means?

    close bridge>open a qualifying product (like photoshop)>open bridge from photoshop (file>browse in bridge).
    retry opening bridge.

  • Non jdriver poor performance with oracle cluster

    Hi,
    we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    Our setup is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
    The problem is that with the new Oracle drivers, our actions in the webapp take twice as long as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7 database .. and it worked again with all thick or thin drivers.
    So .. new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does anybody see a connection?
    I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really tried every JDBC possibility! In fact .. we need batch input. Advice is very appreciated =].
    Thanx for help!!

    Thanks for the quick replies. I forgot to mention .. we also tried 10g v10.1.0.3 from Instant Client yesterday.
    I have to agree with Joe. It was really fast against the single-machine database .. but we had the same poor performance with the cluster DB. It is frustrating, especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
    Ok .. we got this scenario, with our appPage CustomerOverview (intensiv db-loading) (sorry.. no real profiling, time is taken with pc watch) (Oracle is 8.1.7 OPS patch level1) ...
    WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
    WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
    WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
    WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
    WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
    Customers rough us up, because they cannot mass order via batch input. Any suggestions how to solve this issue is very appreciated.
    TIA
    Markus Schaeffer wrote:
    > Hi,
    > we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
    > Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
    > Problem is .. with the new Oracle drivers our actions on the webapp take twice as long
    > as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7
    > database .. and it worked again with all thick or thin drivers.
    > So .. new Oracle drivers with an Oracle cluster result in bad performance, but with the
    > Jdriver it works perfectly. Does anybody see a connection?
    Odd. The Jdriver is OCI-based, so it's something else. I would try the latest 10g driver if it will work with your DBMS version. It's much faster than any 9.X thin driver.
    Joe
    > I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really
    > tried every JDBC possibility!
    > Thanx for help!!

  • Poor Performance with Fairpoint DSL

    I started using Verizon DSL for my internet connection and had no problems. When Fairpoint Communications purchased Verizon's operations (this is in Vermont), they took over the DSL (about May 2009). Since then, I have had very poor performance in all applications as soon as I start a browser. The performance problems occur regardless of the browser - I've tried Firefox (3.5.4), Safari (4.0.3), and Opera (10.0). I've been around and around with Fairpoint for 6 months with no resolution. I have not changed any software or hardware on my Mac during that time, except for updating the browsers and applying Apple updates to the OS, iTunes, etc. The performance problems continued right through these updates. I've run tests to check my internet speed and get 2.76Mbps (download) and 0.58Mbps (upload), which are within the specified limits for the DSL service. My Mac is a 2GHz PowerPC G5 running OS X 10.4.11 with 512MB DDR SDRAM. I use a Westell Model 6100 DSL modem provided by Verizon.
    Some of the specific problems I see are:
    1. very long waits of more than a minute after a click on an item in the menu bar
    2. very long waits of more than two minutes after a click on an item on a browser page
    3. frequent pinwheels in response to a click on a menu item/browser page item
    4. frequent pinwheels if I just move the mouse without a click
    5. frequent messages for stopped/unresponsive scripts
    6. videos (like YouTube) stop frequently for no reason; after several minutes, I'll get a little audio but no new video; eventually after several more minutes it will get going again (both video and audio)
    7. response in non-browser applications is also very slow
    8. sometimes will get no response at all to a mouse click
    9. trying to run more than one browser at a time will bring the Mac to its knees
    10. browser pages frequently take several minutes to load
    These are just some of the problems I have.
    These problems all go away and everything runs fine as soon as I stop the browser. If I start the browser, they immediately surface again. I've tried clearing the cache, etc., with no improvement.
    What I would like to do is find a way to determine if the problem is in my Mac or with the Fairpoint service. Since I had no problems with Verizon and have made no changes to my Mac, I really suspect the problem lies with Fairpoint. Can anyone help me out? Thanks.

    1) Another thing that you could try is deleting the preference files for networking. Mac OS will regenerate these files. You would then need to reconfigure your network settings.
    The list of files comes from Mac OS X 10.4.
    http://discussions.apple.com/message.jspa?messageID=8185915#8185915
    http://discussions.apple.com/message.jspa?messageID=10718694#10718694
    2) I think it is time to do a clean install of your system.
    3) It's either the software or an intermittent hardware problem.
    If money isn't an issue, I suggest an external harddrive for re-installing Mac OS.
    You need an external Firewire drive to boot a PowerPC Mac computer.
    I recommend you do a google search on any external harddrive you are looking at.
    I bought a low cost external drive enclosure. When I started having trouble with it, I did a google search and found a lot of complaints about the drive enclosure. I ended up buying a new drive enclosure. On my second go around, I decided to buy a drive enclosure with a good history of working with Macs. The chip set seems to be the key ingredient. The Oxford line of chips seems to be good. I got the Oxford 911.
    The latest hard drive enclosures support the newer serial ATA drives; the drive enclosure that I list supports only the older parallel ATA.
    Has everything interface:
    FireWire 800/400 + USB2, + eSATA 'Quad Interface'
    save a little money interface:
    FireWire 400 + USB 2.0
    This web page lists both external harddrive types. You may need to scroll to the right to see both.
    http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATAFW800_FW400USB
    Here is an external hd enclosure.
    http://eshop.macsales.com/item/Other%20World%20Computing/MEFW91UAL1K/
    Here is what one contributor recommended:
    http://discussions.apple.com/message.jspa?messageID=10452917#10452917
    Folks in these Mac forums recommend LaCie, OWC or G-Tech.
    Here is a list of recommended drives:
    http://discussions.apple.com/thread.jspa?messageID=5564509#5564509
    FireWire compared to USB: you will find that FireWire 400 is faster than USB 2.0 when used for an external hard drive connection.
    http://en.wikipedia.org/wiki/Universal_Serial_Bus#USB_compared_to_FireWire
    http://www23.tomshardware.com/storageexternal.html

  • Poor performance with SVM mirrored stripes

    Problem: poor write performance with configuration included below. Before attaching striped subdevices to mirror, they write/read very fast (>20,000 kw/s according to iostat), but after mirror attachment and sync performance is <2,000 kw/s. I've got standard SVM-mirrored root disks that perform at > 20,000 kw/s, so something seems odd. Any help would be greatly appreciated.
    Config follows:
    Running 5.9 Generic_117171-17 on sun4u sparc SUNW,Sun-Fire-V890. Configuration is 4 72GB 10K disks with the following layout:
    d0: Mirror
    Submirror 0: d3
    State: Okay
    Submirror 1: d2
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 286607040 blocks (136 GB)
    d3: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t4d0s0 0 No Okay Yes
    c1t5d0s0 10176 No Okay Yes
    d2: Submirror of d0
    State: Okay
    Size: 286607040 blocks (136 GB)
    Stripe 0: (interlace: 4096 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c1t1d0s0 0 No Okay Yes
    c1t2d0s0 10176 No Okay Yes

    I have the same issue and it is killing our iCal Server performance.
    My question would be, can you edit an XSAN Volume without destroying the data?
    There is one setting that I think might ease the pressure... in the Cache Settings (Volumes -> <Right-Click Volume> -> Edit Volume Settings)
    Would increasing the amount of Cache help? Would it destroy the data?!?!?

  • InDesign CC is performing poorly, slowly when working with text.

    InDesign CC is performing poorly and slowly when working with text. I tried updating, and it seems like everything installed except for Extension Manager 6.0.7, which failed to install. Any fix for this?

    Thanks, I think the problem is solved. Previously, the CC Extension Manager failed to install when I did an update via the InDesign Help menu. InDesign CC updated successfully, however, InDesign was still acting wonky. Since downloading a new full CC Extension Manager a little while ago, InDesign seems to be working better. Is there a correlation between the two apps for future reference?

  • Are there issues with poor performance with Mavericks OS?

    Are there issues with slow and reduced performance with Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late-2013 MBP, and these analyses generally last hours or days. I observed that Mavericks slows them down considerably for some reason after a few hours of computation, making it impossible for me to work on this computer...

  • Adobe Illustrator CC crashes at first launch with plugin

    When I try to launch AI CC without my plugin, it doesn't crash. Then I add the .aip file in ../Program Files/Adobe/Adobe Illustrator CC 2014/Plugins/%plugin_name%/, open Illustrator, try to close it, and it crashes. But when I launch it with the plugin a second time, it works fine.
    After debugging I found that it successfully passes through ShutdownPlugin() and main(), and then crashes in ai::UnicodeString::~UnicodeString(void), at the line "result = sAIUnicodeString->Destroy(*this);"
    I've checked my code for "empty" ai::UnicodeStrings and tried deleting every file generated at launch (such as saved passwords, etc.), but haven't succeeded.

    Sounds like you have an ai::UnicodeString as a global variable (or a static variable within a function). The exit cleanup code is deleting the object and calling ai::UnicodeString::~UnicodeString, but by this point all the suites have been deallocated, so sAIUnicodeString is NULL.

  • Poor Wi-fi Performance

    Hi all,
    I have just installed Lion on my early-2011 MacBook Pro, and I noticed my Wi-Fi seems slower than usual. I looked at System Information and noticed the Wi-Fi Card Type shows "Third-Party Wireless Card". Shouldn't it show "AirPort Extreme"? Could this be the cause of my poor Wi-Fi performance?
    Thanks

    SOLUTION FOUND to MacBook Pro poor Wi-Fi wireless signal.
    First, let's talk about the actual problem: too much signal noise. There is a hidden app in OS X called "Wi-Fi Diagnostics". This will tell you everything you need to know about why your Wi-Fi is not working!
    http://support.apple.com/kb/HT5606
    HINT: It is included in 10.7, just look for it.
    The solution!
    Turn off your wi-fi and run your internet through your home's electrical wires. Power-line adapters are devices that turn a home's electrical wiring into network cables for a computer network.
    http://reviews.cnet.com/2733-3243_7-568-8.html
    I purchased TP-LINK TL-PA511 and for the first time I'm downloading with ALL my internet speed (30mb down / 5mb up) tested via www.speedtest.net. I'M SOOOOO HAPPY!!!!
    http://www.amazon.com/TP-LINK-TL-PA511-Powerline-Starter-Kit/dp/B0081FLFQE/ref=sr_1_1?ie=UTF8&qid=1385395374&sr=8-1&keywords=TP-LINK+TL-PA511

  • Sequencing question: app requires IE settings change before first launch

    hi.
    I need to sequence an app that will open in IE.
    I did a quick try and got some errors... I found later that it requires IE no higher than 9; I had 11.
    I will try again with the right browser version and, if necessary, a few more attempts. If I hit problems, I can post the errors.
    But as a preliminary question, I'd like advice on sequencing the following installation scenario:
    1. Run the MSI (a 55MB package, so it is not just a shortcut).
    2. Before first launch, the installation instructions require opening IE and setting "use username and password" to YES for the Internet and Intranet zones.
    The question is whether the IE settings would be captured during the installation and monitoring process if I change them as required.
    Or is it worth setting IE before starting the sequencer, or could I set the registry for IE by editing the App-V package after saving?
    Also, a domain GPO could probably be used. But I know that I need to launch the app during monitoring.
    What would be the appropriate thing to try?
    Thanks.
    "When you hit a wrong note it's the next note that makes it good or bad". Miles Davis

    I suggest that when you re-sequence, you have the correct version of IE on your sequencing VM. Also, ensure IE has been launched before you start sequencing, so that no first-run user settings are captured in your virtual package.
    Also, please take a look at this excellent article by Dan Gough on overriding Group Policy settings, and consider its impact on what you may be trying to do:
    http://packageology.com/2014/02/overriding-group-policy-settings-app-v/
    PLEASE MARK ANY ANSWERS TO HELP OTHERS Blog:
    rorymon.com Twitter: @Rorymon

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben
