JavaScript Performance Slow – Not Utilizing Full CPU?

Hello all. I have a script performance question I was hoping someone could help with.
I’ve developed a folder-level JavaScript for Acrobat that does a great deal of linking and annotation for a certain kind of PDF book that we produce where I work. The script works; however, the performance is much slower than I would expect. On some of the larger books I run the script on, it takes hours to complete. I suspected my own code for a while and optimized it as much as possible. I noticed later, though, that while the script is running, Acrobat only uses a maximum of about 12 percent of the CPU (12 percent of one core). I would think this number would be much higher. Has anyone else experienced this? Is there a way around it?
Craig

JavaScript is an interpreted language, so each line of code is reprocessed each time it is run. When a PDF form is opened, all of its JavaScript is processed from plain text through a syntax check, and common actions are converted into tokens for faster execution. Functions are also converted into tokens for reuse.
So gains in processing time are achieved by putting code in the best place. For example, use the On Blur action for processing that only has to occur when a field value changes, such as testing a value and issuing a warning, rather than running a full calculation for that same action.
Whenever any field used within a calculation is modified, all calculations are reprocessed. Another way to improve performance is to create a function: a predefined process that may take some input values, performs a specific task, and may return a result. So if I have a calculation like computing the elapsed time between two time values on a time sheet, rather than coding the conversion of the time strings to numbers, performing the calculation, and formatting the result for each day, I could define a function that takes the two time values, computes the difference, and returns the formatted result. Since the function has already been syntax-checked and tokenized, it will run faster on each call for each day.
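As a sketch of that idea, a document-level helper like the following could be defined once and reused for every day's field. The "HH:MM" time format and the function names here are my own assumptions, not code from any actual form:

```javascript
// Hypothetical time-sheet helpers (names and "HH:MM" format assumed).
// Defined once at folder/document level, so Acrobat syntax-checks and
// tokenizes them once, and each per-day calculation just calls them.
function toMinutes(timeStr) {
  var parts = timeStr.split(":");            // "HH:MM" -> ["HH", "MM"]
  return Number(parts[0]) * 60 + Number(parts[1]);
}

function elapsedTime(startStr, endStr) {
  var diff = toMinutes(endStr) - toMinutes(startStr);
  var h = Math.floor(diff / 60);
  var m = diff % 60;
  return h + ":" + (m < 10 ? "0" + m : m);   // e.g. "8:30"
}
```

Each day's calculation script then reduces to a single call such as `elapsedTime(start, end)` on that row's two time values.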
It is also possible to combine the calculations for many fields into one larger calculation in a single form field, and it is even possible to call a user-defined function from within another user-defined function, so one eliminates the repeated initialization of the code for each field.
If one is adding fields with JavaScript and adding calculation actions, it is best to turn off the doc's calculate property while adding the fields, since each newly added field will otherwise trigger a recalculation as it is added. Just remember to turn it back on at the end. This should have no effect when adding annotations or searching for words.
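A minimal sketch of that pattern (the helper name, field names, and rectangles are illustrative; `doc.calculate` and `doc.addField` are the Acrobat Doc API members mentioned above):

```javascript
// Suspend automatic recalculation while adding fields, then restore it.
// `doc` is assumed to be an Acrobat Doc object (e.g. `this` in a
// document script); the helper name and field geometry are made up.
function addFieldsWithCalcSuspended(doc, fieldNames) {
  var previous = doc.calculate;   // remember the current setting
  doc.calculate = false;          // new fields no longer trigger recalcs
  try {
    for (var i = 0; i < fieldNames.length; i++) {
      doc.addField(fieldNames[i], "text", 0,
                   [20, 40 + 30 * i, 220, 20 + 30 * i]);
    }
  } finally {
    doc.calculate = previous;     // always turn it back on at the end
  }
}
```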
If you have to search through the PDF for each word, then there is a lot of overhead. I have noticed with such scripts that the first use takes significantly longer than the next call, as long as Acrobat is not closed. This may be due to the computer's use of virtual memory: the code is in RAM from the first call, rather than in the virtual swap file, for the second call of the process.
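If the word search is the bottleneck, one mitigation is to pull each page's words out once and reuse them across searches. A sketch assuming Acrobat's `getPageNumWords`/`getPageNthWord` Doc methods (the cache object itself is my own addition):

```javascript
// Enumerate the words on a page once and cache the result, so repeated
// searches do not re-invoke the expensive extraction calls.
// `doc` is assumed to be an Acrobat Doc object; `cache` is a plain
// object keyed by page number, owned by the caller.
function getPageWordsCached(doc, cache, pageNum) {
  if (!cache[pageNum]) {
    var words = [];
    var count = doc.getPageNumWords(pageNum);
    for (var i = 0; i < count; i++) {
      words.push(doc.getPageNthWord(pageNum, i));
    }
    cache[pageNum] = words;
  }
  return cache[pageNum];
}
```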
You might also get faster processing by using the Action/Batch processing of Acrobat Professional.

Similar Messages

  • Performance Issue: Central Instance not utilizing full network resources.

    Hi All,
    I have experienced a problem regarding network usage by central instance.
    At random times, even at peak load, the SAP application server utilizes no more than 0.30%-0.35% of network resources, resulting in very poor performance.
    As a primary observation, I have checked at the operating system level that no other service/application is consuming the network at that time.
    Please help: how can I monitor/check and enhance network resource utilization to the fullest for better performance?
    Thanks in advance.
    Regards
    Surjit

    SAP application server utilizing not more than 0.30%-0.35% of network resources, resulting in very poor performance.
    To be honest with you, I can't see the network being the performance issue unless you have serious trouble with network latency, or perhaps your system is located in a remote location.
    You can check the average GUI time (in milliseconds) in ST03N; then you can draw conclusions about whether the information travelling to the frontend is being compromised by a network issue.
    Also, communication between the instances should usually be based on a private, fast VLAN to allow free traffic between them.
    Regards
    Juan

  • X99S XPOWER AC - Not utilizing full 40 lanes on 5960x

    Hi.
    I have a tri SLI setup that is running at 16x/8x/8x bandwidth with my 5960X. The 5960X has 40 lanes so I should be running 16x/16x/8x?
    I have the following
    E1 - GTX Titan
    E2 - GTX Titan
    E3 - empty
    E4 - GTX Titan
    E5 - empty
    E6 - ZxR Sound Card
    M2_1 - XP941 SSD
    Looking in the manual at the PCIe bandwidth table, both E1 and E4 are 16x capable. But, inside of the manual, I see no combination in the PCIe table that allows for E1, E2, E4 to run 16x/8x/16x. The only one there is E1 16x, E4 16x, and E6 at 16x meaning I would have to switch E2 GTX Titan with E6 ZxR sound card.
    Can you please make a bios update that will fix this? No watercooler is going to space their cards out unevenly. Maybe it works better for air coolers, but it doesn't look nice for watercoolers (not to mention I do not have a bridge block that supports such an odd arrangement of GPUs).
    I am looking for the full 40 lanes because two cards will be run in SLI for display, whereas the third card will run as a dedicated CUDA compute card. I'm looking to have the display cards run at 16x, and the CUDA compute card at 8x.
    Thanks.

    Quote from: blade52x on 19-October-14, 23:33:55
    Can you please make a bios update that will fix this?
    http://www.msi.com/product/mb/X99S-XPOWER-AC.html#hero-specification
    3-way mode: x16/ x0/ x0/ x16/ x8
    For the CPU that supports 40 PCIe lanes
    There is no BIOS update able to change the hardware design.

  • Intel Xserve not using full CPU

    When submitting a batch to qmaster, I notice my other g5 machines utilize almost all the processor, and on the intel xserve, all 4 CPUs usually max out at 40% tops. Is this a limitation, to be expected, etc?

    Things do look clean - not familiar with all of the Processes, but some of the common "culprits" do not appear. Nice job in keeping these to a minimum. Most people have 60-80 going. Maybe someone else can look down your list and find something that can be eliminated easily for NLE work. I just don't see any potential problems, but remember I do not recognize some of those.
    I like the Virtual Memory setting, with only one question: you mention a partition on the RAID 0. Why a partition? The reason that I ask is that Windows first sees your 2 HDD's as one (the RAID), then it sees your partitions as several. If accessing what it now thinks are 2, or more, physical HDD's, it's telling the controller to get the heads in a couple of places at the same time. You probably have a good reason for doing it this way, but it does throw up a yellow flag to me. Having the RAID managed by the MoBo chip is not the fastest/best, but is usually more than adequate. It does save on a US$400 RAID card with multiple channels/chips.
    You have provided me, and others, with food for thought. I'll keep looking over your data, but just see nothing, except for the Page File on a partition, that might affect performance.
    One test that I'd run would be a slight re-config on your Scratch Disks. I'd do a Save_As, or a Save_As_a_Copy for your Project. Leave your original totally alone. Do a test with the Scratch Disks set to "Same as Project." This will put them on the RAID 0. Then, perform the same operations and monitor the time. Unfortunately, you're investing hours into this, but you might get enough of a performance increase to justify it for future Projects. One way around the time, would be to do a "test" Project with shorter versions of similar Assets. Do all processing on that Project, as configured, and time the results. Do the Save_As, make the changes to the Scratch Disks, and process again. Any difference?
    Good luck, and thanks for gathering the data,
    Hunt

  • "Activity Monitor" utility does not update the "CPU Time" column in real time

    If I use Activity Monitor to display "All Processes", why doesn't it update the "CPU Time" column in real time? It updates the "% CPU" and others, but not the total "CPU Time". I have all columns viewed if that matters. However if you double click on a process in the Activity Monitor window, the details about the process are displayed in another window (number of threads, ports, CPU Time, Context Switches, Faults, etc), and as long as that detail window exists, the "CPU Time" column is updated in real time in the main Activity Monitor window. Is this a bug or a feature?? Does Leopard do this also? Have lots of free memory so that is not an issue. Thanks...
    -Bob

    I noticed the same behavior reported by Bob: regardless of the process filter or the update frequency selected, the "CPU Time" column is only updated when the details dialog is open. I noticed it just today (which triggered the search here); I wonder if this "feature" has always been present or maybe Activity Monitor is getting lazy?
    Regards,
    Mauro

  • Windows 7 Gadgets slow to respond, 50% CPU utilisation with gadgets that "move"

    I have a fresh installation of Windows 7 RC and everything was fine. I have no idea what may have caused this, but on day 2 of using Windows 7 RC, any desktop gadgets that move (like the CPU meter or the clock with its second hand enabled) cause the sidebar.exe process to go to 50% CPU utilisation (which is 100% of one core on my dual core CPU).
    These gadgets then just become unresponsive and I have to close them. I can only keep the Weather gadget on because it doesn't have any animations. I have all Windows updates and the latest NVIDIA driver from Windows Update and even their own website doesn't help. There is something wrong with my gadgets. I have never installed any extra gadgets.
    What could it be?

    Same issue, but after Vista x64 Home Prem upgrade to Win7 Home Prem x64.  ATI Radeon HD 4850 graphics card, Quad Core & can't even change the settings of the Clock if I set the seconds hand.  The clock takes up a continuous full core (25% of overall CPU) while the seconds hand is running; if the seconds hand is off then CPU utilization is 0% between minute updates.  Other gadgets are similar, but many such as Calendar, Weather, and some non-MS gadgets work perfectly fine.  Must be some kind of javascript issue.  I have Avira Antivir before & after upgrade to Win7, so AVG isn't the issue for me - I even tried with Antivir and Comodo Firewall turned off. UAC on. 
    This machine is FAST & it can't handle a little gadget.  (7.2/7.3 Windows Experience subscores on all but hard disk, which is 5.9)  Didn't have this issue in Vista x64 Home Prem.
    edit:
    Checked the Win7 Resource Monitor CPU tab:
    sidebar.exe - While not under heavy CPU load, "Analyze Wait Chain..." from context menu continually shows "One or more threads of sidebar.exe are waiting to finish network I/O", and indicates thread 4140 which is WinInet.dll (Internet Extensions for Win32 application extension.)  Under heavy CPU load, several other threads starting from sidebar.exe were indicated from the same message from "Analyze Wait Chain..." as well.
    All other non-sidebar processes indicate "running normally" from "Analyze Wait Chain..."
    Another clue:  If I login using the "Administrator" account, there is *** no performance problem ***!  The sidebar.exe shows as "running normally" from "Analyze Wait Chain..." 
    So I also tried:
    - w/UAC turned off on the normal user (administrator rights) account (no fix)
    - tried creating a *new* normal user (administrator rights) account; still sloooowwww (also try this one w/o UAC, still bad)
    -Problem Status:  normal User Accounts have the sidebar performance problem, Administrator account runs fine.
    Edit: 12/31/09
    SOLVED!!!!
    After digging into the sidebar.exe output from Sysinternals Process Monitor, I noticed a peculiar difference between the output for the properly functioning Administrator account & my User accounts.  The admin acct showed the monitor profile for a good number of operations, while the User accts showed the wsRGB.cdmp value for the same operations, which was peculiar because I had my User accounts configured to use the system defaults, the same way I had the admin acct configured.
    Going into the Windows Color Management, Advanced tab, I found that if I chose either of the "virtual device model profile" choices for the "Device Profile" setting, the performance hit was instant.  If I used any of the "normal" icm profiles, performance was good.  So I changed my system default Device Profile from "sRGB virtual device model profile" to the std sRGB.icm profile & everything is working great!
    Note: In addition to the Gadgets, this also impacted the Windows Media Center Internet TV listings performance, which was also back to good performance after this WCM setting change.  I'd expect there are numerous other apps that could be affected by the "virtual device model profile" handling.  Googling "virtual device model profile" seems to bear that out, as I see others who ran into app problems when using that in the WCM applet.

  • PSE performance slow

    PSE 8, Windows XP, 2Gb memory
    Symptoms
    Takes quite some time for thumbnails to generate or for larger
    If I select several thumbnails and type a tag name into the tags search box, it can take up to 15 seconds for the tag results to be displayed
    There could be several factors at play
    Older PC
    I have an older Dell Inspiron running XP. Having said that it is still an Intel 1.8GHz with Hyperthreading and 2Gb memory. When I notice the slow performance I do not see excessive CPU, nor memory consumption. The FSB I think is 667Mhz
    Images are on external NAS
    I have a ReadyNas Duo (v1) that is ethernet cabled to my router which is ethernet cabled to my PC. Everything is at 100Mb. I have 2 1Tb drives in X-Raid. When generating thumbnails I do see network activity, but not when tagging. I've never seen the network monitor go over about 50%. The catalog is on local disk.
    Older modem/router
    I have a Linksys WAG200 modem/router. I do see a lot of activity over the ports when I'm working.
    Solutions?
    I'd like some considered suggestions. Sure I could replace everything but then I wouldn't have learned anything.
    I'm wondering whether bringing all the photos off the NAS onto local disk would be an advantage. As a side issue, I backed up the catalog recently, and while the folders with the catalog didn't seem that big, the process was something like a 30Gb backup, which seemed ridiculous and took about half an hour to write (to the NAS). I'm assuming it was the thumbnails, but I don't need to back those up: if I have a catastrophic failure I'm quite happy to let it chug away during recovery and regenerate the thumbnails. It's not as if I have to do it every week... Anyway, if I brought the images into local storage I'd then have to schedule backups to the NAS regularly, but that's OK...
    I'm wondering whether the old Linksys gateway is a bottleneck and, since I have several other Netgear devices about the place, whether upgrading to a gigabit Netgear gateway would do any good.
    PC. I'll have to upgrade anyway in which case I'd go for i7, gigabit ethernet, video card. I'm not entirely convinced that the PC is the major problem though...
    Any help appreciated...

    Hi,
    Activity is slower when network media is involved. Bringing your media onto local disk would make your Organizer faster.
    The size of the backup folder includes the media file size, catalog file, thumbnail cache, etc., so it would not just be the thumbnails. Backups are also faster to non-network drives.
    I would suggest you try the Organizer with media on non-network drives. That should solve the majority of your issues.
    Regards,
    vaishali

  • System and performance slowing down even after Diskwarrior use?

    I own a G5 PowerMac.
    It seems to be getting slower. I have had it a year and a half. I have 2 GB of RAM and it is a dual processor, so I don't really understand why it should be going more slowly at all. I don't even burden it with heavy-duty programs like Final Cut, because it is not right in the head.
    what do you recommend?
    does this happen often?
    any ideas?
    mediusCentral

    Macs get slower the more the boot drive is filled past 50%, and really slow when it's nearly full.
    Keep your boot drive less than 80% filled, and ideally less than 50%.
    Learn to clone your boot drive to an external drive and back again; this will optimize the drive and increase performance, as Mac OS X is constantly accessing the boot drive for one thing or another.
    You'll need cloning software that will copy the entire drive; just copying folders won't work, due to invisible and locked system files.

  • IMac OS X performing slow/hangs

    My iMac system is performing slower than expected and often hangs. I often wait up to a minute for the next action to happen, while the little cursor sits and spins. What to do? Frustrating.
    System specs that seem pertinent:
    4G memory (2 slots taken; 2 slots free)
    storage report says 176G free out of 499G
    OS X, 10.8.2
    Thanks, Apple community.

    Most often this indicates you are running too many applications concurrently for the amount of physical RAM you have installed. It may also mean you have a crashed process:
    Open Activity Monitor in the Utilities folder.  Select All Processes from the Processes dropdown menu.  Click twice on the CPU% column header to display in descending order.  If you find a process using a large amount of CPU time (>= 70%), then select the process and click on the Quit icon in the toolbar.  Click on the Force Quit button to kill the process.  See if that helps.  Be sure to note the name of the runaway process so you can track down the cause of the problem.
    While Activity Monitor is open click on the System Memory tab in the bottom portion of the AM window. Use COMMAND-SHIFT-4, use the crosshairs to select this portion of the AM window, then use the Camera icon of this message composition window to post the image here.

  • "Error: the xajax Javascript file could not be included. Perhaps the URL is incorrect? URL: /includes/javascript/xajax_js/xajax.js". other posters with a xajax error on this board got the response to go look for developer resources. (i'm not a developer)

    I'm on a Mac running 10.6.8. I was using Firefox 6.02 when the problem started; I performed a clean install of 7.01, installed the latest Flash Player, and reinstalled Java (the 10.6.5 update file from Apple's site).
    I seem able to load video at youtube.com, and was able to load web-based IRC chatrooms at ircchat.tv. However, at jamplay.com, a paid member site, all of the lessons are Flash video, and there's a live video feed chat room that is also Flash based, and I am not able to view this content. The video content pops up the error message "Error: the xajax Javascript file could not be included. Perhaps the URL is incorrect?
    URL: /includes/javascript/xajax_js/xajax.js", and clicking on the "launch chatroom" button does exactly nothing.
    I have contacted the jamplay site team, but their recommendations are the steps I have already taken (mentioned above), which did not resolve the issue.
    some links to content on their site visible without being a member:
    http://www.jamplay.com/
    this one has a flash interface that, if things were working, would have a video in the center and members and staff talking about that area of the site when you click on one of the title buttons. Now that I am having this problem, nothing is clickable and there's no indication that there should be a video in the center. The error message regarding xajax does not come up at all.
    http://www.jamplay.com/guitar-lessons/beginners/1/527-16-circle-of-fifths
    same thing. video content is missing, xajax error does not appear.
    http://www.jamplay.com/live
    when I load this, all content except the video loads on the page. This page does not give me the error at all; it just does not load the video.
    I can't seem to recreate the actual error without being logged in as a member of the site.
    The Apple support team gave me a one-time free support ticket to troubleshoot an ethernet issue I was having. Unfortunately, the databases I removed were not deemed, either by Apple or me, to be noteworthy or necessary to retain (all of them were for software I no longer have or use). I don't know which plist file might have been the one I need now. I did reinstall the Java and Flash items. The only other item that I think it might be is a set of Belkin router software that I uninstalled because I'm not using any Belkin hardware anymore.
    If someone has any kind of idea I would be most appreciative.

    That's a comment in the file. It has no effect at all.

  • ExportAsFDF javascript method is not working with Reader 10.1.1

    Hi,
    I have a PDF document to which I have applied all Extended Reader Rights. Now when I try to export all annotations to FDF with Reader 10.1.1, it fails to export. However, it works fine with Reader 10.0.0. It looks like this is broken in versions later than 10.0.0.
    Please let me know if there is any alternative solution to export all annotations to an FDF file through code.
    Regards,
    Arvind

    I would open a formal support ticket with our developer support folks.

  • JSF 1.2 performance is not adequate

    Sun JSF developers...
    JSF 1.2 performance is not adequate.
    Here is a simple use case to prove:
    Render a table with ~100 columns and ~1000 rows (with facelets)
    (you could increase the numbers to stress test)
    using h:dataTable tag.
    You have to repeat h:column a hundred times in the XHTML (to create a bigger
    UI component tree).
    Have a backing bean with 100 properties and a JSF action which returns
    a list of 1000 of those beans.
    In this case there is no DB interaction, no back-end involved.
    Have your application set up with Ajax4Jsf (so it goes via its filter). You could
    also compare the results with and without it.
    Measure RENDER RESPONSE phase, and overall response time.
    Compare with JSP which renders same table from same bean.
    This is a very simple test case to write, and you will see that JSF in its current state is really slow.
    This test could also be used to measure the performance of new versions to see where they stand.
    It would also be interesting to see how render time grows with the increase in the
    number of columns (number of components in the UI tree). (Is it linear or not?)
    How fast does it grow with the number of rows? (Is it linear growth with increase of data size?)
    I believe this type of test should be standardized before each release.
    Does the JSF development team perform regular profiling of the JSF RI Java code?
    Running profiling for this use case could also reveal interesting things which
    could be improved.
    Regards,
    --MG

    But you have done nothing here to determine if it scales linearly or not. See below.
    > You increase the number of columns (rows) and measure time. When you draw on a chart how time grows with the increase in the number of columns (rows), you will see if it is linear or not.
    I understand how to determine the growth rate; I was merely pointing out that you have only claimed to run one test, and one cannot determine the growth rate from that.
    > It would also be interesting to see how render time grows with the increase in the number of columns (number of components in the UI tree). (Is it linear or not?) How fast does it grow with the number of rows? (Is it linear growth with the increase of data size?)
    >> Unless you are talking about spreadsheets, I doubt these apps are useful.
    > They are very useful.
    I bet they would be more useful as native desktop apps.
    >> But this is beside the point; my point is that the scenario of having a large number of components on a page does not occur often in real applications.
    > Really? You are not serious?
    Yes, I am serious. I can see we have different perspectives here.
    >> A much more common performance scenario is high load.
    > This is a separate story. I'm talking about the latency of loading one page. Please do not confuse latency and bandwidth.
    I am not talking about bandwidth but throughput.
    > BTW, most web sites have a small number of concurrent users, so the most common scenario is actually exactly the opposite: low load but complex pages with lots of components on them.
    Again, I can see we have different perspectives here. My experiences are the opposite.
    >> So your results will be much more useful if you include a load test on a moderately sized page. I'm not dismissing your original test as invalid or useless.
    > My results are more useful than what you are proposing, as they help to pinpoint the cause of latency. You could run my test with a profiler and investigate the root cause of it.
    You do not need to increase the number of components on the page to numbers beyond practice in order to determine this. It is sufficient to run profilers on average pages many times and determine where the most time is spent. In fact, your approach may result in an implementation optimized for pages with a large number of components but not for pages with a lower number of components.
    >> My point is that the user experience is not degraded by the performance but by the poor presentation of information. Many web browsers do not take kindly to such large tables as well.
    > With JSF, user experience is degraded by performance.
    I'd like to see your numbers and the code you used.
    > JSF makes it easier to do a nicer user presentation, but performance suffers to a degree that it is not usable in a number of cases. JSF's inability to generate a page fast on the server side has nothing to do with browsers.
    Correct; but I was talking about pages with large tables in general and why they are a bad idea for web applications. A native application is more appropriate. But this is orthogonal to the purpose of this thread.
    >> Again, you are missing the point. Perhaps JSF performance differs between application servers. A fair and useful test will determine that. I'm not talking about the relative performance vs JSP here.
    > If JSF does things inefficiently in its code, it will show consistently bad results across all app servers. Again, my test is to detect and pinpoint JSF inefficiencies, not app server ones. Don't you think the JSF developers would want to look for those first rather than investigating why it performs badly on some particular app server because of that server's internal reasons?
    I'm approaching this from the perspective of a developer evaluating whether to use JSF and which application server to use. This is very useful from that point of view.
    As far as the JSF developers go, as I noted above, one does not need to scale out the number of components on the page to determine performance bottlenecks in the implementation. An average case is more appropriate.
    Also, if you are going to compare JSF to other technologies, you need to try a variety of platforms for both. Some application servers might do JSP better than others. If you limit yourself to one platform, you run the risk that it skews the comparison one way or another.

  • Fully utilizing all CPU's for a LabVIEW application

    Hi All,
    Have any of you figured out how to harness all of the CPUs in modern machines?
    Background:
    I have an application that does a lot of signal processing, and it was pegging the CPU of the machine it was originally deployed on for many minutes.
    As a quick first step we suggested the customer try the application on a new high-end machine. They did, and the performance improved ...
    BUT...
    When we look at the
    Task Manager >>> Performance tab
    it appears we are not utilizing all of the available CPUs.
    This observation is based on the 8 CPU graphs displayed in the Task Manager.
    The first 4 graphs show very heavy CPU usage, but the remaining four graphs show little or no load.
    I am guessing that this may be due to LV (8.X) using a default of 4 threads for each execution system.
    Since the last time we were on-site, I have looked at
    ...\LabVIEW\vi.lib\utility\sysinfo.llb\threadconfig.vi
    and it appears all I have to do is run that utility one time and save the config as 8 threads for each execution system.
    Now before I send someone back to site, I'd like to find out if someone has traveled this road before me and would like to share their wisdom.
    Thank you,
    Ben
    "Mommy, I want to go FAST!" (Daughter of one of my old girl friends)
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

    I received a couple of questions concerning this post, so a bit more information to clear things up.
    LabVIEW's Logic
    LabVIEW's default thread creation logic is to create max(number of cores, 4) threads. The bug we had is that we inadvertently limited this to four (no humorous comments please).
    What does the utility do?
    The utility writes some settings to your ini file if different than default. When you run the utility, it displays the number of processors you have in a field at the top. It displays the number of threads being used for each priority in a section below. For a machine with 1-4 cores, the default thread count will be four. For a machine with 8 cores, the default thread count "should" be 8 (and, as of LabVIEW 8.5, it is).
    Roy
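    For anyone curious what the utility actually writes: the settings end up as tokens in the LabVIEW ini file. The section and token names below are a hedged illustration from memory, not verified syntax for any particular LabVIEW version; run threadconfig.vi and inspect your ini file to see the real entries it produces:

```ini
[LabVIEW]
; Illustrative only -- token names may differ between LabVIEW versions.
; One entry per execution system / priority, raising the thread count to 8:
ESys.Normal=8
ESys.High=8
```

    Since the ini file is read at startup, LabVIEW has to be restarted after running the utility before the new thread counts take effect.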

  • PSE 10 will print from the organizer and in sharing mode, but not from full edit

    I'm using PSE 10 on a computer with an i7 processor, 12 GB of RAM, and Windows 7, with a color laser printer. The print function works fine while in Organizer and sharing mode, but not from Full Edit. When I try to print from Full Edit, I get an error message with a red "X" on it that says: "Before you can perform printer related tasks such as page setup or printing a document, you need to install a printer." I can't figure it out. The printer is in the "ready" state, and works fine in Organizer mode and in Share. Any thoughts on what I'm missing? Thanks for any help in advance.

    This occurs frequently with HP printers, though others are implicated as well from time to time.
    In Control Panel > Devices and Printers, right-click your active printer, go to Printer Properties, and rename the printer to something short, e.g. "my printer".
    http://kb2.adobe.com/cps/865/cpsid_86566.html

  • Query performance slow

    Hi Experts,
    Please clarify my doubts.
    1. How can we tell that a particular query is performing slowly?
    2. How can we define a cell in BEx?
    3. An InfoCube is an InfoProvider, but an InfoObject is not an InfoProvider. Why?
    Thanks in advance

    Hi,
    1. How can we tell that a particular query is performing slowly?
       When a query takes a long time to run, you can collect statistics to see where that time is being spent: select your cube and set the BI Statistics check box. After that, statistics will be recorded for your query, such as DB time (database time), front-end time (query time), aggregation time, and so on. Based on those figures you can work on performance: aggregates, compression, indexes, etc.
    2. How can we define a cell in BEx?
       Cell definitions are enabled in a BEx query only when the query uses two structures. Use them when you want to define a different formula for an individual cell (a specific row/column intersection).
    3. An InfoCube is an InfoProvider, but an InfoObject is not an InfoProvider. Why?
        An InfoObject can also be an InfoProvider: you can convert an InfoObject into an InfoProvider by using "Convert as data target".
    Thanks and Regards,
    Venkat.
    Edited by: venkatewara reddy on Jul 27, 2011 12:05 PM
