Do AS3 timers make use of multiple processor cores?

Can anyone tell me whether a Timer in AS3 will force the Flash Player to create a new thread internally, and so make use of another core on a multi-core processor?
To Bernd: I am asking because in the ENTER_FRAME handler I can then simply blit the frame that was already prepared in the timer handler function. The idea is to create a timer that fires every 1 ms, run the emulator code in its handler, and only show the generated pixels in the ENTER_FRAME handler. In that way, theoretically, the emulator will in the best case use a whole CPU core. This will most probably not reach the desired performance anyway, but it is still an improvement. Still, as I mentioned in my earlier posts, Adobe should think about speeding up the AVM. Even when using Alchemy it is still generally 4-5 times slower than Java. Man, there must be a way to speed up the AVM, right?
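A minimal sketch of the decoupling I have in mind is below. The class name, buffer size and the runEmulatorSlice() call are placeholders rather than the actual emulator code, and note that Timer callbacks still run on the main thread; they just fire more often than ENTER_FRAME when the frame rate is low:

package
{
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.events.TimerEvent;
    import flash.geom.Point;
    import flash.utils.Timer;

    public class EmulatorHost extends Sprite
    {
        private var _timer:Timer;
        private var _backBuffer:BitmapData;   // written by the timer handler
        private var _screenBuffer:BitmapData; // shown on stage, updated once per frame
        private var _origin:Point = new Point();

        public function EmulatorHost()
        {
            _backBuffer = new BitmapData(320, 224, false, 0);
            _screenBuffer = _backBuffer.clone();
            addChild(new Bitmap(_screenBuffer));

            // Run emulation slices as often as the player allows, independent of the frame rate.
            _timer = new Timer(1, 0);
            _timer.addEventListener(TimerEvent.TIMER, onTimer);
            _timer.start();

            // Only blit the already prepared pixels once per rendered frame.
            addEventListener(Event.ENTER_FRAME, onEnterFrame);
        }

        private function onTimer(e:TimerEvent):void
        {
            runEmulatorSlice(_backBuffer); // placeholder for the Alchemy emulator call
        }

        private function onEnterFrame(e:Event):void
        {
            _screenBuffer.copyPixels(_backBuffer, _backBuffer.rect, _origin);
        }

        private function runEmulatorSlice(target:BitmapData):void
        {
            // placeholder: advance the emulated machine and draw its pixels into target
        }
    }
}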
For those interested in what I am implementing, have a look at:
Sega emulated games in flash 
If the moderators think that this link is in some way an advertisement and violates the rules of this forum, please feel free to remove it. I will not be offended at all.

Hello Boris,
thanks for taking the time to explain why your project needs 60 fps. If I understand you correctly, those 60 fps are necessary to maintain the full audio sample rate. You said your emulator collects sound samples at the frame rate, and the reduced sampling rate of 24/60 results in "choppy sound". Are there any other reasons why 60 fps are necessary? The video seems smooth.
That "choppy sound" was exactly what I was hearing when you sent me the source code of your project. But did you notice that I "solved" (read: "hacked around") the choppy sound problem even at those bad sampling rates? First off, I am not arguing with you about whether you need 60fps, or not. You convinced me that you do need 60fps. I still want to help you solve your problem (it might take a while until you get a FlashPlayer that delivers the performance you need).
But maybe this is a good time to step back for a moment and first share some of the results of the performance improvements you and I made to your project (please correct me if my numbers are incorrect, or if you disagree with my statements):
1) Embedding the resources instead of using the URLLoader.
Your version uses URLLoader to load the game resources. Embedding the resources instead does not increase performance, but I find it more elegant and easier to use. Here is how I did it:
[Embed(source="RESOURCES.BIN", mimeType="application/octet-stream")]
private var EmbeddedBIN:Class;
const rom : ByteArray = new EmbeddedBIN;
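If the C side reads the ROM through Alchemy's virtual file system, the embedded ByteArray can then be handed to it via the generated CLibInit wrapper along these lines; the module name "sega" and the file name are placeholders here, not the project's actual names:

import cmodule.sega.CLibInit; // generated by Alchemy; "sega" is a placeholder module name

var loader : CLibInit = new CLibInit();
loader.supplyFile( "RESOURCES.BIN", rom );  // make the embedded bytes visible to the C side's fopen()
var _swc : Object = loader.init();          // exported C functions become methods of this object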
2) Sharing ByteArrays between C code and AS code.
I noticed that your code copied a lot of bytes from the video and audio memory buffers on the C side into a ByteArray that you needed in order to call AS functions. I suggested a technique for sharing ByteArrays between C code and AS code, which I will explain in a separate post.
The results of this performance optimization were mildly disappointing: the frame rate only notched up by 1-2 fps.
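For the impatient, here is a rough sketch of the idea; the details and the real names will follow in that post, so treat the module namespace "cmodule.sega" and the exported sega_getVideoBufferPtr() call as placeholders:

import flash.utils.ByteArray;

// Grab Alchemy's linear memory ("domain memory") as a ByteArray.
var ns : Namespace = new Namespace( "cmodule.sega" );   // placeholder module name
var alchemyRam : ByteArray = (ns::gstate).ds;

// Given a pointer returned by the C side (e.g. the address of its video buffer),
// AS code can read the bytes in place instead of having C copy them out.
var videoPtr : uint = _swc.sega_getVideoBufferPtr();    // hypothetical exported C function
alchemyRam.position = videoPtr;
var firstPixel : uint = alchemyRam.readUnsignedInt();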
3) Optimized switch/case for op table calls
Your C code used a big function table that maps op codes to functions. I wrote a script that converted that function table into an equivalent, huge switch/case statement. This performance optimization was a winner: you got an improvement of roughly 30%. I believe the frame rate suddenly jumped to 25 fps, which means you gained roughly 6 fps. I talked with Scott (Petersen, the inventor of Alchemy) and he said that function calls in general, and function tables in particular, are expensive. This may be a weakness in the Alchemy glue code or in ActionScript. You can work around it by replacing function calls and function tables with switch/case statements.
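To illustrate the principle in plain AS3 (the real change was made in your C source before compiling with Alchemy; the op codes, registers and handler bodies below are invented for the example):

package
{
    // Hypothetical mini "CPU" used only to contrast the two dispatch styles.
    public class DispatchDemo
    {
        private var registerA:int = 0;
        private var memory:int = 1;
        private var opTable:Vector.<Function>;

        public function DispatchDemo()
        {
            // Dispatch via a function table: every op costs an indirect function call.
            opTable = new <Function>[opNop, opLoad, opAdd];
        }

        private function opNop():void  {}
        private function opLoad():void { registerA = memory; }
        private function opAdd():void  { registerA += memory; }

        public function stepWithTable(op:uint):void
        {
            opTable[op]();
        }

        // Dispatch via switch/case: the op bodies are written inline in the cases,
        // avoiding the per-op call overhead.
        public function stepWithSwitch(op:uint):void
        {
            switch (op)
            {
                case 0:                      break; // nop
                case 1: registerA = memory;  break; // load
                case 2: registerA += memory; break; // add
                default:                     break;
            }
        }
    }
}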
4) Using inline assembler.
I replaced the MemUser class with an inline assembler version, as I proposed in this post:
http://forums.adobe.com/thread/660099?tstart=0
The results were disappointing: there was no noticeable performance gain.
Now, let me return to the choppy sound hack I mentioned earlier. This is where we enter my "not so perfect world"...
In order to play custom sound you usually create a Sound object and add an event listener for SampleDataEvent.SAMPLE_DATA:
_sound = new Sound();
_sound.addEventListener( SampleDataEvent.SAMPLE_DATA, sampleDataHandler );
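One detail the snippet above leaves out: the player only starts requesting samples once playback begins, so somewhere you also need something like:

import flash.media.SoundChannel;

// Calling play() is what starts the SAMPLE_DATA requests; keep the channel to stop playback later.
var _channel : SoundChannel = _sound.play();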
The Flash Player then calls your sampleDataHandler function to retrieve audio samples. The frequency of those requests does not necessarily match the frequency at which onEnterFrame is called. Unfortunately, your architecture only gets "tickled" by onEnterFrame, which is currently called only 25 times per second. That becomes your bottleneck: at 25 fps the emulator produces only about 25 x 735 = 18,375 stereo sample frames per second while playback consumes 44,100, so no matter how often the Flash Player asks for more samples, the amount will always be limited by the frame rate. In this architecture you always end up with the FlashPlayer asking for more samples than you have if the frame rate is too low.
This is bad news. But can't we cheat a little bit and assume that the "sample holes" can be filled by using their sample neighbors on the timeline? In other words, can't we just stretch the samples? Well, this is what I came up with:
private function sampleDataHandler(event:SampleDataEvent):void
{
     if( audioBuffer.length > 0 )
     {
          var L : Number;
          var R : Number;
          // The sound channel is requesting more samples. If it ever runs out, a sound complete event will occur.
          const audioBufferSize : uint = _swc.sega_audioBufferSize();
          /*   minSamples, see http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/events/SampleDataEvent.html
               Provide between 2048 and 8192 samples to the data property of the SampleDataEvent object.
               For best performance, provide as many samples as possible. The fewer samples you provide,
               the more likely it is that clicks and pops will occur during playback. This behavior can
               differ on various platforms and can occur in various situations - for example, when
               resizing the browser. You might write code that works on one platform when you provide
               only 2048 samples, but that same code might not work as well when run on a different platform.
               If you require the lowest latency possible, consider making the amount of data user-selectable.
          */
          const minSamples : uint = 2048;
          /*   Even at the maximum sample rate of 44100 the emulator only produces 735 stereo sample frames per video frame:
               snd.buffer_size = (rate / vdp_rate) = 44100 / 60 = 735.
               So we need to stretch the samples until we have at least 2048 frames:
               stretch = Math.ceil(2048 / 735) = 3.
               snd.buffer_size * stretch = 735 * 3 = 2205.
               Bingo: 2205 > 2048 !
          */
          const stretch : uint = Math.ceil(minSamples / audioBufferSize);
          audioBuffer.position = 0;
          if( stretch == 1 )
          {
               event.data.writeBytes( audioBuffer );
          }
          else
          {
               // Repeat each stereo sample pair "stretch" times to fill the request.
               for( var i : uint = 0; i < audioBufferSize; ++i )
               {
                    L = audioBuffer.readFloat();
                    R = audioBuffer.readFloat();
                    for( var k : uint = 0; k < stretch; ++k )
                    {
                         event.data.writeFloat(L);
                         event.data.writeFloat(R);
                    }
               }
          }
          audioBuffer.position = 0;
     }
}
After using that method the sound was not choppy anymore! Even though I did hear a few crackling bits here and there, the sound quality improved significantly.
Please consider this implementation as a workaround until Adobe delivers a FlashPlayer that is 3 times faster :-)
Best wishes,
- Bernd

Similar Messages

  • How to use Qosmio F50 Quad core HD Processor in other software

    I would like to know how to make use of the quad-core processor in other programs besides the Toshiba Upconvert, Transcoding, Face Navigation and Gesture Control functionality. Does anybody have any clue? Thanks!

    cklow01 is right.
    How can we use this HD processor with other programs, for example a newer program with Blu-ray support for video? The DVD MovieFactory 5 sold with the Qosmio is older, even if it is optimized for this processor.
    Otherwise we are left with an up-to-date notebook running old, very old software!!

  • How to make use of the crypto modules in the new Sparc T2 processors?

    Having just received some frontal indoctrination about the features of the new Niagara (T1/T2) processor platform, I've started to dig around its ncp cryptography support cores a bit, but I've got a few open ends that I hope someone here can tie up for me..
    - Is there any way to make current OpenSSL take advantage of the ncp cores? The FAQs don't mention it anywhere except for a general "Included ability to use crypto hardware options". Does this already include Sun ncp? The OpenSSL derivative that comes with Solaris 10 is antique and I really want to replace it.. but if that means losing hardware support, I'm considering ditching the T2 altogether and going for US4+, doing crypto the "old" way on the main CPU cores.
    - The Niagara crypto blueprint mentions some commands to run speed tests, which I did on a T2000 server I've got around here, and on an old 420 in parallel, and I got some results that puzzle me:
    The T2000:
                     sign       verify      sign/s   verify/s
    dsa  512 bits    0.0001s    0.0001s    13025.0    11979.2
    dsa 1024 bits    0.0001s    0.0001s    12835.7    12426.3
    dsa 2048 bits    0.3030s    0.6135s        3.3        1.6
    The 420:
                     sign       verify      sign/s   verify/s
    dsa  512 bits    0.001171s  0.001427s     853.8      700.7
    dsa 1024 bits    0.003521s  0.004292s     284.0      233.0
    dsa 2048 bits    0.011990s  0.014306s      83.4       69.9
    The T2k looks nice for short keys, but does it really drop from 13k/sec to 1.6/sec for the 2kbit keys, or is that a display bug?
    If not, does that mean that the ncp in the T1 CPU is limited to 1kbit keys, and that if I intend to use longer DSA keys, I had better keep doing it on the normal CPU cores?
    I see the same huge performance drop in RSA when switching from 2048 to 4096 bit key sizes. Ok, RSA isn't really that popular anymore.. but even DSA-1024 is aged a bit and many crypto researchers recommend the 2kbit key sizes already, at least for applications that are supposed to be running for a few years.
    Are there any improvements being made in the new T2 cpu series, compared to the T1 I ran the tests on?
    - The T1/T2 brochure also lists functions like crc32, md5, sha1 hashing algorithms in the crypto cores - how can I make use of those? I've got a few applications that do heavy checksumming and file hashing, and I'd love to offload these operations to the crypto cores; however I couldn't yet find any tool that allows me to do that. The regular md5sum binaries coming with Solaris still do the math on the main cpu core.

    I'd love to, but I don't have the money for a T2 system, and even if I did, I don't have the time to play with it right now anyways :(

  • Oracle BPEL running on multi-processor/multi-core using one core

    Hi,
    At a site we have Oracle BPEL running on OAS (10.1.2) on an AIX 5.3 LPAR with 10 cores (5 processors). Recently we noticed a considerable load on the system, which revealed that Oracle BPEL seems to be bound to one processor; the other processors are therefore hardly used. Can this be solved by re-configuring OAS or Oracle BPEL to use multiple processors for separate threads, or is this an Oracle Application Server or Oracle BPEL limitation, meaning we should install multiple OC4Js to make effective use of the processors?
    Cheers,
    Peter

    Oracle® Process Manager and Notification Server Administrator's Guide
    10g Release 2 (10.1.2)
    B13996-02
    5.2 OC4J Minimum Configuration
    The following lines represent the minimum configuration for OC4J. Default values are assigned to all other configuration elements and attributes for OC4J.
    <ias-component id="OC4J">
    <process-type id="home" module-id="OC4J">
    <port id="ajp" range="3301-3400" />
    <port id="rmi" range="3101-3200" />
    <port id="jms" range="3201-3300" />
    <process-set id="default-island" numprocs="1"/>
    </process-type>
    </ias-component>
    Oracle® Application Server Release Notes
    10g Release 3 (10.1.3.2) for Microsoft Windows
    Part Number B32199-03
    4.1.1 Limited Management Support for Multiple-JVM OC4J Instances
    With Oracle Application Server 10g Release 3 (10.1.3.2), you can configure any OC4J instance to use multiple Java Virtual Machines (JVMs). You can perform this configuration change by using the Application Server Control Console or by setting the numprocs argument in the opmn.xml file to a number greater than one (1).
    The opmn.xml file is located in the following directory in your Oracle Application Server Oracle home:
    ORACLE_HOME\opmn\conf\
    To set the number of JVMs in the Application Server Control Console, see "Creating Additional JVMs for an OC4J Instance" in the Application Server Control online help.
    To set the number of JVMs by editing the numprocs argument in the opmn.xml file, refer to the following example, which shows the numprocs entry you must modify:
    <ias-component id="OC4J"> <process-type id="home" module-id="OC4J" status="enabled">
    <process-set id="default_group" numprocs="2"/>
    </process-type>
    </ias-component>
    Note, however, that this feature is not supported by Application Server Control. Specifically, Application Server Control (represented by the ascontrol application) cannot run on an OC4J instance that is running multiple JVMs. As a result, be sure that you do not configure multiple JVMs for the administration OC4J instance (the OC4J instance that is hosting the active ascontrol). If you choose to configure the number of JVMs for the administration OC4J to more than one (1), then you must use command line tools to manage your Oracle Application Server environment. For example, you must use:
    admin_client.jar for deployment, re-deployment, undeployment, start and stop applications, and shared library management
    Apache Ant for deployment, redeployment, and undeployment of your applications
    opmnctl commands for starting, stopping, and other life cycle operations on the Oracle Application Server
    Further, if you are using multiple JVMs on the administration OC4J and, as a result, the Application Server Control Console is not available, then you must make any Oracle Application Server instance configuration changes manually. Manual configuration changes often require you to shut down the Oracle Application Server instance, manually configure the relevant XML files, and then restart Oracle Application Server.

  • We are recording 2 voices over an instrumental track. Every time there is a rest in the vocal part the volume of the instrumental track noticeably increases. Is this fixable? We are using an Alesis MultiMix 8 with 2 microphones.


    Badger33rk wrote:
    Every time there is a rest in the vocal part the volume of the instrumental track noticeably increases.
    turn the Ducking menuItem off

  • IPC on multi-processor environment

    I have written a C program which uses the IPC resources "message queue" and "shared memory" on a single-processor Sun SPARC workstation (with the Solaris 2.5.1 OS).
    I would like to know whether my program can function correctly if it runs on a multi-processor SUN platform (e.g. Sun HPC model 450 with more than 1 CPU installed).
    Does anyone have any experience on this?
    Best Regards,
    Annie

    Hi.
    The IPC features will work fine in an MP environment. They are designed to let processes which are normally protected from each other exchange data. You just have to make sure that access to your message queue, and especially to shared memory, is synchronised between the processes by using semaphores (see semget(2)).
    You could also look at using just one process but multithreading it.
    The multithreaded programming guide covers the issues.
    Have you experienced a particular problem?
    Regards,
    Ralph
    SUN DTS

  • Multi-processor support!

    I must say that the only feature that will make me upgrade ever again from CS4 (or even to upgrade my MacPro) would be full multi-processor support.
    Unfortunately, Adobe has been introducing new features that turn out to be imperfect and NEVER get updated or fixed any more.
    Hoping for CS5 to come through!

    You're right. What I meant was FULL multi-processor support.
    Like saving and opening files, especially when saving large, multi-layered 16-bit files. I just tested it again: a file that is 2.91GB when opened in Photoshop (which any file I work on typically is) takes 42 seconds to save as an 8-bit file, and (after deleting layers to make it the same 2.91GB when open in Photoshop in 16-bit format) takes 5 minutes and 20 seconds to save in 16-bit, while Activity Monitor shows only one processor being utilized.
    I would like to work more in 16 bit but it's not practical now as it takes too long for just the saving part.
    At least it should be non-modal so that you could work on another image while others are being saved and/or opened.
    There are also a few filters that do not use multiple processors - like the Lens Blur filter, which works amazingly well but makes me take a coffee break every time I use it (especially with a large blur amount).

  • How to SLI & Make Use of my 8800s?

    Hey, I've looked around these forums and haven't really found this question answered yet, so...
    I have two 8800s in my Mac Pro and I would like to make use of both of them in Vista, I don't care about using them in Mac OS X.
    From what I understand, correct me if I'm wrong, just installing SLI isn't enough to make the cards work together in Vista? I need some type of obscure driver software to run them, right?
    If anyone could inform me on what exactly I have to do, I will greatly appreciate it, thanks!

    Actually that is a touch of marketing, and a bit of sleight of hand. I doubt I can find the review on Techreport where one tester "cornered" Nvidia, while testing an Intel Skulltrail system, as to what is technically going on under the hood; it sounded more like "pay me a royalty" at best.
    Unfortunately, Nvidia still seems committed to maintaining the fiction that locking its competitors' chipsets out of SLI
    http://techreport.com/discussions.x/15405
    To use the 780a's HybridPower capability, folks will need a compatible graphics card. Del Rizzo says the newly launched GeForce 9800 GX2 is the first product to support the technology, and he adds that Nvidia plans to announce another Hybrid Power-compatible graphics card next week.
    http://www.techreport.com/discussions.x/14416
    Skulltrail is the codename for Intel's ultimate enthusiast product, a two-CPU, 8-core overclockable monster machine with four x16 PCI-Express slots. Francois was showing it off with 2 Nvidia G80's nicely nestled within the bowels of a beautifully painted Alienware ninja gaming rig.
    http://blogs.intel.com/technology/2008/01/skulltrailsneak_peek_in_thei.php
    http://hothardware.com/News/IntelX38_Supports_SLI__SortOf/
    Of the most interest to future Mac Pro owners -
    *Nvidia: Intel Nehalem mobos will get SLI support*
    +by Cyril Kowaliski — 1:07 PM on July 14, 2008+
    The winds are changing at Nvidia. After testing the waters by letting Intel add SLI support to its pricey dual-CPU Skulltrail platform, Nvidia has announced that Intel's X58 Express chipset will also support SLI multi-GPU configurations. Code-named Tylersburg, the X58 chipset should complement next-gen Nehalem desktop processors in the high-end enthusiast space.
    http://techreport.com/discussions.x/15113
    http://www.techreport.com/archive.x?date=2008-2-26&tags=Motherboards&types=blog%2Cnews

  • Lightroom does NOT make use of new/fast hardware!

    For those who are considering a new computer here's my experience....
    Lightroom 4 does NOT make full use of new/fast CPUs / SSDs!
    I've been frustrated with Lightroom's speed since launch and held on to my "old" Xeon workstation for two years. I recently invested in the latest Intel CPU and I/O technology and I'm very disappointed at how Lightroom 4 does not make use of it. It's most noticeably sluggish when moving from one photo to the next during editing. This may seem small, but when I edit hundreds of images every day it really slows me down.
    See my system specs at the very bottom of this post. I don't think there's a bottleneck in performance! I love my new setup and it's a pleasure to use the OS, BUT I'm so disappointed every time I load up Lightroom. (My typical Lightroom edits are 600-900 sRAW images - I edit down my photos in Photo Mechanic 5; it's just too slow in Lightroom.)
    Lightroom still takes time to load each image when advancing from one image to the next during editing. It isn't smart enough to cache, say, 100 images in advance (even though 1:1 previews have been created!).
    It won't make use of more than 8GB of RAM, no matter how big the RAW working set is.
    It is very slow switching from one module to another.
    Lightroom JPEG export won't make use of more than 25% of the processing power of the new dual Intel Xeon E5-series processors (12 cores). 6-9 cores are literally taking a nap during export!
    I’ve tried everything found in Adobe suggestion link below:
    http://helpx.adobe.com/lightroom/kb/optimize-performance-lightroom.html
    * I definitely tried 1:1 render previews and all of the other jazz found in the link directly above!
    * Running all latest firmwares and updates
    I hope the next version of Lightroom will actually make use of good hardware instead of just adding more features. We desperately need a snappy, responsive Lightroom for the sake of productivity! I'm not here to put down Adobe; I'm a big fan of Adobe and Intel and have been using them for 15 years.
    My Setup:
    Dell Workstation T7600
    Dual Xeon E5-2630 2.5GHz (12 Cores Total)
    64GB 1600Mhz RAM
    OCZ RevoDrive 3X2 - 1500MB/s (Boot / Camera Raw Cache)
    4 Intel 520 Series SSD in RAID0 with dedicated 1GB Cache SAS/SATA PCIe 3.0 controller (Working RAW files)
    Nvidia GTX680 - 4GB VRAM (I know this made for 3D but it has PCIe 3.0 connectivity - Quadro 4000 PCIe 2.0 wasn't any better)
    Drobo-S 2GBx5 (Strictly Storage)
    Windows 7 Ultimate (Bare minimum! Just Adobe CS6 suite and Microsoft Office 2010)

    Can you copy this to the Feature Request forum so we can vote on it please?  http://feedback.photoshop.com/photoshop_family  And then post the link here so we can find it.  It'll get my vote.
    There are a couple of others you might like to vote on too, such as using the embedded preview for editing down.  http://feedback.photoshop.com/photoshop_family/topics/lightroom_capability_to_display_embedded_preview
    In terms of workarounds for now, if you haven't tried the 4.2RC yet, you might find the module switching a bit faster.  And for better use of your exports processing power, try splitting your export into 3 or 4 and running them concurrently (i.e. set the first going, and immediately set the second going, etc.) as that'll make better use of the extra cores if your hard drives are able to keep up with them.

  • Multi processor support Win 7 HP

    Dear experts,
    I'm running a Mac Pro 2 x 2.8 quad-core (8 cores in total) and installed Win 7 Home Premium 64-bit in a Boot Camp partition. Everything runs fine, except that I noticed that Win 7 does not make use of the 2 processors. It only sees 1 processor (4 cores), and as a result Geekbench shows half the performance I get when running OS X Lion.
    Is there a way to get WIN7 looking at the other processor also?
    Your help greatly appreciated!
    Ruud

    Hi Hatter,
    The only reason I installed Win7 on my Mac is to support a single application, my Tacx VR home trainer software. I never would have guessed that supporting a home trainer would require a professional edition of Windows.
    Microsoft built the support for multiple processors into Win 7 Professional but decided to remove it from the Home Premium version. That is what I call crippling, and I think this is a ridiculous decision. Not only are driver problems an extra bonus when using Windows; even the operating system itself does not support your hardware to the fullest.
    I would like to see the chart you are referring to. I could only find a lot of misleading information on this subject. I could even find a Win 7 spec sheet that told me that the HP version offered support for 2 processors!
    Regards, Ruud

  • PDFs Using Formatting Objects Processor

    I'm having problems rendering pdf documents using FOP as detailed in your document 'Rendering Oracle HTML DB Reports as PDFs Using Formatting Objects Processor'.
    When I call the pdf_GrabXML to send the XML report output to the htmldb_fop_render.jsp file I get the following error:
    OracleJSP: oracle.jsp.provider.JspCompileException:
    Errors compiling:/opt/oracle/midtier/9.0.4/j2ee/home/application-deployments/default/defaultWebApp/persistence/_pages//_htmldb__fop__render.java
    8 package org.apache.fop.apps does not exist import org.apache.fop.apps.Driver;
    93 [jsp src:line #:44] cannot resolve symbol symbol : class Driver location: class htmldb_fop__render Driver driver = new Driver();
    93 [jsp src:line #:44]cannot resolve symbol symbol : class Driver location: class htmldb_fop__render Driver driver = new Driver();
    96 [jsp src:line #:47] cannot resolve symbol symbol : variable Driver location: class htmldb_fop__render driver.setRenderer(Driver.RENDER_PDF);
    NOTE:
    We did not have an oc4j directory under $ORACLE_HOME. We have, therefore, placed htmldb_fop_render.jsp and htmldb_example.xslt in the following directory: $ORACLE_HOME/j2ee/home/default-web-app. The var g_Render_URL in the javascript file (htmldb_pdf.js) was modified to http://<host name>:<port>/j2ee/htmldb_fop_render.jsp.

    Kris,
    I placed the three jar files into the $ORACLE_HOME/j2ee/home/default-web-app/WEB-INF/lib directory (recall that I don't have a oc4j directory) and I changed the library entries in the application.xml file to point to this directory. The htmldb_pdf.js now seems to find the jar files but I now get an error that indicates I'm not able to make a connection:
    java.net.ConnectException: Connection refused     at java.net.PlainSocketImpl.socketConnect(Native Method)     at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305)     at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:171)     at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:158)     at java.net.Socket.connect(Socket.java:452)     at java.net.Socket.connect(Socket.java:402)     at sun.net.NetworkClient.doConnect(NetworkClient.java:139)     at sun.net.www.http.HttpClient.openServer(HttpClient.java:402)     at sun.net.www.http.HttpClient.openServer(HttpClient.java:618)     at sun.net.www.http.HttpClient.<init>(HttpClient.java:306)     at sun.net.www.http.HttpClient.<init>(HttpClient.java:267)     at sun.net.www.http.HttpClient.New(HttpClient.java:339)     at sun.net.www.http.HttpClient.New(HttpClient.java:320)     at sun.net.www.http.HttpClient.New(HttpClient.java:315)     at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:512)     at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:489)     at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:617)     at java.net.URL.openStream(URL.java:913)     at oracle.xml.parser.v2.XMLReader.openURL(XMLReader.java:2133)     at oracle.xml.parser.v2.XMLReader.pushXMLReader(XMLReader.java:257)     at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:200)     at oracle.xml.parser.v2.XSLProcessor.newXSLStylesheet(XSLProcessor.java:557)     at oracle.xml.parser.v2.XSLStylesheet.<init>(XSLStylesheet.java:230)     at htmldb_fop__render._jspService(_htmldb__fop__render.java:76)     [SRC:/htmldb_fop_render.jsp:27]     at com.orionserver[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].http.OrionHttpJspPage.service(OrionHttpJspPage.java:56)     at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:347)     at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:509)     at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:413)     at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:765)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:317)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].server.http.AJPRequestHandler.run(AJPRequestHandler.java:208)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].server.http.AJPRequestHandler.run(AJPRequestHandler.java:125)     at com.evermind[Oracle Application Server Containers for J2EE 10g (9.0.4.0.0)].util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:192)     at java.lang.Thread.run(Thread.java:534)
    QUESTIONS:
    1. Should I place all jar files in the $ORACLE_HOME/j2ee/home/default-web-app/WEB-INF/lib directory . . . there are several jar files that are not referenced in the application.xml file.
    2. Should the port number that is defined in the htmldb_pdf.js and htmldb_fop_render.jsp files be the HTTP Server listener port?
    Thanks,
    David

  • Concurrency in Swing,  Multi-processor system

    I have two questions:
    1. This is a classic situation where I am looking for a definitive answer on: I've read about the single-thread rule/EDT, SwingWorker, and the use of invokeLater()/invokeAndWait(). The system I am designing will have multiple Swing windows (JInternalFrames) that do fairly complex GUI work. No direct interaction is needed between the windows which greatly simplifies things. Some windows are horrendously complex, and I simply want to ensure that one slow window doesn't bog the rest of the UIs. I'm not entirely clear on what exactly I should be threading: the entire JInternalFrame itself should be runnable? The expensive operation within the JInternalFrame? A good example of this is a complex paint() method: in this case I've heard of spawning a thread to render to a back-buffer of sorts, then blitting the whole thing when ready. In short, what's the cleanest approach here to ensure that one rogue window doesn't block others? I apologize if this is something addressed over and over but most examples seem to point to the classic case of "the expensive DB operation" within a Swing app.
    2. Short and sweet: any way to have Swing take advantage of multi-processor systems, say, a system with 6 processors available to it? If you have one Swing process that spawns 10 threads, that's still just one process and the OS probably wouldn't be smart enough to distribute the threads across processors, I'm guessing. Any input on this would be helpful. Thank you!

    (1) You need to use a profiler. This is the first step in any sort of optimization. The profiler does two important things: First, it tells you where the real bottlenecks are (which is usually not what you expect), and eliminates any doubt as to a certain section of code being 'slow' or 'fast'. Second, the profiler lets you compare results before and after. That way, you can check that your code changes actually increased performance, and by exactly how much.
    (2) Generally speaking, if there are 10 threads and 10 CPU's, then each thread runs concurrently on a different CPU.
    As per (1), the suggestion to use double buffering is likely the best way to go. When you think about what it takes to draw an image, 90% of it can be done in a worker thread. The geometry, creating Shapes, drawing them onto a graphics object, transformations and filters - all of that can be done off the EDT. Only copying the buffered image onscreen is the 10% that needs to happen on the EDT. But again, use a profiler first.

  • Advantages of multi processor systems?

    I am considering getting a new system to use for my photo processing applications. I see that Adobe has released LR v2.0, which supports a 64-bit OS, and has announced that CS4, whenever it is released, will also support 64 bits, at least for Windows. This being the case, switching to Vista 64 seems to be the way to go. BUT what about the processor? The choices seem to be a quad-core version from either Intel or AMD or, moving higher, an Intel Xeon processor. This gives me two more choices: use two 5100-series processors for a total of four cores, or go up to two 5400-series processors for a total of eight cores. The 5100-series approach seems to use less power than the 5400's and can be clocked faster. Also, last year I had the opportunity to quickly run a benchmark with CS3 on a Dell dual-Xeon workstation running the Windows XP 32-bit OS. It appeared that the system only managed to make use of 4 of the 8 processors.
    I have done a fair amount of searching and have not found any good answers to the question of which processor configuration will give the best performance. To keep this discussion reasonable I would like to limit it to one quad core processor vs. two dual core Xeons vs. two quad core Xeons.
    Hopefully some of you will be able to shed some insight. If any of the Adobe folks would care to comment, it would be appreciated. I suspect that Adobe has looked into this issue and has lots of information. The question then would be: can they share it? Maybe not.
    Bill

    I run a Xeon Quad Core 2.33GHz at work and a Core 2 Duo 2.66GHz at home. Both are very fast, but the Xeon Quad Core is significantly faster. Is it a difference between going home at 5:00 vs. 6:00? Nope, more like 5:00 vs. 5:10 at best. But it does make the workday more pleasant.
    I would go with Vista 64, 8GB (or more) of RAM and something like a Core 2 Quad Q6600. Pop a $40 cooler on that bad boy, overclock to 3.0+ GHz and watch it fly. For more fun, get a Skulltrail motherboard and add a second CPU when your apps can take full advantage of 8 cores.
    And install an nVidia 9800GX2 graphics card - CS4 is supposed to take advantage of advanced GPUs.

  • WLS on Multi-Processors

    A few questions about WLS 5.1 on multi-processor machines:
    1. Is there anything that needs to be done(other than purchase another
    license) for a weblogic server to work on a multi-processor system?
    2. Will WLS take advantage of all processors with just ONE invocation of WLS?
    Or will I have to run one instance of WLS for each processor?
    3. Will performance gains be uniform or will certain features gain more
    from multiple processors?
    Any answers, insights or pointers to answers are appreciated.
    Thanks.
    -Heng

    > I consider WebLogic to be a great no-nonsense J2EE implementation (not counting class loaders ;-).
    Look for major improvements in that area in version 6.0.
    Thanks,
    Michael
    Michael Girdley
    BEA Systems Inc
    "Cameron Purdy" <[email protected]> wrote in message
    news:[email protected]...
    Rob,
    I consider WebLogic to be a great no-nonsense J2EE implementation (not
    counting class loaders ;-). Gemstone's architecture is quite elaborate when
    compared to WebLogic, and BTW they spare no opportunity to compare to
    WebLogic although never by name. (Read their white paper on scalability to
    see what I mean.) I am quite impressed by their architecture; it appears to
    be set up for dynamic reconfiguration of many-tier processing. For example,
    where WL coalesces (i.e. pass by ref if possible), Gemstone will always
    distribute if possible, creating a "path" through (e.g.) 7 levels of JVMs
    (each level having a dynamic number of JVMs available in a pool) and if
    there is a problem at any level, the request gets re-routed (full failover
    apparently). I would say that they are set up quite well to solve the
    travelling salesperson problem ... you could probably implement a web-driven
    neural net on their architecture. (I've never used the Gemstone product,
    but I've read about everything that they published on it.) I would assume
    that for certain types of scaling problems, the Gemstone architecture would
    work very very well. I would also guess that there are latency issues and
    administration nightmares, but I've had the latter with every app server
    that I've ever used, so ... $.02.
    Cameron Purdy
    [email protected]
    http://www.tangosol.com
    WebLogic Consulting Available
    "Rob Woollen" <[email protected]> wrote in message
    news:[email protected]...
    Dimitri Rakitine wrote:
    Hrm. Gemstone reasons are somewhat different.
    I'm not a Gemstone expert, but I believe their architecture is quite
    different from a WebLogic cluster. Different architectures might have
    different trade-offs.
    However, out of curiosity, what are their reasons?
    Anyway, here is my question: why is running multiple instances of WL more efficient than running one with a high execute thread count?
    The usual reason is that most garbage collectors suspend all of the JVM threads. Using multiple WLS instances causes the pauses to be staggered. Newer Java VMs offer incremental collectors as an option, so this may no longer be as big of an issue.
    -- Rob
    >

  • Subscription cannot be used with multi-user device...

    I just received an email message stating "Subscription cannot be used with multi-user devices" and "Here's a quick reminder that Skype's subscriptions are for individual use only and cannot be used with multi-user devices such as PBXs"
    I'm not sure why I received it. I am the only user and I only call out via my iPhone over wifi and, occasionally, my PC.
    Could anyone please explain this to me?
    Thanks - Steve

    Hi Steve,
    Basically as long as you don't make subscription calls from more than one device at the same time there shouldn't be an issue.
    I'll look into this messaging as it seems to be a bit misleading.
    Could you check your email? I sent you an email asking for a bit more information about this notification.
    Andre
    If answer was helpful please mark it with Kudos and if issue is resolved mark it with solution. This will help other users find this answer more easily. Thanks in advance!
