Optimising Performance

What is "optimising performance" and why is it so annoying?
When I'm using GarageBand this pops up at the worst times!
What performance is there to optimise?
Sometimes it comes up 4 times in 15 minutes for me.
Help!

Have a look at this review: 
http://arstechnica.com/apple/2011/11/garageband-for-iphone-8-track-studio-in-your-pocket/
In particular, we noticed that GarageBand on the iPhone paused much more often to "optimize performance." It seems as though the app renders its applied effects or midi tracks to a temporary audio track instead of trying to generate virtual sounds all on the fly. We noticed this more on the original iPad compared to the dual-core iPad 2 in our previous review. We believe that GarageBand may be using an additional core to do such optimization in the background on A5-powered devices (including the iPhone 4S). The slower the processor, the more often effects and instrument changes will likely trigger this optimization step, so consider that if you want to run it on a 3GS or older iPod touch. The pauses are a minor irritation in our view; they shouldn't get in the way of getting serious work done, though they might be more bothersome to the casual user "playing" with the app.
To avoid the "optimizing" on an older iPod, I'd try to solo tracks and avoid changing instruments while recording. Also, quit all background applications that might still be running, and reset the iPod before you start working with GarageBand.

Similar Messages

  • Optimising Performance with Firebox

    I had a look over some of the discussions recently as I was having a few issues with the Presonus Firebox I bought last week. Thanks to helpful advice given to someone else, I was able to resolve the problems with digital crackling. I breathed a sigh of relief that I hadn't wasted my money!
    I recorded a few pieces, nothing more than two guitar tracks, to test it out. It sounded great. However, this morning I got up to start recording something and I've been having problems with the recording being stopped and the "Too Many tracks, effects, etc, try... Optimising Performance" box coming up.
    I concede that my computer isn't the most powerful of the current range, but I am only recording one track of audio and that's it. There are no other tracks playing or even muted.
    The set-up/arrangement is currently as follows:
    The Firebox is going into the PowerBook and there's a 200GB La Cie external hard drive daisy-chained off the back of it. The only applications running are GB (2.02 or whatever the latest is), the two Firebox mixer and control applications, err, Airport (I could turn this off) and Bluetooth as I'm using a Bluetooth mouse so I can sit to one end of the room and isolate external noise away from the computer.
    I've followed the standard GB advice on optimising. I've made sure settings in Energy Saver are set for highest performance. I've even tried changing to built-in audio for output (which I'm not listening to anyway as I play), but this seems to make no difference. All effects are switched off.
    The problem doesn't appear consistent. Sometimes the recording stops after a few bars, sometimes it manages three minutes before halting. Agh! It's not really the sort of guitar piece that I can edit together from pieces and until I get this part down, I can't start on the rest as I'm not playing against any click track so that the tempo flows more freely over the course of the piece. Playing a five-minute piece and thinking you've finally got it without any fluffs, only to look up and see "Optimising Performance" from across the room is exasperating!!
    I've never had any such issues with GB using built-in audio for similar. It can handle four audio tracks with limited effects quite readily. I tend to mix these down to one track and reimport them into a new one at this point.
    There's a pressing need to get this guitar track done for a friend. Once the acoustic part is in, I really don't have any need for the Firebox as such. I could go back to using the old mic set up (a rather antique Sony mic rescued from a rubbish bin in Tokyo and using a MiniDisc as sort of pre-amp) but it's noisy and the sound isn't nearly as good as the new microphone. When the recording works, that is!
    I imagine it's possible (is it?) to directly connect the audio output of the Firebox to the mic input on the computer and thus stick to built-in audio, but I'm sure that using Firewire should work. I'm hardly overloading anything currently.
    Any suggestion or tips would be most gratefully listened to!

    I think you may well be right and the computer isn't currently sufficient, but it does seem odd to me that it can't even handle one single track of audio with the Firebox connected via Firewire. I could understand five or six. Since the only thing that has changed in the set-up is the switch from built-in audio to the Presonus, it has to be that. Or rather, as you say, the computer being unable to deal with it adequately with the current RAM.
    I'd lock the tracks but there is only one of them!
    (Looks at credit card statement and wonders about upgrading..!)
    Thanks.

  • Optimising performance of interactive forms

    Hi,
    I am about to embark on a project to transfer all the external forms that my company uses to Adobe Interactive Forms for SAP.
    Some trial work has already been undertaken, which has included (attempting) to build forms in excess of 15 pages.  Unfortunately, forms of this size have proved to be very slow in rendering... even on internal machines.  What this would mean for our customers is of particular concern to me.
    As part of my project approach, I am determined to ensure that these forms are built with optimisation in mind, and that significant compromises regarding form length or interactive complexity do not have to be made.
    I have found some information on the web ([http://partners.adobe.com/public/developer/en/pdf/designer_performance_1_0.pdf]), but was wondering what other opinions are out there?
    Would building the form using XML-code, rather than through LiveCycle's front-end designer provide greater performance control?
    Any help or insights would be most welcome.
    Thanks,
    Rob.

    Here is one blog which you might find useful:
    /people/raghavendra.prabhu/blog/2010/10/21/performance-improvement-in-hcm-pf
    Thanks,
    Aravind

  • Optimising Execution speeds

    Hi,
    I'm constructing a program that sends commands down the serial port and
    reads the data coming in. This data is in the form of character strings
    which I am converting in various conversion VIs to data that can be
    displayed on dials etc. I'm experiencing problems trying to get the dials
    to run smoothly and wondered what ways other than mentioned in the help
    files are there to optimise performance. So far I have placed all the
    conversions in a separate VI which then writes to global variables. These
    globals are then called in the display VI in a while loop - iterating as
    fast as possible. I load the conversion VIs up at the start of opening the
    display VI and set them running in the background when required. My main
    problem is I want the data displayed on the dials to be as near as is
    possible to real-time. I would be grateful for any help. The serial port
    link is running at 19200 baud.
    Regards,
    Martin.

    > I'm constructing a program that sends commands down the serial port and
    > reads the data coming in. This data is in the form of character strings
    > which I am converting in various conversion VIs to data that can be
    > displayed on dials etc. I'm experiencing problems trying to get the dials
    > to run smoothly and wondered what ways other than mentioned in the help
    > files are there to optimise performance. So far I have placed all the
    > conversions in a separate VI which then writes to global variables. These
    > globals are then called in the display VI in a while loop - iterating as
    > fast as possible. I load the conversion VIs up at the start of opening the
    > display VI and set them running in the background when required. My main
    > problem is I want the data displayed on the dials to be as near as is
    > possible to real-time. I would be grateful for any help. The serial port
    > link is running at 19200 baud.
    >
    As fast as possible is usually way too fast for UI stuff, and consumes too
    much CPU time not allowing the other loop to run as often as you like. You
    can either put a delay in the UI loop, like 10ms, you can put the two
    loops together, or you can use synchronization VIs to control how often
    the UI loop is coaxed to run.
    Another thing to do is to use the profile tool to see how much time different
    VIs are taking. You can then focus on easing the load on that VI. Often it
    is due to a runaway loop that is guaranteed to eat as much processor time as
    is available. While this often seems like a good idea at the time, it can
    mean that globals are being polled 100,000s of times per second, and even
    though the controls are fast, 60-100Hz is all the screen updates anyway; so
    you want to slow it down to run less often, which will allow for the other
    tasks to run more quickly or just lighten the CPU load and make the machine
    feel more responsive.
    Greg McKaskle
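    Greg's point about free-running loops can be put into rough numbers. Below is a minimal back-of-envelope sketch in Python (not LabVIEW; the one-microsecond cost per free-running iteration is an assumed figure purely for illustration):

```python
def ui_iterations_per_second(delay_s: float, free_run_step_s: float = 1e-6) -> int:
    """Model how many times a polling UI loop runs per second.

    With no delay the loop re-runs as fast as the CPU allows
    (modelled here as one iteration per free_run_step_s), polling
    the globals hundreds of thousands of times per second. A small
    delay caps the rate at 1/delay_s.
    """
    step = delay_s if delay_s > 0 else free_run_step_s
    return round(1.0 / step)

# Free-running loop: ~1,000,000 polls/s in this model, yet the
# screen can only show 60-100 updates per second.
busy = ui_iterations_per_second(0.0)
# With the suggested 10 ms delay: 100 updates/s, still faster than
# the screen refresh, at a tiny fraction of the CPU load.
throttled = ui_iterations_per_second(0.010)
```

    Note also that at 19200 baud (8N1) the serial link delivers at most about 1920 characters per second anyway, so a 10 ms delay in the display loop loses nothing in responsiveness.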

  • RFC & SAP Performance

    Hi,
    We have a situation where Web Portal makes an RFC call. We are thinking about limiting the number of users that can log onto the portal simultaneously.
    What are the performance issues related to maximum number of simultaneous calls to this RFC? If say 100 log into the site simultaneously, 100 RFC calls would be generated. Will there be any performance related issues?
    Thanks

    Hello,
    Generally, if data is structured properly, it should not create problems.
    What causes problems for such RFC calls is the amount of time SAP takes to respond. If you have an RFC which is not written in a good way (for optimised performance), then SAP takes a lot of execution time to return data back to the calling application, and that is sometimes an issue for users, who would not like to wait 30-40 seconds just to get some data on the basis of which they will go ahead and decide further actions.
    Hope this helps.
    Thanks.

  • Performance over WAN

    Hi,
    We are using a Forms based application deployed over 9iAS Rel2. We use IE to access the application, with Jinitiator installed on the client machines.
    Presently the database and application server are in the same city, for example CITY A. All the application users are also in the same city (CITY A).
    We have a plan to move the database and application server to another city (CITY B) and keep the users in CITY A.
    The application validates at field level, and each validation goes to the database and comes back.
    I would like to know whether we will have any performance or latency issues with the new approach.
    Please let me know.
    Thanks.

    user13165454 wrote:
    > I am trying to pull LOB data from db and transmit it over WAN network and I noticed that its performance is reduced to a greater extent compared to the client-server model.
    WAN/LAN/etc does not dictate whether or not the client-server model applies.
    Client-server is a software architecture.
    It is not a hardware architecture... the client component can reside on a different h/w platform than the server component, or it may not (it can reside on the same h/w platform as the server s/w component).
    It is not a network architecture... the client component may use a network protocol to communicate with the server component, or it may not (it can use IPC instead of IP).
    > Can anyone please help me to guide to optimise performance of LOB fetch and network communication.
    From the application side, the only real things that you can govern are the amount of data to transfer (minimise it for performance) and the sizes of the data payloads in the network packets (maximise them for performance).
    In other words, only send the minimal data from the server that is needed by the client. When doing so, make effective use of "fetch sizes" (properties like InitialLOBFetchSize(), FetchSize() and RowSize()).
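    The advice above boils down to round trips: every fetch costs one WAN round trip, so small fetch sizes make latency dominate. A rough illustrative model in Python (all numbers are invented for the example, not measured):

```python
import math

def transfer_time_s(total_rows: int, fetch_size: int,
                    rtt_s: float, per_row_s: float) -> float:
    """Crude model of fetching `total_rows` rows over a WAN link:
    each fetch of `fetch_size` rows costs one round trip (rtt_s),
    plus a fixed per-row transfer/serialisation cost."""
    round_trips = math.ceil(total_rows / fetch_size)
    return round_trips * rtt_s + total_rows * per_row_s

# 10,000 rows over a 40 ms WAN link at 10 microseconds per row:
row_at_a_time = transfer_time_s(10_000, 1, 0.040, 10e-6)  # latency-dominated
batched = transfer_time_s(10_000, 500, 0.040, 10e-6)      # ~400x fewer round trips
```

    This is why raising fetch sizes helps far more over a WAN than on a LAN: it shrinks the round-trip term, which is the one that latency multiplies.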

  • SNP Optimiser Scenario

    Dear Friends ,
    The scenario is as follows :
    1. There are 16 Distribution Centers which are procuring products from 2 production plants, say Plant 1 and Plant 2.
    2. There are common products which can be manufactured in either of the production plants.
    3. The following is expected from the SNP Optimizer run: first book the capacity of Plant 1 for the common codes based on demand availability date; after the capacity is booked 100%, the product should be sourced from the other plant, i.e. Plant 2, as there is idle capacity available in Plant 2.
    4. I have been able to get the above-mentioned result, wherein the production plan is prepared through the SNP Optimiser in Plant 1 and, after 100% booking, external procurement is generated for Plant 2; this happens for top-to-bottom codes, i.e. a complete product shift FG—SFG—RM.
    5. Now the requirement is that if inventory is available for the said code in Plant 2, then the complete production, i.e. net production after deducting inventory, should be generated in Plant 2; similarly, if the inventory is available in Plant 1, then the complete product, i.e. top to bottom, should be manufactured in Plant 1. Logically, wherever inventory exists, i.e. Plant 1 or Plant 2, the balance of the said product should be manufactured in the plant which has inventory.
    Would request you to review the above scenario and let me know your suggestions on how we can map point 5 in the SNP Optimiser. Needless to say, consistency should be maintained in the solution: inventory is dynamic and can be at Plant 1 or Plant 2, as the codes are common and can be manufactured in either of the plants.
    Thanks and Best Regards ,
    Prashant Kumar

    Dear Murali ,
    Thanks for your inputs . This is how the Optimiser Performed :
    Settings in the objects: in case I put a transportation cost on the transportation lane between Plant 1 and 2, it does not procure externally; in fact it constrains the quantity, i.e. reduces it. Even in the case of PPM cost lower in Plant 1 and higher in Plant 2, it reduces the quantity rather than sourcing from Plant 2. Hence my planning run was based on the same PPM cost in both the PPMs.
    (a) Where there is stock of the SFG (second level) in Plant 1, the stock was considered due to storage cost and production was planned in Plant 1 and Plant 2. The share of the planned production figure is based on the consumption defined in the PPM. The planned production figure at Plant 1 is greater than at Plant 2; I guess because the stock was at Plant 1.
    (b) Where there is stock of the SFG (second level) in Plant 2, the stock was considered due to storage cost and production was planned in Plant 2 and Plant 1. The share of the planned production figure is based on the consumption defined in the PPM. The planned production figure at Plant 2 is greater than at Plant 1; I guess because the stock was at Plant 2.
    (c) In case there is no stock at either of the plants, there is procurement from Plant 2 and production planned at Plant 1.
    (d) Also, what would be the situation in case we have a BOM below the second level and stock is available for any of the level materials below the second-level code? Would it consider the stock and ensure planned production of the second level in the plant where stock of the third- or, say, fourth-level code is available? I.e. Plant 1 having the third level with stock would ensure that Plant 1 is loaded for production for the higher levels as well.
    I tried to make this explanatory but think made this lengthy , would appreciate your validation and advice on the above points .
    Thanks and Regards ,
    Prashant Kumar

  • Code  taking too much time to output

    Following code is taking too much time to execute (sometimes giving TIME_OUT):
    ind = sy-tabix.
    SELECT SINGLE * FROM mseg INTO mseg
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    IF sy-subrc = 0.
      DELETE itab INDEX ind.
      CONTINUE.
    ENDIF.
    Is there any other way to write this code to reduce the execution time?
    Thanks

    Hi,
    I think you are executing this code in a loop which is causing the problem. The rule is "Never put SELECT statements inside a loop".
    Try to rewrite the code as follows:
    * Outside the loop
    SELECT *
      FROM mseg
      INTO TABLE lt_mseg
      FOR ALL ENTRIES IN itab
      WHERE bwart = '102' AND
            lfbnr = itab-mblnr AND
            ebeln = itab-ebeln AND
            ebelp = itab-ebelp.
    (Make sure itab is not empty before the FOR ALL ENTRIES select, otherwise all rows of MSEG will be read.)
    Then inside the loop, do a READ on the internal table:
    LOOP AT itab.
      READ TABLE lt_mseg WITH KEY lfbnr = itab-mblnr
                                  ebeln = itab-ebeln
                                  ebelp = itab-ebelp
                                  TRANSPORTING NO FIELDS.
      IF sy-subrc = 0. "a matching MSEG entry exists, as in the SELECT SINGLE
        DELETE itab. "index is automatically determined here from SY-TABIX
      ENDIF.
    ENDLOOP.
    I think this should optimise performance. You can check your code's performance using SE30 or ST05.
    Hope this helps! Please revert if you need anything else!!
    Cheers,
    Shailesh.
    Always provide feedback for helpful answers!
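    The same pattern, one bulk read up front and cheap in-memory lookups inside the loop, is worth knowing outside ABAP too. A minimal Python sketch (the data and key names are invented for illustration; the set of keys stands in for lt_mseg):

```python
def drop_rows_with_match(itab, mseg_keys):
    """Keep only the itab rows that have no matching MSEG entry.

    Mirrors the ABAP rewrite: the bulk SELECT ... FOR ALL ENTRIES
    becomes a prefetched set of keys, and READ TABLE becomes an
    O(1) set-membership test instead of one database query per row.
    """
    return [row for row in itab
            if (row["mblnr"], row["ebeln"], row["ebelp"]) not in mseg_keys]

itab = [
    {"mblnr": "5000000001", "ebeln": "4500000001", "ebelp": "00010"},
    {"mblnr": "5000000002", "ebeln": "4500000002", "ebelp": "00020"},
]
# Pretend the bulk read found a movement-type-102 document for the first row only.
mseg_keys = {("5000000001", "4500000001", "00010")}
remaining = drop_rows_with_match(itab, mseg_keys)  # only the second row survives
```

    The win is the same in both languages: N database round trips collapse into one, and the per-row work becomes a hash lookup.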

  • Illustrator Files Heavy/Big?

    I was on the net this morning researching a certain trick in Adobe Illustrator CS4. Then I came across a few comments describing Illustrator designers as slow due to application features. I was disappointed. I have been on FHMX, et al., for years. But the marvels in Illustrator are awesome.
    For starters, most of us designers find comfort within the same box of things we were taught in college rather than continuing our education with research. As such we develop a psychological condition that finds Illustrator not worth it. Remember, FHMX was discontinued a few years ago; technology will be based on compatibility with Illustrator. How are we going to cope?
    There are lots of things that can be tweaked in Adobe Illustrator to improve the speed and performance of the application. The most basic will be identifying transparencies (they have a habit of increasing file size).
    Then go to Preferences and pull the slider under Units & Display Performance up or down.
    On Layers, turn off layers containing PSDs or TIFFs if they aren't being touched.
    Work in Pixel Preview mode and only turn on Overprint Preview when you want to investigate underdraws and knockouts.
    And perhaps open Photoshop, reduce the PSDs in your file and save a copy in a different folder. Re-link the image with a smaller image (this is the proof/preparatory stage after all). Only relink the high-res image when printing out a proof or generating a PDF. Remember also to capitalise on the PGF features in the application by saving a PDF-compatible document.
    Re-link high-res images only when printing proof or on Final Artwork
    Doing step and repeats? Re-link to high-res, work in Outline mode, then generate PS or whatever final output file format you use.
    The final file could be heavy, yes, but it's worth it because you have linked high-res PSDs and compensated for dot gain; in any case this document will be backed up to save disk space after generating your len files.
    Trust me, this optimises performance considerably during the proof stages.
    Hope this makes sense.
    Bye for now
    Questions?: [email protected]

    is this what you mean?
    Processor
    2.5GHz dual-core Intel Core i5 processor (Turbo Boost up to 3.1GHz) with 3MB L3 cache
    2.9GHz dual-core Intel Core i7 processor (Turbo Boost up to 3.6GHz) with 4MB L3 cache
    Memory
    2.5GHz model: 4GB of 1600MHz DDR3 memory (configurable to 8GB)
    2.9GHz model: 8GB of 1600MHz DDR3 memory
    Storage
    2.5GHz model: 500GB 5400-rpm hard drive (configurable: 750GB 5400-rpm hard drive; 128GB, 256GB or 512GB solid-state drive)
    2.9GHz model: 750GB 5400-rpm hard drive (configurable: 1TB 5400-rpm hard drive; 128GB, 256GB or 512GB solid-state drive)

  • PRAM and slow computers...

    Hey there, I'm doing a little recon for my sister; she is often calling me (because I have a Mac too) to ask what she can do to speed up her computer. I was looking around for her and found a few things, but I wanted to see them explained a bit...
    What is PRAM and how would I clear it?
    What other things can I suggest for her?
    I have seen MacJanitor, Cache Out X and AppleJack; are these worth the download?
    She thinks the slowness is due to extensive use of Photoshop (she's an art student), but since I am far away from her, in a different province, I can't really diagnose it myself. So what long-distance suggestions can you give?
    Thanks in advance
    mad-elph

    Hi Mad-elph
    I disagree that repairing disk permissions would be in any way useful. It is a troubleshooting step, not system maintenance.
    Also, if you look at the link provided by the previous poster you will see what information is contained in PRAM and that clearing it won't help (again, another troubleshooting step).
    Something like Macjanitor or Onyx would be a good download - running the automation mode in Onyx (that is the program I am most familiar with) will blow out some of the dust in the system by running the maintenance scripts.
    Be aware that if you do run the automation, and it is set to delete system/user/kernel caches, you will observe a light system slowdown as the caches are rebuilt.
    There are various ways to enhance the performance of your sister's machine, but we would need to know a little more detail about it. If you can obtain the specification from Apple Menu > About This Mac it would be helpful, especially:
    Processor Type/Speed
    RAM
    Hard Disk (total size and free space)
    Graphics Card
    Operating System Version
    Let us know and it will be easier to define specific steps for optimising performance.
    Cheers,
    Rich
    Powerbook G4 DVI, 1Ghz   Mac OS X (10.4.6)  

  • While the Nokia site says the Lumia update is avai...

    Hi all,
    If you have read the thread on the update release and visited the update webpages, but when you check in Zune it says you have the latest update installed, please read the following.
    The status indication on the Nokia website tells you whether the update is approved and made available for distribution through the Microsoft update path. When the update actually becomes available for your Lumia 710 or 800 you will receive a notification on the phone after which you can install the update using Zune.
    Also please be aware the Zune update checker uses a cache which will allow you to actually check the status in Zune once in 24 hours. If you check again within this time you will receive the same result as before even if the update becomes available in the meantime. So to make sure you get the update as soon as it becomes available it is best to wait for the notification on the phone.
    Propagation through the update servers for Zune may take some time and is something Nokia has no control over, so please have a little more patience when you see the status as 'Update Available' on the Nokia webpage, and wait for the notification on your Lumia Windows Phone.
    When the update becomes available for your phone it will come in three parts: two updates for the Windows Phone OS and the update for the Nokia firmware. These will be applied sequentially, but there can be some time between them being offered through Zune. Please make sure you do check for additional software when you have installed the first and/or second one.
    'OK, but what about the promised update apps like Camera Extras?'  you may ask.
    Please go to the update apps webpage and read the notes which state;
    *Camera Extras will be available from Marketplace during June and July 2012. To optimise performance of the Camera Extras application with the Nokia Lumia 610 certain features may be limited. Nokia City Lens will be available for Nokia Lumia 900, 800, and 710 during 2012.
    Hope this helps,
    Kosh
    Press the 'Accept As Solution' icon if I have solved your problem, click on the Star Icon below if my advice has helped you!

    I am also a proud N9 owner, but I have the more shy 16GB version. I bought mine from Optimus and, although I have never done anything to unlock it, from day 1 I stuck my Vodafone micro-SIM in and it worked perfectly (:
    I have also been waiting since it was announced. This would be much easier if Nokia made the update available through the Nokia Suite or even on a support website.
    We wait...

  • HDD setup - Internal RAID, External RAID or single discs

    I know variations on this question have been done to death on this forum, but I'm still struggling to understand how to apply it to my specific requirements and hardware.
    I am looking to optimise the performance/speed of my Hackintosh for video editing on Premiere and grading on Resolve Lite, Colorista and AE.
    I work mainly with Pro-Res files, sometimes with AVCHD, and occasionally with raw.
    My Hackintosh specs are as follows:
    Processor: 4th Generation Haswell Core i7-4770, Quad Core 3.4GHz (3.9GHz Turbo Boost)
    Motherboard: GIGABYTE 8 Series GA-Z87N-WIFI
    Memory: 16GB Crucial Ballistix Tactical 1600MHz
    Primary Graphics: Zotac Nvidia GTX 770, 2GB Video Memory, 1059MHz Core Clock, 7010MHz Memory Clock
    Secondary Graphics: Intel HD 4600 Graphics
    Maximum Video Resolution: 2560x1600
    Hard Drive: 240GB Crucial SSD, 32MB Cache (OS and Programs)
    Hard Drive: 1TB Toshiba SATA III 7200RPM, 32MB Cache
    Network Card: Integrated 10/100/1000 Gigabit Ethernet LAN
    Wifi: Atheros AR9287 802.11BGN 300Mbps
    Sound: High Definition 7.1 Channel Audio
    Power Supply: Corsair CX600M 600W Modular PSU, 80 PLUS Bronze
    Chassis: Bitfenix Prodigy
    External Ports: 1 x PS/2 Keyboard/Mouse Port; 3 x HDMI Ports; 3 x DVI Ports; 1 x DisplayPort; 2 x Antenna Connectors; 6 x USB 3.0/2.0 Ports; 2 x USB 2.0/1.1 Ports; 2 x Ethernet Ports; 1 x Optical S/PDIF Out Connector; 5 x Audio Jacks
    Operating System: OS X Mavericks
    I have also purchased a USB3 Lacie 4Big Quadra 8TB external RAID.
    I currently run the OS and programs from the SSD, and have partitioned the 1TB SATA III HDD, using one partition for scratch and the other for projects, media and exports.
    My question is, what would be my best solution for optimising performance?
    1. Buying a couple more HDDs, installing them internally and striping along with the existing disc as a 3-disc RAID 0, using the Lacie as backup/storage only?
    2. Buying a couple more HDDs, installing them internally as single discs (SSD for OS & Apps; HDD1 for Scratch; HDD2 for Media and Projects; HDD3 for Exports), again using the Lacie for backup/storage only.
    3. Using the Lacie as an external 4-disc RAID 0, and the existing internal HDD as storage. Buying further HDD(s) for backup.
    Also, if RAID is the answer, what is the best configuration in terms of which disc(s) to point Premiere to? I.e. does Premiere treat the RAID as a single disc for I/O purposes, creating a bottleneck at high bandwidths, or does it act as it would if different file types were pointed at separate individual discs?
    Many thanks in advance for helping the IT dunce!

    Start here: Tweakers Page and continue reading all the articles there.

  • CPU and RAM for TimesTen

    I have test performance statistics for our java application (using oracle db) on a 4 X Quad-Core AMD Opteron™ 8360 SE cpu node with 64 GB RAM.
    I am trying to collect performance statistics for TimesTen on a 2 X Dual Core Intel Xeon 5160 cpu node with 8 GB RAM.
    My question is how does TimesTen treat processors. Is dual core considered as 2 cpu or single cpu?
    So am I comparing performance - (2 X 2 = 4) TT vs (4 X 4 = 16) Oracle DB ?
    Any suggestions on how should we compare the performance statistics obtained on the two machines ?

    Please see my answers embedded below:
    1. Under what conditions does TimesTen use multiple processors ?
    CJ>> There are many background daemon components to TimesTen that are multi-threaded and may use multiple processors to some extent as required (main daemon, sub-daemons, replication agent, cache agent).
    The main daemon is a very lightweight process that is purely supervisory in nature. It is not involved in database transaction processing etc. and so its CPU usage is very low.
    Each active datastore (database) has a dedicated managing sub-daemon. Again, this is not directly involved in transaction processing but it too has several threads. The checkpointer thread may use a lot of CPU while a checkpoint is occurring. The log flusher thread will use CPU in proportion to the intensity of write operations (primarily INSERT/UPDATE/DELETE) in the application workload (this thread is responsible for flushing the transaction log buffer to the log files on disk).
    If replication, or AWT caching, is used the replication agent transmitter and/or receiver threads may use significant CPU depending on the replicated workload.
    If Cache Connect is being used then the cache agent threads may use significant CPU when e.g. an AUTOREFRESH is in progress.
    These are all background activities and you do not have direct control over how many CPUs or how much CPU time is used by them. They will try to use what they need and the O/S will allocate them time based on available system resources.
    Equally significant is the CPU power used to process application transactions. Each application process/thread that is executing via a separate connection will potentially be executing concurrently within TimesTen; hence if you have 20 application threads (or processes), each with its own connection to a TimesTen datastore, then TimesTen could potentially use up to 20 CPUs/cores concurrently. This is the key factor in your ability to control how many CPUs/cores TimesTen uses. The crucial things to understand here are:
    1. In direct connection mode there is no dedicated TimesTen server process. All DBMS logic is encapsulated in the TimesTen library (libtten.so). All database query and transaction processing is actually executed in the context of the application thread that makes the database call. Hence, as I mentioned, if there are 'n' concurrent application processes/threads, each with a separate database connection, then TimesTen can potentially use 'n' CPUs/cores concurrently. Essentially, # concurrent connections = max concurrent CPUs/cores.
    2. In client server there is a dedicated server process or thread (depending on configuration) for each application connection and so again # concurrent connections = maximum number of concurrent CPUs / cores that will be used.
    2. What techniques do I use to ensure maximum performance on this -
    a. Machine with 2 cpu (each is a dual core processor) and 8 GB RAM.
    CJ>> The number of physical CPUs (chips) is irrelevant. What matters is the number of cores. In this case 4. So, the system can concurrently execute up to 4 tasks. Anything more than 4 and tasks may have to wait for an available CPU in order to execute. Since there are no blocking operations within TimesTen database processing, from a TimesTen perspective this machine can execute 4 application threads performing database access at maximum speed. Of course in reality CPU time is needed for the application, O/S, TT background processes etc., so one would aim for fewer than 4 concurrent database processes. This assumes of course that there is no blocking at the application level either and that these processes or threads can therefore spend 100% of the available time executing. If the application blocks for any reason (e.g. waiting for the next 'request' from somewhere) then this introduces idle time, and so one can increase the number of concurrent application processes/threads to use up this idle time, thereby increasing overall throughput.
    b. Application using direct connection mode with a WebLogic application server.
    CJ>> See my comments above. Generally one would use a connection pool and configure the number of connections to optimise performance (some experiments will be needed to arrive at the optimal value, since it is very much dependent on the application workload and processing model).
    c. All data is cached in TimesTen (using Cache Groups) for read-intensive operations (there are a few write operations as well, but TT is mainly for the reads). Should the connections be 4 or 3 in this case?
    CJ>> If the volume of cache refreshes is low then you don't need to 'reserve' much CPU for checkpointing and logging, and so you have all 4 cores available for application + database processing (plus O/S etc.). If the application is mostly using the database and not waiting for stuff outside the database, then the optimal number of connections is probably in the range of 3-6. If the application does a lot of non-database work, which may include waiting for things, then the optimal number of connections will be much higher. Again, you need to experiment to find out what is optimal.
    Chris

  • Glossary of XI Terms

    Hi, I am trying to build up my glossary of XI terms and was wondering if someone could give me the meanings (exactly what they are and do) and maybe an example of each of the following. I will reward top points for answers. Thank you:
    Custom Mapping
    Value Mapping
    Acknowledgements
    Advanced Optimisation
    Performance Tuning
    Streamlining Processes
    Cut Over
    Integration Testing
    Integration Engine Tuning
    Advanced Queue Processing
    Proof of Concept
    I know these are relatively simple, but I just want to make sure I understand them better and what's involved.

    Hi Alex,
    i can help you for some terms:
    > Custom Mapping
    Mapping done without Message Mapping Tool, but with Java Mapping, ABAP Mapping or XSLT Mapping
    > Acknowledgements
    There are two types of acknowledgements:
    Transport acknowledgements: to confirm that the message has been successfully delivered.
    System acknowledgements: to confirm that the message has been delivered and successfully processed by the target system.
    For IDoc acknowledgements, take a look at this document:
    https://websmp206.sap-ag.de/~sapdownload/011000358700003477212005E/HowTo_IDOC_Ack_20040817RR.pdf
    > Performance Tuning
    Tuning of the application server for performance optimization:
    for J2EE tuning take a look to Note 723909 - Java VM settings for J2EE 6.30/6.40/7.0
    Hope this helps,
    Francesco

  • Determine indexes by rule of thumb - no data or table structures

    Hi,
    I'm doing some research (I would really like to see people's answers).
    Could someone give me an idea of the best way to index the tables that appear in the statement below, to optimise performance?
    select p.fname, p.sname ,p.personid,av.availid, av.adate, nwa.hospitalid,
    nvl(to_char(av.astart,'HH24:MI'),'Not Specified') as ActualStart, nvl(to_char(av.aend,'HH24:MI'),'Not Specified') as ActualEnd, av.anyearly, av.anymiddle, av.anylate, av.anynight
    from tblperson p
    left outer join tblavailability av on p.personid = av.personid
    left outer join tblnurseworkarea nwa on p.personid = nwa.personid
    order by 1, 2;
    av.anyearly, av.anymiddle, av.anylate and av.anynight are all boolean fields (1/0).
    Please, I need someone to tell me how they would index the tables used here, from instinct and rule of thumb.
    Much appreciated.

    what about if the query were like so:
    select p.fname, p.sname ,p.personid,av.availid, av.adate, nwa.hospitalid,
    nvl(to_char(av.astart,'HH24:MI'),'Not Specified') as ActualStart, nvl(to_char(av.aend,'HH24:MI'),'Not Specified') as ActualEnd, av.anyearly, av.anymiddle, av.anylate, av.anynight
    from tblperson p
    left outer join tblavailability av on p.personid = av.personid
    left outer join tblnurseworkarea nwa on p.personid = nwa.personid
    WHERE av.availid = 3
    order by 1, 2;
    Any difference in what you would do?
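    Not an authoritative answer, but here is a rule-of-thumb starting sketch (my illustration, assuming Oracle and reusing the table/column names from your query; real choices should be checked against actual data volumes and EXPLAIN PLAN):

```sql
-- tblperson.personid should already be the primary key, so it is
-- indexed for free. Index the join columns on the two outer-joined
-- tables so the left outer joins can probe an index instead of
-- full-scanning each table per join:
CREATE INDEX ix_avail_personid ON tblavailability (personid);
CREATE INDEX ix_nwa_personid   ON tblnurseworkarea (personid);

-- For the second variant, the predicate "WHERE av.availid = 3" makes
-- the outer join to tblavailability effectively an inner join. If
-- availid is the primary key of tblavailability it is already indexed;
-- otherwise:
CREATE INDEX ix_avail_availid ON tblavailability (availid);
```

    Note that low-cardinality boolean columns such as av.anyearly are generally poor index candidates on their own, since an index that matches half the rows filters out too little to be worth the lookup cost.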
