Optimization ideas?

hi all

We have a cube which has two large sparse dimensions; the other three are dense. The sparse dimensions grow rapidly every month as we add new customers to the outline. For all 12 months of 2003, my calc script takes less than 5 minutes. My script is very simple:

fix(jan:mar, 2003)
    x = x * 55;
endfix

Basically we update cells with new data values. This script needs to run on the entire database in a single pass, which is why we kept it as a calc script. When we built the Jan 2004 data, my outline grew from 20 MB to 55 MB. Is it reasonable for it to grow like this? And my calc script took almost 3 hours instead of completing in less than 5 minutes. Sometimes I wonder: am I doing something wrong? I expect that if I load the Feb data, my outline will grow to around 70 MB and the calc will take even longer. What is the best way to handle this? Any suggestion will be greatly appreciated.

regards


Similar Messages

  • ITunes 7.01 Stuttering on MBP

    Good morning,
    I know there are a million issues with iTunes 7, but I haven't seen too much about stuttering on the mac side.
    I upgraded to 7 and then 7.01 on my MBPro, and am having a terrible stuttering problem. It tends to be when mail is checking and downloading new mail and/or any other network activity. It certainly wasn't this way before. I don't have much open, and I have 2g of ram. I am connected to an external monitor (have tried my 17" and my 30" and it does it on both.)
    Any ideas on this, or is this just a current "feature"?
    Thanks.
    sqe

    I concur: iTunes 7, while nice and fancy, performs much worse than iTunes 6 on my 15" Powerbook G4 (1.67 GHz, 80 GB HDD, 1 GB RAM, 10.4.8). I don't usually have stuttering issues, but instead get the "spinning beachball of death" every few minutes. Looking at Activity Monitor, this occurs most often when iTunes fires up the hard drive.
    I'm a person who tends to skip to different parts of songs (that is, makes very good use of the scrubber bar!), and usually skip around in playlists, depending on my mood. I realize this doesn't help, but this is still unacceptable. When I skip songs, it plays fine for 30 seconds or so, then goes into freeze mode for around 5 seconds (although it sometimes takes >20) while it accesses the drive. Should I change songs again, the freeze is back!
    This gets VERY irritating when attempting to A. Find a song lost in the library. B. Add album art manually - it's ALWAYS SPINNING! Argh!
    In iTunes 6, these types of performance issues rarely occurred - at least I never noticed one! What's surprising is that my Powerbook seems far more sluggish on iTunes than our 7-year-old PowerMac G4! Wow. Considering the G4 barely meets Tiger's requirements and audio playback takes 30% of its CPU time, this is very odd. To be fair, the Powerbook's drive is a 5400 RPM model, while the G4 has two 7200 RPM drives in it (still on the original ATA/66 bus).
    So, does anyone have any fix, or optimization ideas?
    I'm about to post my own thread - specifically for that question.
    -Dan

  • How to Resize a picture

    hi
    I am acquiring images from a webcam using LabVIEW. I want to reduce the size of the picture before saving it. Currently the image is 300x200; I want to save it at a smaller size because I will be displaying it on a mobile device, which requires small pictures. Any idea how to do this in LabVIEW?

    Attached is a VI which resizes a 2D array (LabVIEW 7.0). It was designed to resize pictures, and for pictures of your size it should be fast enough. I don't think webcams use 24-bit color, so you may have to make some modifications. Also, it doesn't do any "clever" stuff like averaging color values. It just takes some of the elements from the original, which seems to be good enough.
    From what I saw, this VI gets slow when handling large arrays (about a second for a 2800x1500 array). Does anyone have any optimization ideas? I know that when Windows creates thumbnails it also takes time, so I'm not too optimistic.
    Message Edited by tst on 07-27-2005 12:14 PM
    Try to take over the world!
    Attachments:
    Resize 2D array2.vi ‏73 KB
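    The "take some of the elements, no averaging" approach the VI uses is plain nearest-neighbor downsampling. A minimal sketch of the same idea in Java (the class and method names, and the 300x200 example size, are illustrative, not taken from the attached VI):

```java
public class NearestNeighborResize {
    // Resize a 2D pixel array by sampling source elements (no averaging),
    // mirroring the "take some of the elements" approach described above.
    static int[][] resize(int[][] src, int newRows, int newCols) {
        int rows = src.length, cols = src[0].length;
        int[][] dst = new int[newRows][newCols];
        for (int r = 0; r < newRows; r++) {
            for (int c = 0; c < newCols; c++) {
                // Map each destination cell back to the nearest source cell
                dst[r][c] = src[r * rows / newRows][c * cols / newCols];
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int[][] img = new int[200][300];          // a 300x200 image as rows x cols
        for (int r = 0; r < 200; r++)
            for (int c = 0; c < 300; c++)
                img[r][c] = r * 300 + c;
        int[][] small = resize(img, 100, 150);    // halve both dimensions
        System.out.println(small.length + "x" + small[0].length); // 100x150
    }
}
```

    For a 300x200 source this is a few tens of thousands of array reads, which is consistent with the reply's observation that it is fast for small pictures but slows down around 2800x1500.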

  • I have a dual boot system. Windows xp and Windows 7. Photoshop elements 7 would not run on Windows 7, so I purchased Elements 13. Slow as christmas. Not worth using at the speed it takes to do work on photos

    I have a dual boot system, XP Pro and 7 Pro. Photoshop Elements 7 runs fine on XP, but would not run on Windows 7. I bought Elements 13: very, very slow. Too slow to work on photographs. I have bought new RAM and tried every other optimization idea I have found online.
    Any ideas? Athlon 64 X2 2+ GHz, 4 GB RAM, Asus M2N-E motherboard, NVIDIA video card

    Please post Photoshop Elements related queries over at
    http://forums.adobe.com/community/photoshop_elements

  • Ideas for rejuvenating MacBook in order to optimize GarageBand performance?

    Hello,
    I am finally about to start recording my album, using a very basic kit of Garageband 3.0.4, an Audio Technica AT4033 mic, Edirol UA-25 USB audio interface and various acoustic and electronic instruments (drums, percussion, vocals, piano, guitar, old analogue synth, etc). Backing up everything as I go onto a Lacie Rugged external hard drive. There's going to be quite a few layers of sound involved (lots of harmonies) and I'll probably be using some of GarageBand's built-in effects such as reverb.
    The definite aim is to manipulate these admittedly modest resources to create a finished product that is as professional and high-quality sounding (once it has been mastered) as anything that you would hear in the charts.
    I have a majorly limited budget so cannot afford a whole new Mac laptop at the moment.
    My question is, how can I best tweak my existing late-2006(?) MacBook so that GarageBand performance will be as reliable, latency-free and quick as possible? I really want to avoid GarageBand crashing and losing valuable recordings and work.
    I already have a few ideas:
    --Have just ordered a Kingston 2GB RAM memory module kit (KTA-MB667K2/2G) to increase my RAM from 1GB to 2GB
    --I've repaired Disk Permissions and done a full cleanup using CleanMyMac
    --Was thinking about transferring everything except perhaps the Home folder to a large external FireWire hard drive to free up as much internal HD space as possible, as I currently only have 35.3 GB available
    --Was also considering buying a secondary used (& cheap) Mac laptop from eBay for everything else (internet, other work, iTunes, etc.) and using my MacBook for GarageBand only, in order to increase the internal hard disk's lifespan
    I would welcome any and all comments, recommendations and suggestions.
    Many thanks in advance
    kris

    kristopher19 wrote:
    Thanks for your reply gjmnz. I had checked my mac's specs on macupgrades.com and also did the Crucial memory scan, and these sites recommended a maximum of 2GB in 2 x 1GB PC5300 modules, which I've now fitted. Do you think it could actually take a bigger RAM upgrade, though?
    I would not know without doing a check; I was just going by the specs in your sig. I also have a white MacBook 2 GHz with a 667 MHz bus, and it is currently running 4 GB of RAM and can actually take 6. With the white MacBooks there can be a difference between the originally advertised specs and what they can actually handle. I used the link I posted above to discover what my MB can handle; it may be that my MB was late in the refresh cycle and yours was early. Perhaps double-check with the OWC site in the link. As with the RAM, many of the MBs can handle much bigger and faster internal drives, even solid state. In my experience with PCs, benchmarking the average read/write speeds of internal and external drives shows that money is best spent on the internal drive rather than on USB 2.0 or FW400 external drives, even ones spinning at 7200 rpm.
    Edit: Just did a check on the 2,1 macbook and here is a quote from the OWC site
    MacBook2,1 (All) - Install up to 4.0GB total memory, uses up to 3.0GB.
    Note: Although limited to physically utilizing a maximum of 3GB, there is a performance
    benefit due to 128 Bit addressing when 4GB( 2GB x 2 Matched set) is installed.
    Just double check yourself...
    Message was edited by: gjmnz

  • SQL query optimization... Any idea?

    Hi all.
    Our DB (Oracle 9i RAC) is handling the prepaid service of a GSM mobile operator.
    We're having a query running every night to list the used vouchers from a table containing the vouchers data.
    Here is the query:
    select ticketno||' '||serialid||' '||used||' '||vouchervalue||' '||usedby from smv_avoucher where state='2' and trunc(used)=trunc(sysdate-1);
    As you can see we scan the entire table for used vouchers (state='2') for the previous day (sysdate-1). The 'used' column contains the date the voucher was used.
    Can this query be optimized? How can you improve this very simple query, or make it nicer for the DB?
    The reason we are trying to optimize this query is that it takes a long time to execute and generates "snapshot too old" error messages. We could use a large rollback segment for it, but first we would like to find out if the query can be optimized or not.
    Thank you for your insights.

    Thank you for your answers.
    "What is the execution plan of this query? Can you post it?"
    Operation                      Object
    SELECT STATEMENT ()
    TABLE ACCESS (FULL)           SMV_AVOUCHER
    "How many records does this table contain?" About 25 million records.
    "Do you have any indexes on smv_avoucher?" Yes, we do have several indexes, but not on 'used'.
    "Also, you have to make sure you have the most current statistics collected." Sorry, would you mind clarifying this? I am not sure I fully understand.
    "It seems to me that this query does a full table scan, since you use trunc(used) (unless you have a function-based index on trunc(used))." It does indeed.
    "If you have an index on used, it won't be used, since you apply a function to the column. Oracle will use a full table scan instead." I get it. Thanks for this information.
    "I assume the data in this table is frequently changed. These circumstances may lead to 'snapshot too old'." The table is updated very frequently (subscribers use vouchers to recharge their prepaid phones, and new vouchers are provisioned into the DB all the time). This is a 5-million-subscriber network.
    "My initial suggestion would be to get rid of trunc(used)=trunc(sysdate-1) and replace it with something like used between trunc(sysdate-1) and trunc(sysdate). Of course, you have to have an index on used."
    I will create this index and try again.
    "About column state ... how many distinct values does it have?" Only 5 distinct values.
    "Might be a good candidate for an IOT." Unfortunately we are not at liberty to make such changes. Thanks for the suggestion though.
    Regards
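    Putting the suggestions above together, the rewrite would look something like the sketch below (the index name is invented for illustration; the table and columns are from the thread). A half-open range (>= and <) is used instead of BETWEEN so that a row stamped exactly at today's midnight is not picked up twice on consecutive nights:

```sql
-- Index on the raw column, so the range predicate can use it
CREATE INDEX smv_avoucher_used_idx ON smv_avoucher (used);

-- Range predicate instead of trunc(used); applying a function to the
-- column would disable the index and force a full table scan.
SELECT ticketno || ' ' || serialid || ' ' || used || ' ' ||
       vouchervalue || ' ' || usedby
FROM   smv_avoucher
WHERE  state = '2'
AND    used >= TRUNC(SYSDATE - 1)
AND    used <  TRUNC(SYSDATE);
```

    This is only a sketch of the reply's suggestion, not a tested statement against the poster's schema.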

  • Ideas - Further optimizations to make execution faster ??

    Hi,
    Are there any further optimizations that can be done to my program?
    If I can make it faster, that would be great.
    Please advise.
    Abhilash S Nair
    Research Assistant @ Photonic Devices and Systems lab
    [ LabView professional Development System - Version 11.0 - 32-bit ]
    LabView Gear:
    1. NI PXI-7951R & NI 5761
    2. The Imaging Source USB 3.0 monochrome camera with trigger : DMK 23UM021
    OPERATING SYSTEM - [ MS windows 7 Home Premium 64-bit SP-1 ]
    CPU - [Intel Core i7-2600 CPU @ 3.40Ghz ]
    MEMORY - [ 16.0 GB RAM ]
    GPU - [ NVIDIA GeForce GT 530 ]
    Attachments:
    Forum -OPTIMIZATIONS.zip ‏52 KB

    The time delay (Wait) in the Motor FOR LOOP subVI takes more time than everything else in the program put together.
    The 800/Speed calculation for the Wait gives a 1 ms delay for all speed values from 534 to 1600, zero delay above 1600, and 2 ms from 533 down to 320. This is probably not what you wanted. The input to the Wait (ms) function is coerced to an unsigned 32-bit integer (U32).
    There are lots of optimizations which can be done but they all have effect in the microsecond range.
    What do you expect to gain by speeding up this program?
    Lynn

  • Can someone take a look at my project and give me ideas to reduce lag/optimize it? (relatively new to flash and need help)

    Dropbox - TV_AL_Q
    I really need to reduce the lag happening in this project. I know practically nothing about ActionScript except for what I've used in this project, so please keep that in mind.
    I'm going to try to break apart the grouped art so it's in its most basic form and see if that helps, and also change all my tweens to frame-by-frame, since those two things helped me with an animation I did a while back that was lagging severely.
    But I was hoping someone with more knowledge could take a look and see if there is anything that they know of that I could do to help fix this. I'm working in Flash cs6 and using action script 3.
    I have to turn this project in by this Friday (the 20th) so I would love to get this sorted by then but even if I can't I would still be interested in some possible solutions, just for learnings sake.
    Thanks.

    Thank you for yet another cryptic reply, remember I'm the idiot here who doesn't understand this stuff so please can you explain things in a little more detail. Thanks.
    I am using the JW Player setup assistant and it lists two types of code
    The "Embed code" and the "swfobject 1.5 code" but not an Object/Embed code that you mention.
    This is the code that the website is telling me to use.
    <script type="text/javascript" src="/embed/swfobject.js"></script>
    This text will be replaced
    <script type="text/javascript">
    var so = new SWFObject('/embed/player.swf','mpl','480','270','9');
    so.addParam('allowscriptaccess','always');
    so.addParam('allowfullscreen','true');
    so.addParam('flashvars','&file=http://web.me.com/nathmac31/Flash/WaineWeddingHD.swf');
    so.write('player');
    </script>
    My movie does play through the player on the JW site, after about 30 seconds of waiting for the picture to appear. When I cut and paste the above code into my webpage, I don't get anything. I have dropped the "player.swf" and "swfobject.js" files into the folder containing the contents of the page that I want the player to appear on.
    Where is the exact best place to drop these files?

  • Startup optimization of component .. Ideas on threads

    Hello everyone,
    I have a component which takes nearly 2 minutes to start up. Its responsibilities range from starting up various other components to loading a huge amount of data from the database.
    Example: say my component's name is Ports.
    Now it is the Ports main class's responsibility to load various addresses and names and so on. The thing is that there are many dependencies between inter-related components, so they have to be loaded in a sequence.
    I found a way to break the dependency so that I can load some parts separately. So would it be a right thing to load everything separately by creating different threads? My concept of threads is a little bit weak, so what happens with these threads once they do their work? Are they destroyed by the JVM as they have no work assigned to them?
    Is there any other better way to load different components in parallel other than creating different threads?
    Thanks a bunch

    Anish_newtojava wrote:
    So would it be a right thing to load everything separately by creating different threads.
    Using multiple threads may or may not help you. If you're reading everything from the same disk, it almost certainly will not, unless there's a fair amount of processing to be done once the data is loaded. One thread (or more, on a multi-CPU machine) could be doing the processing while another thread is waiting on I/O. Note that there's no guarantee this would help, however.
    If you're loading from different disks, or from a combination of local disk and network-based sources (like a database), then having multiple threads may help, as one thread can do its work while another is blocked on I/O.
    "What happens with these threads once they do their work? Are they destroyed by the JVM as they have no work assigned to them?" When a thread's run() method completes, if none of your code is holding onto a reference to the Thread object, then it becomes eligible for GC.
    "Is there any other better way to load different components in parallel other than creating different threads?" The only way to do things "in parallel" is with multiple threads.
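    To make the reply concrete: rather than managing raw Thread objects, a thread pool can run the independent loads in parallel and be shut down when the work is done. A hedged sketch (the component names and the load method are invented for illustration, not from the poster's code):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelStartup {
    // Hypothetical loader for one independent component; the real work
    // would be reading configuration, querying the database, etc.
    static String load(String component) {
        return component + " loaded";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> components = List.of("Addresses", "Names", "Routes");

        // Submit each independent component as its own task
        List<Future<String>> results = components.stream()
                .map(c -> pool.submit(() -> load(c)))
                .toList();

        // get() blocks until that particular task finishes
        for (Future<String> f : results)
            System.out.println(f.get());

        // After shutdown, the worker threads finish their run() methods;
        // once nothing references them, they are eligible for GC,
        // exactly as the reply describes for plain Threads.
        pool.shutdown();
    }
}
```

    The pool only helps if the loads really are independent and are not all bottlenecked on the same disk, per the caveat in the reply above.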

  • New app idea - blog title optimizer

    I just came across a Wordpress app called Headlines. I'd love to see this as a BC app!
    http://kingsumo.com/apps/headlines/
    I'm not a dev, but thought I'd share this if anyone wants to take a stab at it.

    nice post, thanks for sharing

  • In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity

    [The title of this forum is "Labview Ideas". Although this is NOT a direct suggestion for a change or addition to Labview, it seems appropriate to me to post it in this forum.]
    In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity
    I'd like to see NI actually start a round-table discussion about VI references, Data Value references, local variables, compiler optimizations, etc. I'm a C programmer; I'm used to pointers. They are simple, functional, and well defined. If you know the data type of an object and have a pointer to it, you have the object. I am used to compilers that optimize without the user having to go to weird lengths to arrange it. 
    The 'reference' you get when you right click and "Create Reference" on a control or indicator seems to be merely a shorthand read/write version of the Value property that can't be wired into a flow-of-control (like the error wire) and so causes synchronization issues and race conditions. I try not to use local variables.
    I use references a lot like C pointers; I pass items to SubVIs using references. But the use of references (as compared to C pointers) is really limited, and the implementation is inconsistent, not factorial in capabilities, and buggy. For instance, why can you pass an array by reference and NOT be able to determine the size of the array EXCEPT by dereferencing it and using the "Size Array" VI? I can even get references for all array elements; but I don't know how many there are...! Since arrays are represented internally in Labview as handles, and consist of basically a C-style pointer to the data, and array sizing information, why is the array handle opaque? Why doesn't the reference include operators to look at the referenced handle without instantiating a copy of the array? Why isn't there a "Size Array From Reference" VI in the library that doesn't instantiate a copy of the array locally, but just looks at the array handle?
    Data Value references seem to have been invented solely for the "In-Place Element Structure". Having to write the code to obtain the Data Value Reference before using the In-Place Element Structure simply points out how different a Labview reference is from a C pointer. The Labview help page for Data Value References simply says "Creates a reference to data that you can use to transfer and access the data in a serialized way." I've had programmers ask me if this means that the data must be accessed sequentially (serially)...!!! What exactly does that mean? For those of us who can read between the lines, it means that Labview obtains a semaphore protecting the data references so that only one thread can modify the data at a time. Is that the only reason for Data Value References? To provide something that implements the semaphore???
    The In-Place Element Structure talks about minimizing copying of data and compiler optimization. Those kinds of optimizations are built into the compiler in virtually every other language... with no special 'construct' needing to be placed around the code to identify that it can be performed without a local copy. Are you telling me that the Labview compiler is so stupid that it can't identify certain code threads as needing to be single-threaded when optimizing? That the USER has to wrap the code in semaphores before the compiler can figure out it should optimize??? That the compiler cannot implement single-threading of parts of the user's code to improve execution efficiency?
    Instead of depending on the user base to send in suggestions one-at-a-time it would be nice if NI would actually host discussions aimed at coming up with a coherent and comprehensive way to handle pointers/references/optimization etc. One of the reasons Labview is so scattered is because individual ideas are evaluated and included without any group discussion about the total environment. How about a MODERATED group, available by invitation only (based on NI interactions with users in person, via support, and on the web) to try and get discussions about Labview evolution going?
    Based solely on the number of Labview bugs I've encountered and reported, I'd guess this has never been done, with the user community, or within NI itself.....

    Here are some articles that can help provide some insights into LabVIEW programming and the LabVIEW compiler. They are both interesting and recommended reading for all intermediate-to-advanced LabVIEW programmers.
    NI LabVIEW Compiler: Under the Hood
    VI Memory Usage
    The second article is a little out-of-date, as it doesn't discuss some of the newer technologies available such as the In-Place Element Structure you were referring to. However, many of the general concepts still apply. Some general notes from your post:
    1. I think part of your confusion is that you are trying to use control references and local variables like you would use variables in a C program. This is not a good analogy. Control references are references to user interface controls, and should almost always be used to control the behavior and appearance of those controls, not to store or transmit data like a pointer. LabVIEW is a dataflow language. Data is intended to be stored or transmitted through wires in most cases, not in references. It is admittedly difficult to make this transition for some text-based programmers. Programming efficiently in LabVIEW sometimes requires a different mindset.
    2. The LabVIEW compiler, while by no means perfect, is a complicated, feature-rich set of machinery that includes a large and growing set of optimizations. Many of these are described in the first link I posted. This includes optimizations you'd find in many programming environments, such as dead code elimination, inlining, and constant folding. One optimization in particular is called inplaceness, which is where LabVIEW determines when buffers can be reused. Contrary to your statement, the In-Place Element Structure is not always required for this optimization to take place. There are many circumstances (dating back years before the IPE structure) where LabVIEW can determine inplaceness and reuse buffers. The IPE structure simply helps users enforce inplaceness in some situations where it's not clear enough on the diagram for the LabVIEW compiler to make that determination.
    The more you learn about programming in LabVIEW, the more you realize that inplaceness itself is the closest analogy to pointers in C, not control references or data references or other such things. Those features have their place, but core, fundamental LabVIEW programming does not require them.
    Jarrod S.
    National Instruments

  • I need to make a pdf document, made in photoshop, 'page turn' and then add it to my website. I know I can do this in indesign but indesign will not open pdf files for some inexplicable reason. Any ideas how I can do it without completely starting again?

    I need to add a 'page turn' effect to a pdf document (already made in Photoshop) and then add it to my website as an e-brochure. I know I can do this in InDesign, but InDesign will not open pdf files, for some inexplicable reason. Any ideas how I can do it without completely starting again?

    Hello waitingone,
    please try this (all terms are translated from my German programs to the best of my knowledge):
    1. Did the creator of the pdf file enable the import options?
    2. See the import options: choose another visibility option for your layer.
    3. Display the import options, click into one with a black background, and try those out (often a gray one is selected).
    4. See trimming: try the different modes there. "Media" often works.
    5. Is the pdf file (e.g. from Word) correctly created?
    6. Is the PDF file protected? >>> no import possible.
    7. If that does not help, open the pdf file in Acrobat, repair possible errors, and run the PDF Optimizer before placing it in InDesign.
    Good luck!
    Hans-Günter

  • Mview's taking a looong time to refresh. Any ideas?

    I have the following mview below. We are running the refresh using the FORCE option nightly. There are times when the refresh takes hours and we have to kill the job. Then, we just do a drop/recreate of the mview and it completes in < 20 minutes. Anyone have any ideas as to why?
    The merchant table has about 800k rows.
    The merchant_transaction table has about 18 million rows.
    I have included the explain plan of this query below.
    CREATE MATERIALIZED VIEW CARS.MER_MERTXN_MV
    TABLESPACE MVIEW_TB
    NOCACHE
    LOGGING
    NOCOMPRESS
    NOPARALLEL
    BUILD IMMEDIATE
    REFRESH FORCE
    NEXT NULL
    WITH PRIMARY KEY
    AS
    SELECT mt.merchant_num,
    mt.transaction_ref,
    mt.transaction_type,
    mt.transaction_status_code,
    mt.transaction_date,
    mt.sale_date,
    NVL(mt.transaction_credit_amt,0),
    NVL(mt.transaction_debit_amt,0),
    NVL(mt.open_credit_amt,0),
    NVL(mt.open_debit_amt,0),
    NVL(mt.reserve_bal,0),
    NVL(mt.collection_bal,0),
    NVL(mt.acct_bal,0),
    m.coll_assigned,
    m.acct_location_code,
    m.acct_type_code,
    m.marker_bank_code,
    m.cost_center_code,
    m.portfolio_code,
    m.rowid,
    mt.rowid
    FROM merchant_transactions mt, merchant m
    WHERE mt.merchant_num = m.merchant_num;
    EXPLAIN PLAN
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    SELECT STATEMENT Optimizer Mode=ALL_ROWS          18 M          212259                     
    HASH JOIN          18 M     2G     212259                     
    TABLE ACCESS FULL     CARS.MERCHANT     791 K     40 M     9206                     
    TABLE ACCESS FULL     CARS.MERCHANT_TRANSACTIONS     18 M     1G     96231

    1. Yes, but they aren't being used, per the explain plan. I added a hint, which helped. The problem is when we do a complete refresh; the fast refresh works fine. However, there are times when so much data is added that it decides to do a complete refresh. We are running FORCE. Does it DELETE the data or TRUNCATE it when it does a complete refresh? It seems it's doing a DELETE.
    2. no rows
    3. 10.1. No parallelism being used.

  • How to Optimize SCXI 1600 for speed with Thermocouples

    I'm working on a data acquisition system for my engineering firm and I'm trying to find a way to use our new thermocouple system as fast as possible.
    The requirements for the DAQ process are:
    Read 32 voltage channels from a PCI-6071E card
    Read 32 thermocouple channels from a SCXI-1600 with an 1102C accessory
    Complete the entire operation in under 5ms (this is so other parts of the program can respond to the incoming data quickly and trigger safety protocols if necessary)
    Using LabVIEW 7.1 and MAX 4.4, I've got the voltage channels working to my satisfaction (with traditional DAQ VIs) and the rep rates I measure when I run the program are around 1ms (I do this by putting the DAQ code in a loop and reading the millisecond timer every time through that loop, then calculating the average time between loop executions).  I have been trying to get similar performance from the thermocouple channels using DAQ Assistant and DAQmx.  Some of the problems I've encountered are:
    Very slow rep rates with 1-sample and N-sample acquisition modes (300-500ms)
    Good rep rates when I switch to continuous mode, but then I get buffer overflow error -200279.
    When I attempted to correct that error by setting the DAQmx buffer to overwrite unread data and only read the most recent sample, the calculated sample rate went to 20ms.  It was around 8ms when I left the error unhandled and continued acquisition.
    At this point I'm out of ideas and am just looking for something to try and optimize the DAQ process for speed, as much as is possible.
    Thank you for any help.

    I guess I would be interested in checking out your code to see if there is anything I can recommend changing. However, I do have a few general ideas for how to improve your performance. These recommendations are purely based on what could be slowing the program down, because I am not sure exactly how you have everything set up.
    -Are you setting up the task and closing the task each time you read from your DAQ card? The way to get around this is to have only the DAQmx Read VI inside the while loop, so you do not spend time opening and closing the task on each iteration.
    -Try using a Producer/Consumer architecture. This architecture uses queues and splits the acquisition from the post-processing. Here is a link to how to set up this architecture and some information on when to use it.
    Application Design Patterns: Producer/Consumer
    http://zone.ni.com/devzone/cda/tut/p/id/3023 
    Message Edited by Jordan F on 02-06-2009 04:35 PM
    Regards,
    Jordan F
    National Instruments
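    The Producer/Consumer pattern linked above is not LabVIEW-specific; the same decoupling of acquisition from processing can be sketched in Java with a blocking queue (the queue size, scan shape, and loop counts below are invented for illustration, not taken from the poster's setup):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) throws InterruptedException {
        // A bounded queue decouples fast acquisition from slower processing
        BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(100);
        final double[] POISON = new double[0];   // sentinel to stop the consumer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    // Stand-in for a hardware read: one "scan" of 32 channels
                    double[] scan = new double[32];
                    scan[0] = i;
                    queue.put(scan);             // blocks only if the queue is full
                }
                queue.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    double[] scan = queue.take(); // blocks until data arrives
                    if (scan.length == 0) break;  // sentinel: producer is done
                    System.out.println("processed scan " + (int) scan[0]);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

    The key property, as in the LabVIEW version, is that the acquisition loop never waits on processing as long as the queue has room, so slow post-processing does not cause missed reads.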

  • View for optimization

    Hi !
    I have a view that I want to optimize... each of the select statements has a low cost, and all of the selects are using an index.
    But the view has a pretty big cost because of the union statement.
    Do you have any ideas regarding the optimization?
    thanks a lot!
    any idea would be helpful
    Message was edited by:
    RIKOS

    1. Can you repost the query with appropriate formatting? You probably want to use the [ pre] and [ /pre] tags (without the spaces) to format the query.
    2. Can you explain in words what this query is doing?
    Justin
