Is anyone working with large datasets (>200M) in LabVIEW?

I am working with external bioinformatics databases and find the datasets to be quite large (2 files easily come out at 50M or more). Is anyone working with large datasets like these? What is your experience with performance?

Colby, it all depends on how much memory you have in your system. You could be okay doing all that with 1 GB of memory, but you still have to take care not to make copies of your data in your program. That said, I would not be surprised if your code could be written so that it would work on a machine with much less RAM by using efficient algorithms.
I am not a statistician, but I know that averages and standard deviations can be calculated using a few bytes (even on arbitrary-length data sets). Can't the ANOVA be performed using the standard deviations and means (and other information like the degrees of freedom, etc.)? Potentially, you could calculate all the various bits that are necessary and do the F-test with that information, and never need the entire data set in memory at one time. The tricky part for your application may be getting the desired data at the necessary times from all those different sources.
I am usually working with files on disk where I grab x samples at a time, perform the statistics, dump the samples, get the next set, and repeat as necessary. I can calculate the average of an arbitrary-length data set easily by loading only one sample at a time from disk (it's still more efficient to work in small batches because the disk I/O overhead builds up).
Let me use the calculation of the mean as an example (hopefully the notation makes sense): see the attached JPG. The recurrence is mean(n) = [x(n) + mean(n-1)*(n-1)] / n. What this means in plain English is that the mean can be calculated solely as a function of the current data point, the previous mean, and the sample number. For instance, given the data set [1 2 3 4 5], sum it and divide by 5, and you get 3. Or take it a point at a time: the average of [1] = 1, [2 + 1*1]/2 = 1.5, [3 + 1.5*2]/3 = 2, [4 + 2*3]/4 = 2.5, [5 + 2.5*4]/5 = 3. This second method requires far more multiplications and divisions, but it only ever requires remembering the previous mean and the sample number, in addition to the new data point. Using this technique, I can find the average of gigs of data without ever needing more than three doubles and an int32 in memory. A similar derivation can be done for the variance, but it's easier to look it up (I can provide it if you have trouble finding it). Also, I think this functionality is built into the LabVIEW Point By Point statistics functions.
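For anyone reading along outside LabVIEW, here is a minimal sketch of the same idea in Java (the class and method names are mine; the variance update is Welford's method, which is the derivation mentioned above):

/** Running mean and variance in O(1) memory, one sample at a time (Welford's method). */
public class RunningStats {
    private long n = 0;        // sample count
    private double mean = 0.0; // running mean
    private double m2 = 0.0;   // sum of squared deviations from the current mean

    public void add(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;        // mean(n) = mean(n-1) + (x(n) - mean(n-1)) / n
        m2 += delta * (x - mean); // note: uses the freshly updated mean
    }

    public double mean()     { return mean; }
    public double variance() { return n > 1 ? m2 / (n - 1) : 0.0; } // sample variance

    public static void main(String[] args) {
        RunningStats s = new RunningStats();
        for (double x : new double[] {1, 2, 3, 4, 5}) s.add(x);
        System.out.println(s.mean());     // 3.0, matching the walk-through above
        System.out.println(s.variance()); // 2.5
    }
}

Note that mean(n) = mean(n-1) + (x(n) - mean(n-1))/n is just an algebraic rearrangement of the [x(n) + mean(n-1)*(n-1)]/n form above; it behaves a little better numerically because it never multiplies the old mean back up by the sample count.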
I think you can probably get the data you need from those db's through some carefully crafted queries, but it's hard to say more without knowing a lot more about your application.
Hope this helps!
Chris
Attachments:
Mean Derivation.JPG 20 KB

Similar Messages

  • Working with large Artboards/Files in Illustrator

    Hello all!
    I'm currently designing a full size film poster for a client. The dimensions of the poster are 27" x 40" (industry standard film poster).
    I am a little uncertain when working with large files in Illustrator, and several problems have come up in my design projects using similar large formats.
    The file size is MASSIVE. This poster uses several large, high-res images that I've embedded. I didn't want them to pixelate, so I made sure they were high quality. After embedding all these images, along with the vector graphics, the entire .ai file is 500MB. How can I reduce this file size? Can I do something with the images to make the .ai file smaller?
    I made my artboard 27" x 40" - the final size of the poster. Is this standard practice? Or when designing for a large print format, are you supposed to make a smaller, more manageable artboard size, and then scale up after to avoid these massive file sizes?
    I need to upload my support files for the project, including .ai and .eps - so it won't work if they're 500MB. This would be good info to understand for all projects I think.
    Any help with this would be appreciated. I can't seem to find any coherent information pertaining to this problem that seems to address my particular issues. Thank you very much!
    Asher

    Hi Asher,
    It's probably those high-res images you've embedded. Firstly, be sure your images are only as large as you need them. Secondly, a solution would be to use linked images while you're working instead of embedding them into the file.
    Here is a link to a forum with a lot of great discussion about this issue, to get you started: http://www.cartotalk.com/lofiversion/index.php?t126.html
    And another: http://www.graphicdesignforum.com/forum/archive/index.php/t-1907.html
    Here is a great list of tips that someone in the above forum gave:
    -Properly scale files. Do not take a 6x6' file and then use the scaling tool to make it 2x2'. Instead, scale it in Photoshop to 2x2' and reimport it. Make a rule: anything over 20%, bring it back into Photoshop for rescaling.
    -Check resolutions. 600 dpi is not going to be necessary for such-and-such printer.
    -Delete unused art. Sloppy artists may leave old unused images under another image. The old one is not being used, but it still takes up space, unnecessarily inflating your file.
    -Choose to link instead of embed. This is your choice. Either way you still have to send a large file, but many times linking is less total MB than embedding. Linking also works well with duplicated images: multiple uses link to one original, whereas embedding would make copies.
    -When you are done, use compression software like ZIP or SIT (StuffIt):
    http://www.maczipit.com/
    Compression can reduce file sizes a lot, depending on the files.
    This business deals with a lot of large files. Generally people use FTP to send large files, or plain old CD. Another option is segmented compression. Something like WinRAR/MacRAR or DropSegment (a piece of StuffIt Deluxe) compresses files, then breaks them up into smaller, manageable pieces. This way you can break up a 50 MB file into, say, 10 x 5 MB pieces and send them 5 MB at a time.
    http://www.rarlab.com/download.htm
    *Make sure your client knows how to uncompress those files. You may want to link them to the site to download the software.
    Good luck!

  • Anyone working with eView under JCAPS 5.1.X

    Has anyone worked with eView under JCAPS 5.1.X? I am exploring the use of this tool in a project and I would like to talk to someone who has real world experience.

    Using eView to uniquely identify the person is a good solution, especially when customizing the OTDs to each specific source.
    Deployment?
    Well, it depends, impossible to make a blanket statement here.
    With our case, a dedicated eView (logical host) domain is deployed, and another domain is deployed for HL7 messages. JMS queues and topics are the integration layers between them. Currently looking into JMS Grid for logical decoupling and clustering between domains.
    Also, going to migrate our eGate SRE messages to JCAPS gradually; the messages which pertain to patient demographic events (ADT) will subscribe to JMS topics wired to eView.
    The great thing about JCAPS (J2EE) is being able to logically partition components and then deploy as required. For example, going to deploy HL7 message collaborations and eView components to a single domain for now because transactions are low.
    Designed this without consultants; I have many years of experience with J2EE software development. My problem is with the limited JCAPS NetBeans IDE functionality - ready for JCAPS 5.2!
    Anyway, the eIndex project included with JCAPS should get you started. Forget the "patient" semantics; this is really a person in disguise.

  • Working with Large Numbers

    Hi there,
    I am currently doing a school assignment and not looking for answers but just a little guidance.
    I am working with large numbers and the modulo operator.
    I might have some numbers such as :
    int n = 221;
    int e = 5;
    int d = 77;
    int message = 84;
    int en = (int) (Math.pow(message, e) % n);
    int dn = (int) (Math.pow(en, d) % n);
    Would there be a better way to do this kind of calculation? The dn value should come out the same as message, but I always get something different, and I think I might be losing something because an int can only hold smaller values.
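    For what it's worth, the suspicion about losing precision is right, though it is actually the double inside Math.pow that fails first: en^d here is far larger than a double can represent exactly, so the % n result is garbage. The standard library's java.math.BigInteger has a modPow method that performs modular exponentiation without ever building the huge intermediate power. A minimal sketch using the numbers above:
    import java.math.BigInteger;

    public class ModPowDemo {
        public static void main(String[] args) {
            BigInteger n = BigInteger.valueOf(221);
            BigInteger e = BigInteger.valueOf(5);
            BigInteger d = BigInteger.valueOf(77);
            BigInteger message = BigInteger.valueOf(84);

            BigInteger en = message.modPow(e, n); // (message^e) mod n, no overflow
            BigInteger dn = en.modPow(d, n);      // (en^d) mod n

            System.out.println(en); // 67
            System.out.println(dn); // 84 -- same as message, as expected
        }
    }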

    EJP wrote:
    It might make sense in some contexts to have a positive and negative infinity.
    Yes, perhaps that's a better name. Guess I was harking back to old COBOL days :-).(*)
    "But the reason these things exist in FP is because the hardware can actually deliver them. That rationale doesn't apply to BigInteger." Actually, it does. All I'm talking about is a value that compares higher or lower than any other. That could be done either by a special internal sign value (my slight preference) or by simply adding code to the compareTo(), equals() and hashCode() methods that takes the two constants into account (as they already do with ZERO and ONE).
    Don't worry, I'm not holding my breath; but I have come across a few situations in which values like that would have been useful.
    Winston
    Edited by: YoungWinston on Mar 22, 2011 9:07 AM
    (*) Actually, '±infinity' tends to suggest a valid arithmetic value, and I wasn't thinking of changing existing BigInteger/BigDecimal maths (except perhaps to throw an exception if either value is involved).

  • Working with Large Lists in SharePoint 2010

    Hi All
    I have a list with almost 10k records in my SharePoint list, and based on some business requirements I am binding the data (almost 6k records) to an ASP.NET grid view, which will be visible on the home page of the portal that most users access. Can someone please suggest the best method to improve performance, so the program doesn't have to hit the SP list every time the page loads...
    Thanks & Regards
    Rakesh Kumar

    Hi,
    If you are working with large data retrieval from the content database (SharePoint list), here are some points for your reference:
    1. Limit the number of returned items.
    SPQuery query = new SPQuery();
    query.RowLimit = 6000; // we want to retrieve 6000 items
    query.ListItemCollectionPosition = prevItems.ListItemCollectionPosition; // starting at a previous position
    SPListItemCollection items = SPContext.Current.List.GetItems(query);
    2. Limit the number of returned columns.
    SPQuery query = new SPQuery();
    query.ViewFields = "<FieldRef Name='Title'/>"; // only request the columns you need
    3. Query specific items using CAML (Collaborative Application Markup Language).
    SPQuery query = new SPQuery();
    query.Query = "<Where><Eq><FieldRef Name='ID'/><Value Type='Counter'>15</Value></Eq></Where>"; // e.g. the item with ID 15
    4. Use the ContentIterator class:
    https://spcounselor-public.sharepoint.com/Blog/Post/2/Querying-a--big-list--with-ContentIterator-and-multiple-filters
    5. Create a stored procedure in the database to get the specific data, create a web service to return that data, and then create a web part to show the data on the home page.
    Best Regards
    Dennis Guo
    TechNet Community Support

  • Speed up Illustrator CC when working with large vector files

    Raster (mainly) files up to 350 MB run fast in Illustrator CC, while vector files of 10 MB are a pain in the *blieb* (e.g. zooming & panning). When reading the file it seems to freeze around 95% for a few minutes. Memory usage goes up to 6 GB. Processor usage 30-50%.
    Are there ways to speed things up while working with large vector files in Illustrator CC?
    System:
    64 bit Windows 7 enterprise
    Memory: 16 GB
    Processor: Intel Xeon 3.7 GHz (8 threads)
    Graphics: nVidia Geforce K4000

    Files with large amounts of vector points will put a strain on the fastest of computers. But any speed increase we can get can save you lots of time.
    Delete any unwanted stray points using Select >> Object >> Stray Points.
    Optimize performance | Windows
    Did you draw this yourself? Is the file as clean as can be? Are there any repeated paths underneath your art, from live tracing or stock art sites, which do not need to be there?
    Check Control Panel >> Programs and Features, sort by recently installed, and uninstall anything suspicious.
    Sorry, there will be no short or single answer to this. As per the previous poster, using layers effectively and working in outline mode when possible might be the best you can do.

  • Photoshop CS6 keeps freezing when I work with large files

    I've had problems with Photoshop CS6 freezing on me and giving me RAM and Scratch Disk alerts/warnings ever since I upgraded to Windows 8. This usually only happens when I work with large files; however, once I work with a large file, I can't seem to work with any file at all that day. Today however I have received my first error in which Photoshop says that it has stopped working. I thought that if I post this event info about the error, it might be of some help to someone trying to help me. The log info is as follows:
    General info
    Faulting application name: Photoshop.exe, version: 13.1.2.0, time stamp: 0x50e86403
    Faulting module name: KERNELBASE.dll, version: 6.2.9200.16451, time stamp: 0x50988950
    Exception code: 0xe06d7363
    Fault offset: 0x00014b32
    Faulting process id: 0x1834
    Faulting application start time: 0x01ce6664ee6acc59
    Faulting application path: C:\Program Files (x86)\Adobe\Adobe Photoshop CS6\Photoshop.exe
    Faulting module path: C:\Windows\SYSTEM32\KERNELBASE.dll
    Report Id: 2e5de768-d259-11e2-be86-742f68828cd0
    Faulting package full name:
    Faulting package-relative application ID:
    I really hope to hear from someone soon, my job requires me to work with Photoshop every day and I run into errors and bugs almost constantly and all of the help I've received so far from people in my office doesn't seem to make much difference at all.  I'll be checking in regularly, so if you need any further details or need me to elaborate on anything, I should be able to get back to you fairly quickly.
    Thank you.

    Here you go Conroy.  These are probably a mess after various attempts at getting help.

  • Anyone worked with customizing the requisition account generator workflow?

    Has anyone worked with customizing the requisition account generator workflow?
    please let me know
    I need help asap.
    thanks

    Hi Malla,
    Solution here:
    http://garethroberts.blogspot.com/2007/08/gl-account-code-combinations-on-fly-key.html
    Regards,
    Gareth

  • How does the sync functionality work with large libraries on small devices?

    How does the sync functionality work with large libraries?
    Say I sync 100 GB of photos with the new Photos app and turn on sync on a 16 GB iPhone. Will it fill the device up to 16 GB? Can I tell it to limit it to x GB so I leave room for music and apps? How does this work? Will it slow down my phone if it's trying to sync 100 GB across smaller devices?

    "Will the Apple TV now read directly from the Time Capsule?" ATV does not 'read' from the TC. It connects to the Mac and itunes library associated with the Mac. Of course the itunes program can 'point' to a library on the TC. You'll still need to have itunes open and the Mac powered on and not sleeping with the TC mounted to the Desktop to use ATV properly. So bottom line the use of TC just adds one more step to view files on the ATV.

  • Best practices for working with large placed bitmap images?

    Hey all,
    I need some advice on the best way to approach building these files. I've been working on some banners that are very large: 3 x 7 feet.
    Each banner has a simple vector graphic treatment at the top and bottom (a rectangle with a different colored rule on top, and a vector logo) and a small amount of text, just a URL and a headline. The headline is type (not converted to outlines) and usually has some other effect applied to it, say a drop shadow or outer glow. Under these graphics is a full-bleed image. The placed images need to be 150 ppi at actual size, so they're honking big, sometimes up to 2 GB. Once the layouts are approved, they have to go to a vendor for output.
    The Illustrator docs are really large, and I've read in other threads how to combat that (PDF compatibility, raster settings). But even still, does anyone have any insight into the best way to deal with these things? The dimensions are large, and then the images are large, and it just makes for lots of looking at the spinning ball of death...
    If it were me, I'd build them in InDe, but the vector graphics need to be edited for each one, and I so don't like to do that in InDe unless forced. To me, it's still ultimately a page layout app, not a drawing app. (Old school here.)
    FYI, our machines are all MBPs with 8 GB RAM and the latest Intel Core 2 Duo chips, 2.66 and 2.8 GHz. If we keep the files local (as opposed to working on the server) it should be fairly zippy... No?
    Any advice is appreciated, thanks!

    You can get into memory trouble with very large placed PDF files. TIFFs too.
    This has to do with the preview, which contains much more information than you need to work with.
    On the other hand if you place EPSs and take care not to turn on overprint preview you can get away with huge files.
    If you do turn on overprint preview your machine will slow down a lot and the file may become totally unmanageable.
    Compare this to InDesign, where you can control the quality of the preview. A hi-res preview will slow you down, and most often you don't need it anyway.
    I was working (in Illie) the other day on much larger files than you mention – displays for whole walls – and had some considerable trouble until I reverted to the old EPS format. They say it's dying but it ain't dead yet.

  • Slimbox not working with Spry Dataset

    Hi there,
    I'm trying to populate a page with Spry Dataset and use Slimbox2 to show a set of 4 images when a thumbnail is clicked.
    The original webpage WITHOUT Spry Dataset is here:
    http://shadowmuseum.com/portfolio/p-web.html
    Currently it works with just slimbox, but as soon as I add a Spry Dataset, the large images won't load anymore. When a thumbnail is clicked on, it opens the 1st large image of the set in a new page, completely removing the lightbox effect.
    I've scoured the internet for solutions; it's been suggested that Slimbox doesn't work because Spry doesn't have time to load the content first. But I'm not savvy with JavaScript at all, so I have no idea how to work around it....
    Any help would be greatly appreciated.
    Thanks a lot.
    P.S. Below is the HTML code for the page with both spry & slimbox:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"><!-- InstanceBegin template="/Templates/bone.dwt" codeOutsideHTMLIsLocked="false" -->
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><meta name="keywords" content="Shadow Museum, design, photography, london, web, web design, ana, ana lorraine lui, benjamin, backhouse, E1, E8, N16, graphic design, st martins, creative, agency, bespoke, multi-disciplinary" /><meta name="description" content="London based bespoke multi-disciplinary creative agency. Services include web design, photography, print design, and filmmaking." />
    <!-- InstanceBeginEditable name="doctitle" -->
    <title>Shadow Museum || London-based bespoke multi-disciplinary creative agency</title>
    <script type="text/javascript" src="../zzAssets/scripts/Lightbox/jquery-1.3.1.min.js"></script>
    <script type="text/javascript" src="../zzAssets/scripts/Lightbox/js/slimbox2.js"></script>
    <script src="../zzAssets/scripts/SpryAssets/xpath.js" type="text/javascript"></script>
    <script src="../zzAssets/scripts/SpryAssets/SpryData.js" type="text/javascript"></script>
    <link rel="stylesheet" href="../zzAssets/scripts/Lightbox/css/slimbox2.css" type="text/css" media="screen" />
    <!-- InstanceEndEditable -->
    <link href="../zzAssets/scripts/main.css" rel="stylesheet" type="text/css" />
    <script src="../zzAssets/scripts/SpryAssets/SpryMenuBar.js" type="text/javascript"></script>
    <link href="../zzAssets/scripts/SpryAssets/SpryMenuBarHorizontal.css" rel="stylesheet" type="text/css" />
    <script type="text/javascript" src="../zzAssets/scripts/clock/clockp.js"></script>
    <script type="text/javascript" src="../zzAssets/scripts/clock/clockh.js"></script>
    <!-- InstanceBeginEditable name="head" -->
    <script type="text/javascript">
    <!--
    var dsWeb = new Spry.Data.XMLDataSet("p-web.xml", "portfolio/project");
    dsWeb.setColumnType("info", "html");
    //-->
    </script>
    <!-- InstanceEndEditable --><!-- InstanceParam name="footer" type="boolean" value="true" --><!-- InstanceParam name="clock" type="boolean" value="true" -->
    </head>
    <body>
    <div id="clock_a"></div>
    <div id="menu">
      <ul id="mainMenu" class="MenuBarHorizontal">
        <li><a href="../index.html">Home</a>      </li>
        <li><a href="../news/news.html">News</a></li>
        <li><a class="MenuBarItemSubmenu" href="#">About</a>
            <ul>
              <li><a href="../about/our_story.html">Our Story</a></li>
              <li><a href="../about/our_values.html">Our Values</a></li>
            </ul>
        </li>
        <li><a href="#" class="MenuBarItemSubmenu">Services</a>
          <ul>
            <li><a href="../services/web_design.html">Web Design</a></li>
            <li><a href="../services/print_design.html">Print Design</a></li>
            <li><a href="../services/photography.html">Photography</a></li>
            <li><a href="../services/filmmaking.html">Filmmaking</a></li>
            </ul>
        </li>
        <li><a href="#" class="MenuBarItemSubmenu">Portfolio</a>
          <ul>
            <li><a href="p-web.html">Web Design</a></li>
            <li><a href="p-print.html">Print Design</a></li>
            <li><a href="p-photography.html">Photography</a></li>
            <li><a href="p-filmmaking.html">Filmmaking</a></li>
          </ul>
        </li>
        <li><a href="../contact/contact.html">Contact</a></li>
      </ul>
    </div>
    <!-- InstanceBeginEditable name="content area" -->
    <div id="content">
      <div id="p-web">
        <h2>Web  Portfolio</h2>
         <div class="SpryHiddenRegion" spry:region="dsWeb">
        <table width="100%" border="0" cellspacing="0" cellpadding="0">
          <tr spry:repeat="dsWeb">
            <td align="center" valign="top">
                   <a href="../zzAssets/images/p-web/{pic1}" rel="{label}" title="{title}"><img src="../zzAssets/images/p-web/{thm}" width="180" height="130" /></a>
                   <a href="../zzAssets/images/p-web/{pic2}" rel="{label}" title="{title}"></a>
                   <a href="../zzAssets/images/p-web/{pic3}" rel="{label}" title="{title}"></a>
                   <a href="../zzAssets/images/p-web/{pic4}" rel="{label}" title="{title}"></a>
              </td>
            <td align="left" valign="top">
                   <h3>{title}</h3>
                   {info}
                   <p><a href="http://{url}" target="_blank">{url-label}</a></p>
              </td>
          </tr>
        </table>
         </div>
        <p> </p>
      </div>
    </div>
    <!-- InstanceEndEditable -->
    <div id="footer">
      <div id="watermarkRight">
        <table width="98%" border="0" align="center" cellpadding="0" cellspacing="0">
          <tr>
            <td align="left" valign="top">©2009 Shadow Museum | Company Number 6576238
              | <a href="../terms.html" class="colourless">Terms &amp; Conditions</a></td>
            <td align="right" valign="top">contact us: <a href="mailto:[email protected]">[email protected]</a></td>
          </tr>
        </table>
      </div>
    </div>
    <script type="text/javascript">
    <!--
    var MenuBar1 = new Spry.Widget.MenuBar("mainMenu", {imgDown:"SpryAssets/SpryMenuBarDownHover.gif", imgRight:"SpryAssets/SpryMenuBarRightHover.gif"});
    //-->
    </script>
    </body>
    <!-- InstanceEnd --></html>

    You are not re-initializing the lightbox library code after Spry has generated the markup.
    You can use the Spry region observer onPostUpdate to get notified when the region is re-generated, and re-run the initialization code of the lightbox there.

  • Working with large tables - thumbnail size

    Hi,
    I'm working with some oversized tables in IBA. What I usually do is make the table as needed and then use the "Uses thumbnail on page" option in the Layout section of the inspector, adjusting the thumbnail size to fit the page as needed. What happened this time is that after the document was closed and reopened, some tables reset the thumbnail size back to the default, which is small. I can't seem to find what's causing this; one table is not behaving like that, although it was made using the same method. Has anyone else run into the same thing? Any suggestions?
    Thanks in advance.

    Why don't you use a stored proc?
    Why are you ordering it?
    Should I take partial entries in a loop? Yep. Because software isn't perfect. No point in attempting to process the universe when you know it will fail sometime and it is easier to handle smaller failures than large ones (and you won't have to redo everything.)

  • Advice for working with large clip files

    A few years ago I made a movie using iMovie 2. At the time I was working with clips recorded on one miniDV disc. I am now ready to begin a project (just bought iLife '06) that is much larger. I have roughly ten 60-min miniDV discs. I am getting nervous about the amount of memory needed to import and work with this many clips.
    Is it possible to import the clips to my external fire wire HD? Can I still work in iMovie while the project lives on the ext firewire drive? Can anyone tell me roughly how much memory is needed for larger projects like this? (I expect the final project will be 30-40 minutes).
    Since the footage all comes from our European concert tour, it easily divides into 3 separate sections - would it be easier to create 3 different iMovie projects? Is it possible to combine them in the end to create one film?
    Thank you so much for your help with this.
    Sincerely,
    Bob Linscott

    Is it possible to import the clips to my external
    fire wire HD? Can I still work in iMovie while the
    project lives on the ext firewire drive?
    Should be fine. I've been editing 4 hours worth of footage down to about 50 minutes, on a project stored on a FireWire 800 drive. FireWire 400 (if that's what you're using) is half the speed, but should still be fine, I think.
    So, create a new project, save it to your external drive, and then import your footage into the project.
    Can anyone
    tell me roughly how much memory is needed for larger
    projects like this? (I expect the final project will
    be 30-40 minutes).
    If you mean hard drive space for the project, I think you're talking about 11 GB per hour: so, in short, a lot. I think you could import your footage, edit a bit, and empty your trash to free up space, but iMovie does seem to keep your originals fairly relentlessly, so I'm not entirely sure.
    Watch out for extracting audio from clips too: I've been doing this in order to do cross-fades, but it increases the size of your movie. I ended up buying a second, bus-powered 80 GB external hard drive just to hold the iMovie project.
    Since the footage all comes from our European concert
    tour, it easily divides into 3 separate sections -
    would it be easier to create 3 different iMovie
    projects? Is it possible to combine them in the end
    to create one film?
    This could well be a good idea. Once your project gets over a certain length, you may experience the "herky jerky" playback issue (search for "herky jerky" on these forums), making editing difficult. So three separate projects might be easier.
    To combine them all at the end, you'll want to export the 2nd and 3rd projects as full quality DV (Share > QuickTime > Compress Movie For: Full Quality), then import them (File > Import) into your first project, and add them to the timeline.

  • Charts with large datasets?

    I'm writing an application that requires graphing of multi-series sql statements over moderately large datasets. Think 45-90k data points or so.
    I've noticed that with datasets larger than 5k or so, the Flash charts eat a lot of CPU time when rendering, and I haven't gotten Flash charts to display 15k data points appropriately. I get a message about Flash having a long-running script.
    Does anybody have suggestions for how to display 50,000+ data point charts in apex? Is there a recommended tool to integrate with apex that would create the chart server-side as a graphic and then push the graphic to the client? Also, if possible I would like to call this tool directly from DB jobs to push graphs out to people via email on a recurring basis.
    Any suggestions would be very much appreciated.
    Thanks,
    Matt

    Thanks Mike.
    I originally worked exclusively in Mac. It was the only game in town at one time. I have been working on higher end Windows-based workstations for the past 10 years or so (I also do some video production). Apple products are 'kool,' just not cost-effective. I am currently running Win7 on a dual-quad core with 8GB Ram and 3+TB high-speed storage.
    I used DeltaGraph for several years, but their PostScript was version 1.0. I had a lot of problems with the files, such as not being able to ungroup grouped objects, font problems, and difficulties applying newer effects - even after re-saving to the current AI version. At version 5, I queried Red Rock regarding upgraded PS support, but they said it was not in their plans. I also found that setting up some plots was terrifically complicated in DG. It was quicker to set up simple geometry in layered plots in Illustrator. I have not looked at DG 6 but will check on their PS status.
    I have not looked at importing Excel via PDF. I often do test plots from the source worksheets for reference in Excel but have never considered the results to be workable or usable in a published format. I will take another look at Excel per your suggestion.
    It sure would be great if AI charting were a bit more robust and reliable.

  • Performance issue while working with large files.

    Hello Gurus,
    I have to upload about 1 million keys from a CSV file on the application server and then delete the entries from a DB table containing 18 million entries. This is causing performance problems and my program is very slow. Which approach will be better?
    1. First read all the data in the CSV and then use the delete statement?
    2. Or delete each line directly after reading the key from the file?
    And another program has to update about 2 million entries in a DB table containing 20 million entries. Here I also have very big performance problems (the program has been running for more than 14 hours). Which is the best way to work with such a large amount?
    I tried to rewrite the program so that it would run in parallel, but since this program will only run once, the costs of implementing an aRFC parallelization are too big. Please help, maybe someone doing migration is good at this.
    Regards,
    Ioan.

    Hi,
    I would suggest you split the files and then process each set.
    Lock the table to ensure it is available the whole time.
    After each set, do a commit and then proceed.
    This ensures there is no break in the middle that forces you to start again; you only have to delete the already-processed entries from the files and resume.
    Also make use of sorted tables and keys when deleting/updating the DB.
    For the delete, when multiple entries are involved, using an internal table can be tricky, as some records may be successfully deleted and some may not.
    To make sure, first get the count of records in the DB that match internal table set 1.
    Then do the delete from the DB with internal table set 1.
    Again check the count of records in the DB that match internal table set 1 and see that the count is zero.
    This makes sure all the records were deleted, but again it may add some overhead,
    And the goal here is to reduce the execution time.
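    To illustrate the chunk-and-commit pattern described above, here is a sketch in Java/JDBC rather than ABAP (the connection URL, table name MYTABLE, key column KEYFIELD, and chunk size are all placeholders of mine):
    import java.nio.file.*;
    import java.sql.*;
    import java.util.List;

    public class ChunkedDelete {
        public static void main(String[] args) throws Exception {
            // Read the ~1 million keys from the CSV on the application server.
            List<String> keys = Files.readAllLines(Path.of("keys.csv"));
            int chunkSize = 5000; // one "set"

            try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pw")) {
                con.setAutoCommit(false);
                try (PreparedStatement ps =
                         con.prepareStatement("DELETE FROM MYTABLE WHERE KEYFIELD = ?")) {
                    for (int i = 0; i < keys.size(); i += chunkSize) {
                        List<String> set = keys.subList(i, Math.min(i + chunkSize, keys.size()));
                        for (String key : set) {
                            ps.setString(1, key);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                        con.commit(); // commit per set: a crash only costs the current chunk
                    }
                }
            }
        }
    }
    The per-set commit is what keeps a failure recoverable: on a restart you only skip the sets already committed instead of redoing the whole million-key run.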
    Gurus may have a better idea.
    Regards
    Sree
