Dealing with large quantities of media?

I have several hundred H.264 .mov files sitting on an OS X server. It is set up to stream these files and does so well.
My question is: what would be the simplest way to make those QuickTime movies available as links on a web page and automatically update the page when new files are added to the movies folder?
QTSS Publisher? I can't seem to get it going, and from the looks of this discussion group it's probably not the solution I'm looking for.

Similar Messages

  • What is the best way to deal with large TIFF files in OS X Lion?

    I'm working with a large TIFF file (an engineering drawing), but Preview can't handle it (it becomes unresponsive).
    What is the best way to deal with large TIFF files in OS X Lion? (Viewing only, or simple editing.)
    Thx,
    54n9471

    Use an iPad and this app http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewSoftware?id=400600005&mt=8

  • Premiere Pro 2.0 slows when dealing with larger video files

    I'm having issues with Premiere Pro 2.0 slowing to a crawl and taking 60-90 seconds to come back to life when dealing with larger .avi files (8+ minutes). When I try to play a clip on the timeline, drag the slider over said clip, or play from a title into said clip on the timeline, Premiere hangs. The clips in question are all rendered, and the peak file has been generated for each clip as well. This is a new problem; the last time I was working with a larger clip (45+ minutes, captured from a Hi-8 cam), I had no problems. Now I experience this slowdown with all longer clips, although I've only dealt with footage captured from a Hi-8 cam and a mini-DV cam. This problem has made Premiere nearly unusable. I'm desperate at this point.
    System:
    CPU: P4 HT 2.4 GHz
    RAM: 2x 1 GB DDR
    Video: ATI Radeon 9000 Series
    Scratch Disk: 250 GB WD My Book - USB 2.0 (I suspect this might be part of the problem)
    OS: XP Pro SP2
    I'm not on my machine right now, and I can definitely provide more information if needed.
    Thanks in advance.

    Aside from some other issues, I found that USB was just not suited for editing to/from, even on a much faster machine than the one you list.
    FW-400 was only slightly better. It took FW-800 before I could actually use the externals for anything more than storage, i.e. no editing, just archiving.
    eSATA would be even better/faster.
    Please see Harm's articles on hardware before you begin investing.
    Good luck,
    Hunt
    [Edit] Oops, I see that Harm DID link to his articles. Missed that. Still, it is worth mentioning again.
    Also, as an aside, PrPro 2.0 has no problem on my workstation when working with several 2-hour DV-AVIs, even when these are edited to/from FW-800 externals.

  • Can ui:table deal with a large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel. But ui:table's data source is a data provider, which looks somewhat complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table may always load all the data from the provider.
    In TableRowGroup.java there is a lot of code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all the data.
    So ui:table can NOT deal with large tables!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time that you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    What can a CachedRowSet (or CachedRowSetProvider?) do?
    Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    Does the Table cache the data itself? Maybe this way is convenient for filtering and ordering?
    Thx. I do not know the answers to these questions.

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment I'm involved in an evaluation process, but in the near future our applications are likely to deal with large amounts of XML data (typically hundreds of megabytes of storage, which means possibly gigabytes of XML code), both in updating/inserting data and in producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. Our environment is Linux Red Hat 7.3 and an Oracle 9.2.0.1 server.

    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in the book Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert.
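    For the SAX route, a rough sketch of the idea (the element names ROW/NAME/VALUE, the target table, and the connection details are hypothetical; this assumes canonical XSU-style row output and uses plain JDBC batching rather than any XSU class):
        import java.io.File;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class ChunkedRowInserter extends DefaultHandler {
            private static final int BATCH_SIZE = 1000;     // rows per INSERT batch
            private final PreparedStatement insert;
            private final StringBuilder text = new StringBuilder();
            private String name, value;                     // columns of the current row
            private int pending = 0;

            public ChunkedRowInserter(Connection conn) throws SQLException {
                insert = conn.prepareStatement(
                    "INSERT INTO target_table (name, value) VALUES (?, ?)");
            }

            public void startElement(String uri, String local, String qName, Attributes atts) {
                text.setLength(0);                          // start collecting element text
            }

            public void characters(char[] ch, int start, int len) {
                text.append(ch, start, len);
            }

            public void endElement(String uri, String local, String qName) throws SAXException {
                try {
                    if ("NAME".equals(qName)) {
                        name = text.toString();
                    } else if ("VALUE".equals(qName)) {
                        value = text.toString();
                    } else if ("ROW".equals(qName)) {       // one row complete: queue it
                        insert.setString(1, name);
                        insert.setString(2, value);
                        insert.addBatch();
                        if (++pending == BATCH_SIZE) {      // send the chunk to the database
                            insert.executeBatch();
                            pending = 0;
                        }
                    }
                } catch (SQLException e) {
                    throw new SAXException(e);
                }
            }

            public void endDocument() throws SAXException {
                try {
                    if (pending > 0) insert.executeBatch(); // flush the last partial batch
                } catch (SQLException e) {
                    throw new SAXException(e);
                }
            }

            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:orcl", "scott", "tiger"); // placeholder
                SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File("big.xml"), new ChunkedRowInserter(conn));
                conn.close();
            }
        }
    Because the document is parsed as a stream and only the current row is held in memory, heap usage stays flat no matter how large the XML file is.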

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data to the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues. The main point is that there is no shortcut for major schema and index changes. You will need at least 120% free space to create a clustered index and facilitate major schema changes.
    I suggest an incremental approach to address your biggest pain points. You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table. Perhaps some indexes targeted at improving that process would be a good first step.
    What SQL Server version and edition are you using? You'll have more options with Enterprise (partitioning, row/page compression).
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that do or do not conform for each column. Then create a new table (using SELECT INTO) that has strongly typed columns for the columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one. You can follow up later to address column data corrections and/or transformations.
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com
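    A rough sketch of that approach (the table and column names here are made up; the target types would come from your own analysis):
        -- 1. Count, per column, the non-NULL values that will NOT convert to the guessed type
        SELECT
            COUNT(*) AS total_rows,
            SUM(CASE WHEN OrderQty IS NOT NULL
                      AND TRY_CONVERT(int, OrderQty) IS NULL THEN 1 ELSE 0 END) AS bad_OrderQty,
            SUM(CASE WHEN ShipDate IS NOT NULL
                      AND TRY_CONVERT(date, ShipDate) IS NULL THEN 1 ELSE 0 END) AS bad_ShipDate
        FROM dbo.ClientFeed;

        -- 2. Build the strongly typed copy; leave problem columns as varchar for later cleanup
        SELECT
            TRY_CONVERT(int, OrderQty)  AS OrderQty,
            TRY_CONVERT(date, ShipDate) AS ShipDate,
            FreeTextNotes               -- could not be typed yet, kept as-is
        INTO dbo.ClientFeed_Typed
        FROM dbo.ClientFeed;

        -- 3. Swap the tables once the counts check out
        DROP TABLE dbo.ClientFeed;
        EXEC sp_rename 'dbo.ClientFeed_Typed', 'ClientFeed';
    Keep the free-space caveat above in mind: SELECT INTO creates a full second copy of the table, so plan disk space before running step 2.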

  • Dealing with large files, again

    Ok, so I've looked into using BufferedReaders and can't get my head round them; or more specifically, I can't work out how to apply them to my code.
    I have inserted a section of my code below, and want to change it so that I can read in large files (of over 5 million lines of text). I am reading the data into different arrays and then processing them. Obviously, when reading in such large files, my arrays are filling up and failing.
    Can anyone suggest how to read the file into a buffer, deal with a set amount of data, process it, empty the arrays, then read in the next lot?
    Any ideas?
    void readV2(){
        String line;
        int i = 0, lineNo = 0;
        try {
            //Create input stream
            FileReader fr = new FileReader(inputFile);
            BufferedReader buff = new BufferedReader(fr);
            while((line = buff.readLine()) != null) {
                if(line.substring(0,2).equals("V2")){
                    lineNo = lineNo + 1;
                    IL[i] = Integer.parseInt(line.substring(8,15).trim());
                    //Other processing here
                    NoOfPairs = NoOfPairs + 1;
                }//end if
                else{
                    break;
                }//end else
            }//end while
            buff.close();
            fr.close();
        }//end try
        catch (IOException e) {
            log.append("IOException error in readESSOV2XY" + e + newline);
            proceed = false;
        }//end catch IOException
        catch (ArrayIndexOutOfBoundsException e) {
            arrayIndexOutOfBoundsError(lineNo);
        }//end catch ArrayIndexOutOfBoundsException
        catch (StringIndexOutOfBoundsException e) {
            stringIndexOutOfBoundsError(e.getMessage(), lineNo);
        }//end catch StringIndexOutOfBoundsException
    }//end readV2
    Many thanks for any help!
    Tim

    Yeah, ok, so that seems simple enough.
    But once I have read part of the file into my program, I need to call another method to deal with the data I have read in and write it out to an output file. How do I get my file reader to "remember" where I am up to in the file I'm reading?
    An obvious way, but possibly not too good technically, would be to set a counter and, when I go back to the file reader, skip that number of lines in the input file. This just doesn't seem too efficient, which is critical when it comes to dealing with such large files (i.e. several million lines long).
    I think you might need to change the way you are thinking about streams. The objective of a stream is to read and process data at the same time.
    I would recommend that you re-think your algorithm: instead of reading the whole file and then doing your processing, think about how you could read a line and process a line, then read the next line, and so on.
    By working on just the pieces of data that you have just read, you can process huge files with almost no memory requirements.
    As a rule of thumb, if you ever find yourself creating huge arrays to hold data from a file, chances are pretty good that there is a better way. Sometimes you need to buffer things, but very rarely do you need to buffer such huge pieces.
    - K
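    A minimal sketch of that read-a-line / process-a-line pattern, fitted to the code posted above (processLine() here is only a stand-in for whatever your other method does with each record):
        import java.io.*;

        class LineByLine {
            // Read, process, and write one line at a time -- nothing is held in big arrays.
            static void processV2File(File inputFile, File outputFile) throws IOException {
                BufferedReader in = new BufferedReader(new FileReader(inputFile));
                PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(outputFile)));
                String line;
                while ((line = in.readLine()) != null) {
                    if (!line.startsWith("V2")) {
                        break;                          // same stop condition as readV2()
                    }
                    out.println(processLine(line));     // process and write immediately
                }
                out.close();
                in.close();
            }

            // Stand-in for the processing currently done on whole arrays.
            static String processLine(String line) {
                return line.substring(8, 15).trim();    // e.g. the field readV2() parsed
            }
        }
    The output file also becomes a stream, so memory use no longer depends on the size of the input.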

  • Strategies for dealing with large forms

    I've created a dynamic form that is about 30 pages long. Although performance in reader is OK, when editing, every change takes about 10 seconds to be digested by Designer. I've tried editing parts and then copying them across, but formatting information seems to be lost when pasting.
    What are the practical limits to how long a form can be, and are there any suggestions for how to deal with longer forms?
    Many thanks
    Alex

    Performance may be okay in Reader 8, but a 30-page dynamic form will probably be unusable in Reader 7. I have done work with very large forms before, and I have maintained each page in a separate XDP and then brought them together once all the kinks are worked out. I don't know why you would be losing formatting information.
    There is a technique that I have used where you use dynamic subforms to show only one page of the form at a time. That approach allows a very large form to be used in version 7 with acceptable response time. However, it's a very complex approach, and I have come to think that it's too complex for comfort. The more complex your approach is the more likely it is to fail in a future version of Reader.

  • Tips for dealing with large channel count on cRio

    Hello, I have a very simple application that takes an analog input using an AI module (9205) from a thermistor and based on the value of the input it sends out a true/false signal using a digital out module (9477). Each cRio chassis will have close to 128 channels, the code being exactly the same for each channel.
    I wonder if anyone has any tips for how I can do this so that I don't have to copy and paste each section of code 128 times. Obviously this would be a nightmare if the code ever had to be changed. I'm sure there is a way to make a function or a class but being new to graphical programming I can't think of a good way to do this. I looked for a way to dynamically select a channel but can't seem to find anything, if I could select the channel dynamically I'm guessing I can create a subvi and do it that way. Any tips or help would be greatly appreciated.

    There isn't a way to dynamically choose a channel at runtime. In order for the VI to compile successfully, the compiler must be able to statically determine which channel is being read or written in the I/O Node at compile time. However, that doesn't mean you can't write a reusable subVI. If you right-click on the FPGA I/O In terminal of the I/O Node and create a constant or control, you should be able to reuse the same logic for all of your channels. The attached screen shot should illustrate the basics of what this might look like. If you right-click the I/O control/constant and select "Configure I/O Type...", you can configure the interface the I/O Item must support in order for it to be selectable from the control. While this helps single-source some of the logic, you will still eventually need 128 I/O constants somewhere in your FPGA VI hierarchy.
    I should also mention that if each channel being read from the 9205 is contained in a separate subVI or I/O Node, you will also incur some execution time overhead due to the scanning nature of the module.  You mentioned you are reading temperature signals so the additional execution time may not be that important to you.  If it is, you may want to look at the IO Sample Method.  You can find more information and examples on how to use this method in the LV help.  Using the IO Sample Method does allow you to dynamically choose a channel at runtime and is generally more efficient for high channel counts.  However, it's also a lot more complicated to use than the I/O Node.
    You also mentioned concerns about the size of arrays and the performance implications of using a single for loop to iterate across your data set. That's the classic design trade-off when dealing with FPGAs. If you want to perform as much in parallel as possible, you'll need to store all 128 data points from the 9205 modules at once, process the data in parallel using 128 instances of the same circuit, and then output a digital value based on the result. If you're using fixed-point data types, that's 128 x 26 bits for just the I/O data from the 9205. While this will yield the fastest execution times, the resulting VI may be too large to fit on your target. Conversely, you could use the IO Sample Method to read each channel one at a time, process the data using the same circuit, and then output a digital value. This strategy will use the least amount of logic on the FPGA but will also take the longest to execute. Of course, there are all sorts of options you could create in between these two extremes. Without knowing more about your requirements, it's hard to advise which end of the spectrum you should shoot for. Anyway, hopefully this will give you some ideas on where to get started.
    Attachments:
    IO Constant.JPG (31 KB)

  • Dealing with large amounts of data

    Hi
    I am new to using Flex and BlazeDS. I can see in the FAQ that binary data transfer from the server to a Flex app is more efficient. My question is: is there a way to build a Flex data-bound control (e.g. a datagrid) that binds to a SQL query, web service, or remoting on the server side and then displays an unlimited amount of data as the user pages or scrolls? Or does the developer have to write code from scratch to deal with paging, or with an infinite scrollbar, by asking the server for chunks of data at a time?

    You have to write your own paginating grid. It's easy to do: just make a canvas, throw a grid and some buttons on it, then when the user clicks to the next page, make a request to the server and, when you have the result, set it as the new data model for the grid.
    I would discourage you from returning an unlimited amount of data to the user; provide search functionality plus pagination.
    Hope that helps.

  • Dealing with large files

    Hi folks, I've done a lot of searching online and having a hard time finding answers to what I consider some basic things. Appreciate any advice you can offer.
    I have about 3 years of videos on my Canon Vixia HFM31 that I've been neglecting. It has a 32 gig flash drive that's full. When I import everything into iMovie, the movies add up to a size larger than my hard drive (300 gigs). So I've imported about 100 gigs of movies and left the rest on the camera for the time being. I also tried taking the MTS files off the camera to my Macbook, but when I connect the camera and open it with finder, the camera has only one file called AVCHD that's 32 gigs, rather than individual files. I also read that importing MTS files into iMovies is problematic. There isn't any Canon software that I can find.
    So I'm at a loss. How can I store all my video files on my hard drive in a reasonable size? Ideally, I'm thinking I should be able to copy the MTS files off my camera, put them in a folder and import them individually to iMovie when I want to create a project. But I can't figure out how to do it. Now I have about 100 gigs of imported movies in iMovie and 200 gigs more on the camera that I can't import because of space issues. And I'm not sure how to store the 100 gigs of movies in iMovie if I choose not to edit them.
    Is my thinking wrong? What do people do?

    AVCHD is actually a folder system with many subfolders, which contain all your movie clips (.MTS files). (In Mavericks it appears as a package, the contents of which you can explore with right-click > Show Package Contents.) There is no way of compressing it to a smaller size without degrading clip quality (it is already very efficiently compressed), so you either have to store your clips on the flash drive or copy them to a large external hard drive. (A backup is highly desirable in any case.)
    iMovie 9 imports AVCHD fine, either direct from the camera, from a memory card, or from a disk copy of the complete AVCHD folder. You cannot import individual .mts files unless you first convert them to a readable format. (I can explain if you are interested.)
    In iMovie 10 you do the same as in iMovie 9, or you can drag individual .mts files into a timeline.
    I hope this helps. Please confirm which OS and iMovie version you are using.

  • How to deal with large amounts of data

    Hi folks,
    I have a web page that needs 3 maps to serve. Each map will hold about 150,000 entries, and together they will use about 100 MB. For some users with lots of data (e.g. 1,000,000 entries), it may use up to 1 GB of memory. If a few of these high-volume users log in at the same time, it can bring the server down. The data comes from files; I cannot read it on demand because that would be too slow. Loading the data into maps gives me very good performance, but it does not scale. I am thinking of serializing the maps and deserializing them when I need them. Is that my only option?
    Thanks in advance!

    JoachimSauer wrote:
    I don't buy the "too slow" argument.
    I'd put the data into a database. They are built to handle that amount of data.
    ++
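    A rough sketch of what on-demand lookup could look like with plain JDBC instead of a 150,000-entry in-memory Map (the table, column names, and connection URL are placeholders; an index on entry_key is assumed):
        import java.sql.*;

        public class EntryLookup {
            private final Connection conn;

            public EntryLookup(Connection conn) {
                this.conn = conn;
            }

            // Fetch one entry on demand instead of holding the whole map in memory.
            public String lookup(String key) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT entry_value FROM entries WHERE entry_key = ?");
                try {
                    ps.setString(1, key);
                    ResultSet rs = ps.executeQuery();
                    return rs.next() ? rs.getString(1) : null;
                } finally {
                    ps.close();
                }
            }
        }

        // Usage (placeholder connection URL):
        // Connection conn = DriverManager.getConnection("jdbc:h2:./appdata", "sa", "");
        // String v = new EntryLookup(conn).lookup("some-key");
    With an indexed key column, each request touches only the rows it needs, so memory use no longer grows with the number of high-volume users logged in.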

  • What's the deal with large white blocks in exported JPEGs?

    I just exported my first job I ran through Aperture. Out of 736 files, 9 were flawed. When opened, the exported JPEG had a missing section of the photo that was all white. The white block appeared in random spots, but always on the edges. The problem didn't occur the second time around with the same 9 files, this time processed in a batch of 9.
    I need to know how to export files reliably - in my workflow, I don't have time to double check file quality.
    Has anyone seen this issue? I've seen it with 1.1.1 and 1.1.2.
    I'm shooting with a 1ds mrkII.
    MacBookPro   Mac OS X (10.4.6)  

    I've seen this too, in other odd ways. It seems to affect the images that I've spotted (with the patch tool) and that have other corrections/adjustments applied. It's almost like the GPU gets too overloaded to accurately recreate all of the image mods. The image seems to be segmented into smaller rectangular sections that probably get loaded into the video card's memory, and the adjustments and corrections are applied on smaller, more manageable sections. When the image is stitched back together, something goes horribly wrong!
    The sections tend to be mismatched, with some completely black, some that have black spots where the patch tool was applied, and other sections that have what looks like a white overlay at 50% opacity. Sometimes even my thumbnails come up with the same results as these malformed exports. I've been able to shake them back to normalcy by toggling one of the corrections (spot/patch, highlight/shadow, etc.). After the thumbnail updates properly, the export usually works fine.
    This is definitely a bug with Aperture and I would report it as such. Attach and send the image with the bug report if you can!
    scott
    PS: for what it's worth, I'm usually shooting with my Nikon D200 these days.
    PowerMac G5 2.5GHz   Mac OS X (10.4.7)   MacBook Pro 2.0GHz

  • Just in case anyone needs an ObservableCollection that deals with large data sets, and supports FULL EDITING...

    the VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Code Library (PCL).
    The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing
    Best Regards,
    Please remember to mark the replies as answers if they help

  • Question: Best Strategy for Dealing with Large Files > 1 GB

    Hi Everyone,
    I have to build a UCM system for large files > 1GB.
    What will be the best way to upload them (applet, checkin form, webdav)?
    Also what will be the best way to download them (applet, web form, webdav)?
    Any tips will be greatly appreciated
    Tal.

    Not sure what the official best practice is, but I prefer to get the file onto the server's file system first (file copy) and check it in from that path. This would require a customization / calling a custom service.
    Boris
