Format, Format, Everywhere, but Not a Best Practice Anywhere...

I am creating internal corporate training videos... relatively easy items, I don't expect to see them on TV or even in the movie theaters... but I do expect to distribute the results via the web and iPods.
My arsenal includes a Sony HVR-Z5U with HVR-MRC1 and an iMac with FCS.
What format do you record in on the camera: onto tape? Onto CF card? What format do you use within Final Cut? What format do you render to?
They seem like basic questions, but there are so many formats that Final Cut accepts, plus three formats from the camera (DV, HDV, DVCAM), and different formats for delivery... (I am thinking that H.264 is the best codec for the web and iPods): as FLV or as MP4?
I am looking for advice on the best way to go for the whole production supply chain.
I hope you don’t mind, but I would like to take advantage of knowledge, experience and best practices from anyone with similar circumstances.
Thank you in advance for any/all comments...
-Steven

If you're just going to the web and iPod (or even DVD), you will also get excellent results (and a possibly easier workflow) by shooting and editing in DV.
By the way "best practice" is to shoot, edit and deliver in a format that works for you, your client, and your client's viewers. Unfortunately, there are many several different paths to this goal - hence you are going to get conflicting advice.
If I were you, I'd stick with the format you're most comfortable in working in right now (your need to get paid, after all, and your client or boss doesn't need unnecessary delays). Then when you have time to experiment with other workflows, do so on your own time.
Best of luck with it.

Similar Messages

  • Row level security with session variables, not a best practice?

    Hello,
    We are about to implement row-level security in our BI project using OBIEE, and the solution we found most convenient for our requirement was to use session variables with initialization blocks.
    The problem is that this method is listed as a "non best practice" in the Oracle documentation:
    Alternative Security Administration Options - 11g Release 1 (11.1.1)
    (This appendix describes alternative security administration options included for backward compatibility with upgraded systems and are not considered a best practice.)
    Managing Session Variables
    System session variables obtain their values from initialization blocks and are used to authenticate Oracle Business Intelligence users against external sources such as LDAP servers or database tables. Every active BI Server session generates session variables and initializes them. Each session variable instance can be initialized to a different value. For more information about how session variable and initialization blocks are used by Oracle Business Intelligence, see "Using Variables in the Oracle BI Repository" in Oracle Fusion Middleware Metadata Repository Builder's Guide for Oracle Business Intelligence Enterprise Edition.
    How confusing... what is the best practice then?
    Thank you for your help.
    Joao Moreira

    The authenticating/authorizing part is taken care of by WebLogic; the USER variable is then initialized, and you may use it in any init blocks for security.
    Init blocks for authentication/authorization and session variables are different things; I guess you are mixing the two.
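    For reference, a session-variable init block is typically just a query that populates a per-user variable which row-level-security data filters can then reference. A minimal sketch, with hypothetical table and column names (only :USER is a real OBIEE system session variable):

        -- Hypothetical init block query for a session variable (e.g. ALLOWED_DEPTS)
        -- used in row-level security filters. ':USER' substitutes the authenticated
        -- login name; enable row-wise initialization if the variable is multi-valued.
        SELECT department_id
        FROM   sec_user_departments
        WHERE  username = ':USER'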

  • My iPod charges only at the store, but not at home or anywhere else!

    I've just got an iPod 30GB, and usually I put it on charge on my computer before the battery is completely dead, but I forgot to do it and now it is completely empty. I tried to charge it at home, but my computers don't recognize my iPod, so I took it to the store; they tried with their power adapter and it worked. I bought an adapter for myself, but when I got home, it still didn't work. I tried at work and it didn't charge either, so I returned to the store, and they tried with their adapter, and it worked again, but not with any other new adapter they opened to test. They have the old model, the big one... does anyone have any idea of what I could do before being obliged to send my iPod in for repair? I'm sure it's not broken...

    Not sure what you mean about "big old" charger working and newer ones not. It is possible that the battery has run down too low to revive by itself or that the iPod has crashed and won't respond.
    Try leaving it for 24-30 hours in a cool place and try again with the AC charger and a known good cable.
    For more info, see my web site: The iPod Battery Unplugged.
    -dan
    BlackBook, iMac 15, G1, G3, G5 iPods   Mac OS X (10.4.8)   Boot Camp

  • Can vector objects created by Adobe Shape be downloaded from Creative Cloud in a vector format like PDF, not just JPG?

    I want vector files that can be edited in Illustrator, so does anyone know how to download them as vector objects?

  • Separate App Module, View Object based on Select not EO: Best practice?

    Hi,
    I have a list of base tables(parameters) that are used everywhere in my application for selection components (List, Combo, Radio).
    I'm considering creating a separate app module with a view object based on query for each table.
    I have created a BaseTableAM application module for managing those tables, Entity objects and View Objects in update mode.
    Those tables are rarely changed.
    Example of table:
    ItemType
    item_type_id = primary key
    name = description
    In the BasetableAM:
    ItemTypeEO entity object
    ItemTypeVO updateable view object based on related entity object.
    Now in my project I will create combo boxes for selection of the item_type_id.
    Separate application module:
    SelectionViewAM composed of View Objects based on Select statements and not on Entity Objects.
    Example of view object ItemTypeViewVO:
    SELECT ItemType.ITEM_TYPE_ID,
    ItemType.NAME
    FROM ITEM_TYPE ItemType
    All view objects in this application module will be read-only and for selection only.
    By basing all those View Objects on SELECT statements rather than on Entity Objects, I suppose there will be no locking or "value changed" management, hence less overhead.
    Is this a good practice?
    Since they are read-only, I could instead have based those View Objects on Entity Objects with the updateable flag set to no; what would you recommend?
    By creating a separate Application Module for those selection view objects, I know that update will not be possible.
    Thank you for your advice.
    Frederic

    See this article for View Object tuning tips:
    http://www.oracle.com/technology/products/jdev/tips/muench/voperftips/index.html
    Are you planning to use this AM as a nested AM inside other application modules?
    If you don't, the "selection AM" will have its own, separate database connection/transaction.
    If you do, it will share the connection and transaction with its containing "root" AM.
    It may not be relevant to your application, but just realize (which is explained in the article I point to above) that view objects that are not related to entity objects do not "see" pending changes in the current transaction. That feature depends on the VO/EO cooperation. It's fine to build VO's without an EO -- in fact we've made it easier to do this in 10.1.2 in the Design Time wizards -- but you just want to make sure you realize what features it's giving up. If you don't need those EO-related features, then by all means create an Expert Mode VO that's not related to EO's.

  • I downloaded some music but there are two copies of each song showing up on the phone, though not in iTunes or anywhere else.

    I recently downloaded songs from iTunes on my computer. Usually I don't do this because I think it's a pain to connect my iPhone to the computer, but since I had recently updated it with iCloud and the new iOS software, I figured I could do it just this once. Anyway, I downloaded some songs on my PC and everything was going well up until I was about to connect the iPhone to sync the songs. For some reason I remembered iCloud, so I looked up the songs on my phone, and sure enough there they were. I clicked on them to see if they would play, but they wouldn't, so I just connected the phone to the computer. After everything had synced, I found that there were now two copies of each song I had just bought. At first none of the songs would play, but then I connected my phone to Wi-Fi, and I think this is when iCloud came into play, because now one copy will play and the other won't. It annoys me because it's not clear whether I bought the songs twice or whether it's some kind of glitch. I don't think I bought them twice, though, because there are no duplicate copies showing up in iTunes. I just want to know how to fix this, because it's annoying to have two copies of a song when one doesn't even play. Could anyone please help?

    Apple does not respond to these forums but users do.  You can sync both devices to the same computer using the same itunes account.  After initial setup you can pick and choose what apps you want on your devices.  I have an iphone, ipod touch and ipad that I sync to the same computer and account.  I have different apps, music, photos, etc on each.  The key is one computer, one itunes account and you will be fine.

  • Best practice for smooth workflow in PrE?

    Hi all.  I'm an FCP user for many many years, but I'm helping an artist friend of mine with a Kickstarter video...and he's insistent that he's going to do it himself on his Dell laptop running Win7 and PrE (I believe v11, from the CS3 package)...so I'm turning to the forum here for some help.
    In Apple Land (that is, those of us still using FCP 7), we take all our elements in whatever format they're delivered to us and transcode them to ProRes, DVCPro HD or XDCAM...it just makes it easier not to deal with mixed formats on the timeline (please, no snarky comments about that, OK, I turn out broadcast work every week doing this so this method's got something going for it...).  However, when I fired up PrE I see that you can edit in all sorts of formats, including long-GOP formats like .mts and mp4 files that I wouldn't dream of working with natively in FCP...I don't enjoy staring at spinning beachballs that much. 
    Now, remembering that he's working with a severely underpowered laptop, with 2 GB of RAM and a USB 2 connection to his 7200 rpm "video" drive, and also considering that most of the video he'll be using will come in two flavors (AVCHD from a Canon Vixia 100, and HDV from a Canon EX-something or other), what would be the best way to proceed to maximize the ease with which he can edit? I'm thinking that transcoding to something like Motion-JPEG or some other intraframe-compressed AVI format would be the way to go... it's a short video and he won't have that much material, so file-size inflation isn't an issue; speed and ease of processing the video files on the timeline (or do they call it a "Sceneline") is.
    Any advice (besides "buy another computer") would be appreciated...

    Steve, thanks, this is helping me now.
    I mention MJPEG because, as an intraframe compression method (each frame is compressed independently), it's less processor-intensive than GOP-style MPEG compressions. Again, my point of reference is the Mac/FCP7 world (so open my eyes as to how an Intel processor running Win7 would work differently), but over there best practice says NOT to edit in a GOP-based codec (XDCAM being the exception which proves the rule, e.g., render times), but to transcode everything from, say, H.264 or AVC-whatever into ProRes. YES, I know PrE (and PPro) doesn't use ProRes... not asking that. But, at least at this juncture, any sort of hardware upgrade is out of the question... this is what he's going to be using to edit. Now, if I were going to use an underpowered Mac laptop to try and edit, I most certainly would not try to push native AVCHD .mts files or native H.264 files through it... those don't even work well with the biggest Mac Pro towers. What is it about PrE that allows it to efficiently work with these processor-intensive formats? This is the crux of the issue, as I'm going to advise my friend to "work this way" and I don't want to send him down the garden path of render hell...
    And finally, your advice to run tests is well-given...since I have no experience with PrE and his computer, I guess that's where we'll start...

  • Best Practices for ASAP Inputs - As-Is Business Process Mapping

    I am new to the SAP world and my company is in the early phases of implementation.  I am trying to prepare the "as-is" business process maps for the Project Preparation and Business Blueprint phases and I am looking for some best practices.  I've been told that we don't want them to go too deep but are there best practices and/or examples that give more information on what we should be capturing and the format.
    I have searched the forums, WIKI, ASAP documentation, and other areas but have not found much at this level of detail.  I have reviewed the [SAP BPM Methodology|http://wiki.sdn.sap.com/wiki/display/SAPBPX/BPM+Methodology] but again I am looking for more detail if anyone can direct me to that.
    Thank you in advance for any assistance.
    Kevin

    Hello Kevin,
    You can try to prepare a Word document for each of your as-is processes first, before moving on to the as-is process design in a flowchart.
    The Word document can have seven sections:
    1. Name of the process owner, designation, process responsibility, user department(s) involved, module name, and a document number for reference.
    2. Process definition details: name of the major process, name of the minor process, name of the sub-process, and a process ID for future reference.
    3. Inputs: details of each input, the vendor for the input, type of input (data / activity / process), category of input (vital / essential / desirable), and mode of information (hard / soft copy).
    4. Process details: here you can describe the process in full.
    5. Outputs of the process: the customer to whom each output is sent, type of output (report / approval / plan / request / email / fax), category of output (vital / essential / desirable), and mode of information (hard / soft copy).
    6. Issues / pain areas in this process: issue description, remarks, expectations, and priority (high / medium / low).
    7. Reports expected out of this process in future, for internal and external reporting.
    Hope this helps.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex, I sometimes struggle with the aggregates and getting them to work/appear with different dimensions. (Using the menu "More" > "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance: I have about 10-12 different dimensions. Should all of them always be connected to each fact table, either on the Detail or the Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    You asked: should all 10-12 dimensions always be connected to each fact table, either on Detail or Total level? It is not necessary to connect all dimensions; it depends on the reports you are creating. But as a best practice, we should map everything at the Detail level when specifying join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, you should use the Detail level; otherwise use the Total level (at the Product or Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition).
    thanks,
    Saichand.v

  • Best practice for sqlldr -- direct to core or to stage first?

    We want to begin using SQL*Loader to load simple (but big) tables that have, up to this point, been loaded via Perl and its DBI connection to Oracle. The target tables typically receive 10-20 million rows per day (parsed log data from many thousands of machines) and at any one time can hold more than a billion total records PER TABLE. These tables are pretty simple (typically 5-10 columns, 2- or 3-part primary keys). They are partitioned BY MONTH (DAY is always one of the primary key columns) and set up on very large SAN disk arrays, striped, etc. I can use sqlldr to load the core tables directly, OR I could use sqlldr to load a staging table on a daily basis, then PL/SQL and SQL*Plus to move data from the staging table to the core. My instinct tells me that the second route is SAFER, that is, there is less chance that something catastrophic could corrupt the core table, but obviously this would (a) take more time to develop and (b) reduce our overall throughput.
    If I go the first route, loading the core directly with sqlldr, what is the worst thing that could possibly happen? That is, in anyone's experience, can a sqlldr problem corrupt a very large table? Does the likelihood of a catastrophic problem increase in proportion to the number of rows already in the target table? Are there strategies that will mitigate potential catastrophes besides going to staging and then to core via PL/SQL? For example, if my core is partitioned by month, might I limit potential damage to the current month only? Are there any known pitfalls to using sqlldr directly in this fashion?
    Thanks
    matthew rapaport
    [email protected]
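    The question assumes a core table range-partitioned by month. A minimal sketch of what such a table might look like (table and column names here are hypothetical, not from the thread); a bad load is then confined to the partitions it actually touched:

        -- Hypothetical monthly range-partitioned core table. DAY drives the
        -- partitioning, so a problem load only affects the current month's
        -- partition, which can be truncated and reloaded in isolation.
        CREATE TABLE log_core (
          day        DATE   NOT NULL,
          host_id    NUMBER NOT NULL,
          bytes_sent NUMBER,
          CONSTRAINT log_core_pk PRIMARY KEY (day, host_id)
        )
        PARTITION BY RANGE (day) (
          PARTITION p_2009_01 VALUES LESS THAN (TO_DATE('2009-02-01','YYYY-MM-DD')),
          PARTITION p_2009_02 VALUES LESS THAN (TO_DATE('2009-03-01','YYYY-MM-DD'))
        );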

    Wow, thanks everyone!
    1. External tables... I'd thought of this, but in our development group we have no direct access to the DBMS server, so we'd have to set up some workflow to move the data files to the DBMS server and then write the merge. If SQL*Loader will do the job directly (to the core) without risk, then that seems to be the most straightforward way to go.
    2. The data in the raw files is very clean, that being handled in the step that parses the raw logs (100-500 MB each) into the "insert files" (~20 MB each), and there would be no transformations in moving data from staging to core, so again that appears to argue for direct-to-core loading.
    3. The data is collected by DAY but reported on mostly by MONTH (e.g., select day, sum(col), count(col) from TABLE where day between A and B group by day order by day, where A and B are usually the first and last day of the month), and that is why the tables are partitioned by month, but perhaps this is not the best practice (???). I'm not the DBA, but I can make suggestions... What do you think?
    4. Time to review my sqlldr docs! I haven't used it in a couple of years, and I'm keeping my fingers crossed that it can handle the particular delimiter used in these files (pipe-tab-pipe, expressed in Perl as "|\t|"). If I recall it can, but I'm not sure how to express the tab (see the sketch below)...
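    On point 4: a sqlldr control file can express non-printable delimiter characters in hex, so pipe-tab-pipe would be FIELDS TERMINATED BY X'7C097C'. And on point 2, since no transformations are needed, the staging-to-core move can stay a single set-based statement. A minimal sketch, again with hypothetical table and column names:

        -- Hypothetical staging-to-core move after a daily sqlldr load into LOG_STAGE.
        -- The APPEND hint requests a direct-path insert above the high-water mark,
        -- so the core table's existing data is never rewritten by the load.
        INSERT /*+ APPEND */ INTO log_core (day, host_id, bytes_sent)
        SELECT day, host_id, bytes_sent
        FROM   log_stage;
        COMMIT;
        TRUNCATE TABLE log_stage;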
    Meanwhile, thank you very much, you have all been a BIG help... Strange no one asked me how it was that a Microsoft company was using Oracle :-) ... I work for DANGER INC (was www.danger.com if anyone interested) which is now owned (about 9 months now) by Microsoft, and this is the legacy reporting system... :-)
    matthew rapaport
    [email protected]
    [email protected]

  • Best practice for using object styles to manage image text-wrap issues when aiming at both print and EPUB output?

    I have a work-flow question about object styles, text-wrap, and preparing a long document with lots of images for dual print/EPUB output in InDesign CC 2014.
    I am sort of experienced with InDesign but new to EPUB export. I have hundreds of pages and hundreds of images so I'd like to make my EPUB learning curve, in particular, less painful.
    Let me talk you through what I'm planning and you tell me if it's stupid.
    It's kind of a storybook-look I'm going for. Single column of text (6" by 9" page) with lots of small-to-medium images on the page (one or two images per page), and the text flowing around, sometimes right, sometimes left. Sometimes around the bounding box, sometimes following the edges of the images. So in each case I'm looking to tweak image size and placement and wrap settings so that the image is as close to the relevant text as possible and the layout isn't all wonky. Lovely print page the goal. Lots of fussy trade-offs and deciding what looks best. Inevitably, this will entail local overrides of paragraph styles. So what I want to do, I guess, is get the images as closely placed as possible, before I do any of that overriding. Then I divide my production line.
    1) I set aside the uniformly-styled doc for later EPUB export. (This is wise, right? Start for EPUB export with a doc with pristine styles?)
    2) With the EPUB-bound version set aside, I finish preparing the print side, making all my little tweaks. So many pages, so many images, so many little nudges. If I go back and nudge something at the beginning, everything shifts a little. It's broken up into lots of separate stories, but still... there is no way to make this non-tedious. But what is best practice? I'm basically just doing it by hand, eyeballing it and dropping an inline anchor to some close bit of text as insurance, i.e. if there's a major text change my image will still be almost where it belongs. I try to get the early bits right so that I don't have to go back and change them and then mess up stuff later. Object styles don't really help me with that. Do they? I haven't found a good use for them at this stage. (Obviously if I had to draw a pink line around each image, or whatever, I'd use object styles for that.)
    Now let me shift back to EPUB. Clearly I need object styles to prepare for export. I'm planning to make a left float style and a right float style and a couple of others for other cases. And I'm basically going to go through the whole doc selecting each image and styling it in whatever way seems likeliest. At this point I will change the inline anchors to above line or custom, since I'm told EPUB doesn't like the inline ones.
    I guess maybe it comes down to this. I realize I have to use object styles for images for EPUB, but for print, manual placement - to make it look just right - and an inline anchor seems best? I sort of feel like if I'm going to bother to use object styles for EPUB I should also use them for print, but maybe that's just not necessary? It feels inefficient to make so many inline anchors and then trade them for a custom thing just for EPUB. But two different outputs means two different workflows. Sometimes you just have to do it twice.
    Does this make sense? What am I missing, before I waste dozens of hours doing it wrong?

    I've moved your question to the InDesign EPUB forum for best results.

  • What is the best practice for package source locations?

    I have several remote servers (about 16) that are being utilized as file servers; they hold many binaries to be used by users and remote site admins for content. Can I have SCCM just use these pre-existing locations as package sources, or is this not considered best practice?
    Or
    Should I create just one package source within close proximity to the Site Server, or on the Site Server itself?
    Thanks

    The primary site server is responsible for grabbing the source data and turning it into packages for distribution points. So while you can use ANY UNC path as a source location for content, you should be aware of where that content lives in relation to your primary site server. If your source content is in Montana but your primary server is in California... there's going to be a WAN hit... even if the DP it's destined for is also in Montana.
    Second, I strongly recommend locking down your source UNC path so that only the servers and SCCM admins can access it. This will prevent side-loading of content as well as any "accidental changing" of folder structure that could cause your applications/packages to go crazy.
    Put the two together, and I typically recommend you create a DSL (distributed source library) share and slowly migrate all your content into it as you create your packages/applications. You can then safely create batch installers, manage content versions, and do other things without fear of someone running something out of context.

  • Best Practice for DS6.2 disk usage

    Hi:
    We are installing DS6.2 on Windows 2003 SP1, configured with a 10 GB OS drive and a 50 GB data drive. Both are connected to EMC SAN but are not on the same LUN; the data drive is higher-end disk. Our initial idea was to install DS on the OS drive and then place the directory instances and the logs on the data drive. After mulling it over, it seems this may not be the best practice. Does anyone have any knowledge of the best use of disk, or know of any best-practice documentation?
    Thanks for your help!
    Mike

    Those numbers indicate that your application is in fact doing something. Perhaps you have a timer still running, or work being done on an enterFrame event?

  • CF10 Production Best Practices

    Is there a document or additional information on the best way to configure multiple instances of CF10 in a production environment? Do most folks install CF10 as an ear/war J2EE deployment under JBoss or Tomcat, with Apache as the web server?

    There’s no such document that I know of, no.
    And here’s a perfect example where “best practices” is such a loaded phrase.
    You wonder whether to "install CF10 as an ear/war J2EE deployment under JBoss or Tomcat, with Apache as the web server". I’d say the answer to that is "absolutely not". Most folks do NOT deploy CF as a JEE ear/war. It’s an option, yes. And if you are running a JEE server already, then it does make great sense to deploy CF as an ear/war on said container.
    But would it be a recommended practice for someone installing CF10 without interest in JEE deployment? I’d say not likely, unless they already have familiarity with JEE deployment.
    Now, could one argue "but there are benefits to deploying CF on a JEE container"? Sure, they could. But would it be a "best practice"? Only in the minds of a small minority, I think (those who appreciate the benefits of native JEE deployment and containers). Of course, CF already deploys on a JEE container (Tomcat in CF10, JRun in CF 6-9), but the Standard and Enterprise Server forms of deployment hide all that detail, which is best for most. With those, we just have a ColdFusion directory and are generally none the wiser that it runs on JRun or Tomcat.
    That leads then to the crux of your first sentence: you mention multiple instances. That does change things quite a bit.
    First, a couple of points of clarification before proceeding: in CF 7-9, such "multiple instance" deployment was for most folks enabled using the Enterprise Multiserver form of deployment, which created a JRun4 directory where instances were installed (as distinguished from the Enterprise Server form I just mentioned above, which hid the JRun guts).
    In CF10, though, there is no longer a "multiserver" install option. It’s just that CF10 Enterprise (or the Trial or Developer editions) does let you create new instances, using the same Instance Manager in the CF Admin that existed for CF Enterprise Multiserver from 7-9. CF10 still only lets you create new instances with the Enterprise (or Trial or Developer) edition, not Standard.
    (There is a change in CF10 regarding multiple instances, though: note that in CF10, you never see a Tomcat directory, even if you want "multiple instances". When you create them, they are created right under the CF10 directory, as siblings to the cfusion directory. And while that cfusion directory previously existed only in the CF 7-9 Multiserver form of deployment, it now exists even in CF10 Standard, as the only instance Standard can use.)
    So all that is a lot of info, not any “best practices”, but you asked if there was any “additional info”, and I thought that helpful for you to have as you contemplate your options. (And of course, CF10 Enterprise does still let you deploy as a JEE ear/war if you want.)
    But no, doing it that way would not be a best practice. If someone asked for "the best way to configure multiple instances of CF10 in a production environment", I’d tell them to just proceed as they would have in CF 7-9, using the same CF Admin Instance Manager capability to create them (and optionally cluster them).
    All that said, everything about CF10 does now run on Tomcat instead of JRun, and some things are improved under the covers, like clustering (and related things, like session replication), because those are now Tomcat-based features (which are actively updated and used by the Tomcat community), rather than JRun-based (which were pretty old and hardly used by anyone since JRun was EOL-ed several years ago).
    I’ll note that I offer a talk with a lot more detail contrasting CF10 on Tomcat to CF9 and earlier on JRun. That may interest you, snormo, so check out the presentations page at carehart.org.
    Hope all that’s helpful.
    /charlie
    PS You conclude with a mention of Apache as the web server. And sure, if one is on a *nix deployment or just favors Apache, it’s a fine option. But someone running CF10 on Windows should not be discouraged from running IIS. It’s come a long way and is now very secure, flexible, and capable, whether used for one or multiple instances of CF.

  • Best practices about JTables.

    Hi,
    I've been programming in Java for five months. Now I'm developing an application that uses tables to present information from a database. This is my first time handling tables in Java. I've read Sun's Swing tutorial about JTable, and several pieces of information on other websites, but they are limited to the JTable syntax and don't cover best practices.
    So I settled on what I think is a proper way to handle data from a table, but I'm not sure it is the best way. Let me tell you the general steps I'm going through:
    1) I query employee data from Java DB (using EclipseLink JPA), and load it in an ArrayList.
    2) I use this list to create the JTable, prior transformation to an Object[][] and feeding this into a custom TableModel.
    3) From now on, if I need to search an object on the table, I search it on the list and then with the resulting index, I get it from the table. This is possible because I keep the same row order on the table and on the list.
    4) If I need to insert an item on the table, I do it also on the list, and so forth if I'd need to remove or modify an element.
    Is the technique I'm using a best practice? I'm not sure that having to keep the table synchronized with the list is the best way to handle this, but I don't know how I'd deal with just the table, for instance to efficiently search for an item or to sort the table, without doing that first on a list.
    Are there any best practices in dealing with tables?
    Thank you!
    Francisco.

    Hi Joachim,
    What I'm doing now is extending DefaultTableModel instead of AbstractTableModel. This saves implementing methods I don't need, and I inherit methods like addRow from DefaultTableModel. Let me paste the private class:
    protected class MyTableModel extends DefaultTableModel {
        private Object[][] datos;
        public MyTableModel(Object[][] datos, Object[] nombreColumnas) {
            super(datos, nombreColumnas);
            this.datos = datos;
        }
        // Make every cell read-only.
        @Override
        public boolean isCellEditable(int fila, int columna) {
            return false;
        }
        // Report each column's class from the first row so that
        // renderers and sorters behave correctly.
        @Override
        public Class<?> getColumnClass(int col) {
            return getValueAt(0, col).getClass();
        }
    }
    What you are suggesting, if I understood correctly, is to register MyTableModel as a ListSelectionListener, so changes on the list will be observed by the table? In that case, if I add, change or remove an element from the list, I could add, change or remove that element from the table.
    Another question: is it possible to use the list only to create the table, but then manage everything just with the table, without using a list?
    Thanks.
    Francisco.
