FFmpeg question - best practice

I have a script I saved and have used for a while without any issues:
#!/bin/bash
# Remux every MKV in the current directory to MP4: copy the video, re-encode the audio to AC3
for i in *.mkv
do
    ffmpeg -i "$i" -acodec ac3 -vcodec copy "${i%.mkv}.mp4"
done
which gives me:
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (ac3 (native) -> ac3 (native))
My question is: would it be better for me to copy the audio instead, and is AC3 -> AC3 going to give me an issue?
Sometimes I get AAC source audio, which is why I specify AC3.

psjbeisler wrote: is AC3 -> AC3 going to give me an issue?
No, other than wasting time re-encoding. You probably wouldn't notice a difference in quality. If something weird happens, like a change in channel layout, then it should be reported upstream.
psjbeisler wrote: Sometimes I get AAC source audio which is why I specify AC3
AAC is the most common audio format for the MP4 container, so stream copying it would be the best option.
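For an AAC source, a remux-only command (just a minimal sketch of the stream-copy option) would look like:
ffmpeg -i input.mkv -vcodec copy -acodec copy output.mp4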
qubodup wrote:
You could check what the codec is:
codec=`ffprobe video.mkv 2>&1 >/dev/null |grep Stream.*Audio | sed -e 's/.*Audio: //' -e 's/[, ].*//'`
You can avoid the redirection, grep, and sed (see FFmpeg Wiki: FFprobe Tips).
$ ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 input.mkv
aac
Note that only the first audio stream will be probed in this example. If there are others they will be ignored. Change "-select_streams a:0" to "-select_streams a" if you want to list all.
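Putting the two together, here is a sketch of a loop that probes each file first and only re-encodes the audio when the source is something other than AAC or AC3 (assumes ffprobe is available and that AAC/AC3 are the only formats you want to pass through untouched):
#!/bin/bash
for i in *.mkv
do
    # ask ffprobe for the codec of the first audio stream
    acodec=$(ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=nw=1:nk=1 "$i")
    if [ "$acodec" = "aac" ] || [ "$acodec" = "ac3" ]; then
        # already MP4-friendly: just remux
        ffmpeg -i "$i" -vcodec copy -acodec copy "${i%.mkv}.mp4"
    else
        # anything else: keep the video, transcode the audio to AC3
        ffmpeg -i "$i" -vcodec copy -acodec ac3 "${i%.mkv}.mp4"
    fi
done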

Similar Messages

  • [CS5.5/6] - XML / Data Merge questions & Best practice.

    Fellow Countrymen (and women),
    I work as a graphic designer for a large outlet chain retailer which is constantly growing our base of centers. This growth has turned a workload that used to be manageable with just two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, but it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
    Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
    1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot handle a single-column, or even multi-column, list of these stores very easily and has forced us into some manual operations to concatenate the data into one cell and then, using delimiter characters, find and replace hard returns to separate them.
    2) Data Merge offers no method of alternate alignment of data, or selection by ranges.  That is to say:  I cannot tell Data merge to start at Cell1 in one column, and in another column select say... Cell 42 as the starting point.
    3) Data merge only accepts data organized in a very specific, and generally inflexible pattern.
    These are just a few limitations.
    ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
    Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign. 
    I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
    "Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
    What I've seen is that I would need to manually duplicate pages, linking the correct XML entry as I go rather than the program generating a set of merged pages like that from Data Merge with very little effort on my part.  Perhaps setting up a master page would allow for easy drag and drop fields for my XML data?
    I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution.  A tall order, I know.  Correct me if I'm wrong, but XML is that beast, no?
    Formatting the XML
    Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
    <BRANDS>
         <BRAND BrandID="xxxx">
              <BrandName>...</BrandName>
              <Description>...</Description>
              <WebMoniker>...</WebMoniker>
              <CATEGORIES>
                   <CATEGORY xmlns="URL" WebMoniker="category_type"/>
              </CATEGORIES>
              <STORES>
                   <STORE StoreID="ID#" CenterID="ID#"/>
              </STORES>
         </BRAND>
    </BRANDS>
    I don't think this is currently usable, because if I wanted to create a list of stores from a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
    Not to mention much of the important data is held in attributes rather than in text fields which are children of the tag.
    I'm thinking of proposing the following organizational layout:
    <CENTERS>
         <CENTER>
              <Center_name>...</Center_name>
              <Center_location>...</Center_location>
              <CATEGORIES>
                   <CATEGORY>
                        <Category_Type>...</Category_Type>
                        <BRANDS>
                             <BRAND>
                                  <Brand_name>...</Brand_name>
                             </BRAND>
                        </BRANDS>
                   </CATEGORY>
              </CATEGORIES>
         </CENTER>
    </CENTERS>
    My thought is that if I have the <CENTER> tag then I can simply drag that into a frame and it will auto populate all of the brands by Category (as organized in the XML) for that center into the frame.
    Why is this important?
    This is used on multiple documents in different layout styles, and since our store list is ever changing as leases end or begin, over 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for trudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to be able to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
    I'd love to hear your raw, unadulterated thoughts on the subject of Data merge and XML usage to accomplish these sorts of tasks.  What are your best practices and how would you / do you accomplish these operations?
    Regards-
    Robert

    From what I've gleaned through watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
    Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but in terms of my teammates and end-users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations -- I cope well with new learning and am self-taught in many tools and programs.
    Flow based XML structures:
    It seems as though as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically into new pages.  Basically what you do is to create an XML based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately and then after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
    From there simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming that everything is cascaded correctly, using auto-flow will cause new pages to be automatically generated with the tags correctly placed, in a similar fashion to Data Merge -- but far more powerful and flexible.
    The issue then again comes down to data organization in the XML file.  In order to use this method the data must be organized in the same order in which it will be displayed.  For example if I had a Lastname field, and a Firstname field in that order, I could not call the Firstname first without faulting the document using the flow method.  I could, however, still drag and drop content from each tag into the frame and it would populate correctly regardless of the order of appearance in the XML.
    Honestly either method would be fantastic for our current set of projects, however the flow method may be particularly useful in jobs that would require more than 40 spreads or simple layouts with huge amounts of data to be merged.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but think that perhaps I need general community advise on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to the post: the reports were all created in Crystal Reports 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips are appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks

  • ISE policy creation question - best practices

    Ok, I am a rookie ISE user here and am trying to learn as I go. I have a 802.1x policy for our corporate users on both wired and wireless and a wireless guest policy that redirects to the guest portal to enter credentials created in the sponsor portal. The corporate user has access to corporate resources and the guest basically has access to just the internet.
    I need to make what I am calling a Vendor policy that is basically a hybrid of the corporate user and the guest user. These would be vendors that are on-site to assist with programming and need access longer than what the guest account can be created for. This would also have specific ACLs that grant them access to the specific resources they would need. I would like to tie this into AD authentication since, in most cases, they have an AD account created to be able to access those corporate resources. My first question is: do I have a single policy that is tweaked as vendors come and go, or do I simply create a specific policy for each vendor? My second question is: do I, or should I, create unique SSIDs for each vendor?
    As I said, I am just now getting into configuring ISE. I am just not sure of what is considered a best practice or what is considered a secure way to make things happen. In regards to the policies I have created, they work, but I think I have a couple of holes to address.
    Thanks ...
    Brent

    Mostly makes sense. I have the AD part just need to get an AD group created for my test subject.
    I created an Endpoint Identity Group to place the vendors devices into so that we can allow laptop to connect but not phone. Got that.
    I think I can handle the Authorization Profile. It will be something like: if VendorAsset and AD1:ExternalGroups equals VendorADGroup, then VendorPermissions. VendorPermissions would be the ACL that limits where they can go. I also need to create a non-802.1x-based SSID and add this to the Authorization Profile, but it can still be generic enough to be usable by all vendors.
    I think it is my Authentication rules that I need to modify for Vendor as my Corporate based policies use Dot1x and I need a policy that does not use dot1x. Right?

  • Data warehousing question/best practices

    I have been given the task of copying a few tables from our production database to a data warehousing database on a once-a-day (overnight) basis. The number of tables will grow over time; currently it is 10. I am interested in not only task success but also best practices. Here's what I've come up with:
    1) drop the table in the destination database.
    2) re-create the destination table from the script provided by SQL Developer when you click on the 'SQL' tab while you're viewing the table.
    3) INSERT INTO the destination table from the source table using a database link. Note: I am not aware of any columns in the tables themselves which could be used to filter added/deleted/modified rows only.
    4) After data import, create primary key and indexes.
    Questions:
    1) SQL Developer included the following lines when generating the table creation script:
    <table creation DDL commands>
    then
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE (INITIAL 251658240 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_PGROW"
    it generated this code snippet for the table, the primary key and every index.
    Is this necessary to include in my code if they are all default values? For example, one of the indexes gets scripted as follows:
    CREATE INDEX "XYZ"."PATIENT_INDEX" ON "XYZ"."PATIENT" ("Patient")
    -- do I need the following four lines?
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 60817408 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "TBLSPC_IGROW"
    2) Anyone with advice on best practices for warehousing data like this, I am very willing to learn from your experience.
    Thanks in advance,
    Carl

    I would strongly suggest not dropping and recreating tables every day.
    The simplest option would be to create a materialized view on the destination database that queries the source database and to do a nightly refresh of that materialized view. You could then create a materialized view log on the source table and then do an incremental refresh of the materialized view.
    You can schedule the refresh of the materialized view either in the materialized view definition, as a separate job, or by creating a refresh group and adding one or more materialized views.
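    A minimal sketch of that approach (the link name src_db and table name patient are placeholders, not taken from your post):
    -- on the source database: a log so the materialized view can be fast-refreshed incrementally
    CREATE MATERIALIZED VIEW LOG ON patient WITH PRIMARY KEY;
    -- on the warehouse database: build the copy over the database link
    -- and refresh it automatically every night at 02:00
    CREATE MATERIALIZED VIEW patient_mv
      BUILD IMMEDIATE
      REFRESH FAST
      START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1 + 2/24
      AS SELECT * FROM patient@src_db;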
    Justin

  • Question: Best practices for dealing with multiple AM configurations

    Hello all,
    I have a project using ADF Business Components and ADF Faces. I would like to set up multiple configurations for the Application Modules to support the following scenarios:
    1). Local testing and debugging - using a connection defined in JDeveloper and AM Pooling turned off.
    2). Testing and debugging on an application server - using a JDBC Data Source and AM Pooling turned off
    3). Production deployment - using a JDBC Data Source and AM Pooling turned on.
    It is no problem to create multiple AM configurations to reflect this scenario. In order for the web part of the application to use the correct configurations, the DataBindings.cpx file must specify the correct ones. I was thinking to have 3 different DataBindings.cpx files and to change the CpxFileName context-param in the web.xml file as needed.
    My questions:
    1). Does this make sense as an approach? It should be better than having to change a single AM configuration every time I deploy or test. Is there any easy way to keep multiple DataBindings.cpx files in sync, given that we may add new pages from time to time? Alternatively, can we do some type of "include" processing to include just the dataControlUsages section into a common DataBindings.cpx file?
    2). How would you manage the build-and-deploy process? For the most part, in JDev we would be using configuration #1. The only time to switch to configuration #2 or #3 would be to build an EAR file for deployment. Is this something that it would make sense to accomplish with ANT? I'm not an ANT expert at all. The ANT script would have "build-test-ear" and "build-prod_ear" targets which would swap in a correct web.xml file, recompile everything, build the EAR, then put the development web.xml file back. I'm relatively sure this is possible... comments?
    3). Is there some other recommended approach?
    I appreciate any insights from experience, or even just ideas or thoughts that I can test out.
    Best regards,
    John

    Hi K,
    Sorry for the long, long delay in responding - I've been traveling - and thanks for the e-mail tickler too...
    To answer your question in short, I do think that ANT is the right way to go; there is an extra ANT task called XMLTask that I was able to download and play with, and it seems it would make this manipulation of the cpx file (or the xcfg file, for that matter) pretty straightforward. I don't have any code to post; it's just in the conceptual stage for me right now. I didn't see anything magical in JDev 11 TP3 that solved this problem for me either.
    Having said all of that, it's more complicated than it might appear. In addition to the DataBindings.cpx file (stores, among other things, which AM configuration to use for each data control), it's certainly possible to programmatically access an AM (specifying the configuration either directly in the code or via a properties file/etc). I'm not sure what the most common use case for AM configurations is, but in my case, I have a Test configuration and a Prod configuration. The Test config, among other things, disables AM pooling. When I am developing/testing, I always use the Test config; in Production, I always use the Prod config. Perhaps the best way for me to do this would be to have an "Active" config and use ANT tasks to copy either Test or Prod to "Active." However, our Subversion repository is going to have a few complaints about this.
    John

  • Session question; best practice

    Hi,
    One of our high-profile application's queries/updates are served to user sessions, but we want to improve user query performance and reduce general database activity.
    This piece of the application causes an auto-refresh to execute every 60 seconds. These queries execute against order tables looking for statuses on active orders, are user specific, and in some cases are not optimally tuned, producing very high database buffer get and disk read activity. On average, 1,500 executions representing various flavors of these queries are executed hourly.
    My questions are:
    1) How can we get maximum performance?
    2) Can we cache these queries, say for 30 seconds at a time?
    3) How can we cache, so that user sessions would access the cache?
    -sharma

    Well, you could load the data and put it in the application scope (in memory) with a timeout, so that it's not used after however long; once it expires, a request would have to go get the newer data from the DB.
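    As a rough illustration of that idea (a hypothetical sketch, not tied to any particular framework): a tiny shared cache with a time-to-live that every session reads from, so the order-status query runs at most once per interval.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Supplier;

    // Shared, time-bounded cache; an instance of this would live in application scope.
    public class TimedCache<K, V> {
        private static final class Entry<V> {
            final V value;
            final long loadedAt;
            Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
        }

        private final ConcurrentMap<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final long ttlMillis;

        public TimedCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

        // Return the cached value if it is younger than the TTL; otherwise run the
        // loader (the real database query) and cache its result.
        public V get(K key, Supplier<V> loader) {
            long now = System.currentTimeMillis();
            Entry<V> e = entries.get(key);
            if (e == null || now - e.loadedAt > ttlMillis) {
                e = new Entry<>(loader.get(), now);
                entries.put(key, e);
            }
            return e.value;
        }
    }
    With a 30-second TTL, cache.get(userId, () -> runOrderStatusQuery(userId)) would hit the database at most twice a minute per user instead of on every refresh.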

  • Redirection question - best practice

    I have a managed session-scoped bean named UserBean, which, as its name implies, stores user information. Now, if the session has expired (or was never created), a lot of its methods will return null values and will result in an error. What I'd like to know is the best way to redirect to a login page if the UserBean is null. My first idea was the following:
    <navigation-rule>
            <from-view-id>*</from-view-id>
            <navigation-case>
                <from-outcome>#{UserBean == null}</from-outcome>
                <to-view-id>/login.xhtml</to-view-id>
                <redirect/>
            </navigation-case>
    </navigation-rule>
    However, it didn't work. Am I onto something? If not -- what's the best solution?
    I appreciate your help.

    ServletRequest is an interface [1]. In a HTTP servlet environment the ServletRequest instance in the Filter is an implementation of HttpServletRequest [2]. So cast it back.
    [1] http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html
    [2] http://java.sun.com/javaee/5/docs/api/javax/servlet/http/HttpServletRequest.html
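    For completeness, a hypothetical sketch of that Filter approach (it assumes the session-scoped managed bean is stored under its managed-bean name "UserBean", and that the login page lives at /login.xhtml):
    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class LoginFilter implements Filter {
        public void init(FilterConfig config) { }
        public void destroy() { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // In an HTTP servlet environment these are HTTP-specific implementations, so cast them back.
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            HttpSession session = request.getSession(false);
            boolean loggedIn = session != null && session.getAttribute("UserBean") != null;
            boolean onLoginPage = request.getRequestURI().endsWith("login.xhtml");

            if (loggedIn || onLoginPage) {
                chain.doFilter(req, res);  // let the request through
            } else {
                response.sendRedirect(request.getContextPath() + "/login.xhtml");
            }
        }
    }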

  • MiniDV workflow question - best practice

    I've got a client with a Canon DC100 miniDV camcorder. This unit does not seem to have a FireWire or USB port.
    What I have are 17 of these little puppies that I need to get into iMovie so I can teach him the iMovie basics.
    What I think I need is some freestanding reader that plugs into the FireWire/USB port.
    Is there a better way?
    Thanks

    Michael:
    Take a look at this camera here:
    http://www.camcorderinfo.com/content/Canon-DC100-Camcorder-Review.htm
    It records in MPEG2 format onto miniDVD. You can insert the miniDVDs into your G5's drive, take the movies out and convert them to DV. As far as I know, you may run into problems inserting miniDVDs/CDs in slot-loading drives, but not in a standard tray-loading one.
    The camera has an AV output, but you need an A/D converter to digitize the video. If your customer wants to learn to edit his home videos, he should switch to a miniDV (tape) consumer camera instead of getting any other hardware to work with this one.
      Alberto

  • Database Primary Key Question - Best Practices

    I posted this in the ADDT forum, but I imagine I'll get more responses here:
    All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber? Or only sometimes? Is there an argument to NOT use AutoIncrement? I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for an MS Access book I am involved with.)
    Alec
    Adobe Community Expert

    .oO(Alec)
    >I posted this in the ADDT forum, but I imagine I'll get more responses here:
    >All you database developers - how do you deal with primary keys? Do you ALWAYS use an AutoIncrement/AutoNumber?
    No.
    >Or only sometimes? Is there an argument to NOT use AutoIncrement?
    AUTO_INCREMENT is a proprietary MySQL feature. For some people this might be an argument against it, but it doesn't have to be. Every DBMS has its own special features. You just have to decide whether you want to keep your code/queries as portable as possible or want to get the most out of your DB. Usually I prefer performance/features over portability, simply because for me and my projects it's very unlikely that I have to change the DBMS. I've chosen MySQL for good reasons and will stay with it for quite a while.
    >I know how I create databases and how I usually do things. I know how a few of my colleagues work. But how about the rest of the world? (Research for an MS Access book I am involved with.)
    It always depends on the table itself, what data it contains, what I want to do with it and also some personal preferences. In n:m tables for example there's no need for an extra numeric PK, since the entire record already is the PK, built from two or more FKs.
    But if I need a numeric PK, I usually use sequences. Some DBMS support them natively; in MySQL they can be emulated with an extra table. It simply means that the PK number is generated _before_ the record itself is inserted. For me and my framework this has some advantages (makes the internal work a bit easier), but of course in other cases an AUTO_INCREMENT might be more appropriate.
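    As an illustration of the emulated-sequence idea, the usual MySQL pattern looks roughly like this (the table name is made up, not from this thread):
    CREATE TABLE order_seq (id INT UNSIGNED NOT NULL);
    INSERT INTO order_seq VALUES (0);
    -- grab the next number before inserting the actual record
    UPDATE order_seq SET id = LAST_INSERT_ID(id + 1);
    SELECT LAST_INSERT_ID();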
    So IMHO there's no general solution. If an AUTO_INCREMENT or something similar fits your needs, you should use it. I don't see a real problem with that.
    Micha

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is Staging, then Validation, and then Core.
    Staging just loads data from the source systems.
    Validation is to validate the data (every city has to have a country, ...).
    Core is my DWH schema.
    The first step in ETL is to load the data from Core to Validation; let's say my GEO_DIM dimension maps to Countries, Cities and Regions in Core. Additionally, I build a CRC sum when I download from Core to Validation and store the CRC checksum in a staging table.
    The second step is to load the data from the source systems to Staging, but only those rows that are not equal to the previously downloaded CRC checksum, so only changed or new data goes to Staging.
    The third step is to load that new/changed data from Staging to Core and to check some dependencies. It's just validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is "it depends"... Without kidding, are you planning to use a flat star table for this dimension? If that is the case, you would be joining the sources together and loading the result into the table.
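    As a sketch of what that load could look like (all table and column names here are assumptions, not taken from your post):
    INSERT INTO geo_dim (geo_key, country_name, region_name, city_name)
    SELECT geo_seq.NEXTVAL,
           co.country_name,
           r.region_name,
           ci.city_name
      FROM cities    ci
      JOIN regions   r  ON r.region_id   = ci.region_id
      JOIN countries co ON co.country_id = r.country_id;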
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (Target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (Target), what is the best practice? The workaround I have found was to add the same session twice and for the 1st instance select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is that the log shows it will still run the query against the Source tables. This will impact run times and double the querying against the Source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions to make it easier to review the code and follow the logic, rather than having to find the scripts and open each of them.
    Any assistance you can give with getting the two products working together would be GREATLY appreciated!
    Robert

    As I know, addUser(User user) { ... } is much more useful for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices question: how to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details for the workflows using them, for custom notifications sent by the WF.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the SS page?
    Writing information into a log or table at the database level works for troubleshooting, but we're looking for something that will provide the end user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We have implemented the same kind of requirement long back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise any message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S'):
    EXCEPTION
      WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
      WHEN OTHERS THEN
        p_result_flag    := 'E';
        p_result_message := hr_utility.get_message;
        fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
        fnd_message.set_token('2', substr(sqlerrm, 1, 200));
        fnd_msg_pub.add;
        p_result_message := fnd_msg_pub.get_detail;
    END;
    After executing the PL/SQL in Java we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(/* result-flag bind position */);
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(/* result-message bind position */);
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method of achieving this. What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT for example I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Best Practice EJB 3.0 Question

    I have a web application consisting of 3 projects:
    - Model (EJB 3.0 Session Beans connected to two different databases)
    - TagLibrary (custom tag library)
    - ViewController (Web App / GUI)
    Currently I am connecting to the EJB Beans using code that Jdeveloper generates for a test client:
    env.put( Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory" );
    env.put(Context.PROVIDER_URL, "t3://localhost:7101");
    However, I would like to move these to a properties file (I believe jndi.properties) so that they can be modified based on the app server.
    My question is following:
    What is the best practice for session beans in the Model project to access other session beans in the same project? Do I also need to specify the JNDI properties file and settings? (This occurs when a bean from one database needs to access a bean from another database.)
    Or should I really put these in two separate projects / EJB libraries?
    Thanks,
    Kris

    You have two options. The first is to use a JNDI lookup (you should be able to use just new InitialContext(), without the environment map).
    The second one is more elegant and, as far as I'm concerned, should be regarded as best practice: dependency injection:
    @EJB
    YourSessionBeanInterface yourEJB;
    If you get stuck, there is plenty of documentation about this on the internet.
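    And for the lookup option, a hypothetical helper along these lines would keep the JNDI details in a jndi.properties file on the classpath rather than in code (the class and method names here are illustrative):
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public final class ServiceLocator {
        private ServiceLocator() { }

        // new InitialContext() picks up jndi.properties from the classpath,
        // so the factory and provider URL can differ per app server.
        public static <T> T lookup(String jndiName, Class<T> type) throws NamingException {
            Context ctx = new InitialContext();
            return type.cast(ctx.lookup(jndiName));
        }
    }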
    Pedja
