Is FLV more efficient than m2ts as a proxy?

FLV is considerably smaller than m2ts, but I wonder if it is also less taxing on the system.


Similar Messages

  • Why is Array more efficient than flexible-size collection

It is a bit tricky for me to understand the two advantages of arrays, as follows:
1. access to items held in an array is often more efficient than access to the items in a comparable flexible-size collection,
2. arrays are able to store objects or primitive-type values, but a flexible-size collection can store only objects.
Thanks in advance

access to items held in an array is often more efficient than access to the items in a comparable flexible-size collection
The standard dynamic data structure comparable to the array (by standard I mean a member of the collections framework) is the ArrayList. In principle, access to an element is equally efficient in both cases (there's some overhead because you have to make a method call in the ArrayList case). It's only in very low-level algorithms that you need to use arrays. In most cases ArrayList is fine.
arrays are able to store objects or primitive-type values, but a flexible-size collection can store only objects
This is true of the standard collections, but there are alternative collections that can store primitives, for example,
http://pcj.sourceforge.net/
In the next version of Java there's a new concept called autoboxing. This means there's an implicit conversion taking place between primitive types and their corresponding wrapper classes, say between int and Integer. This means you will be able to write your program AS IF you could store primitives in the standard collections.

  • Is a WITH...SELECT query more efficient than a SELECT query ?

    Hi folks,
Is the WITH...SELECT just a convenience, or is it really more efficient than a simple SELECT with UNION ALL? e.g. is the following:
    with rset as (select dname,empno,ename,hiredate,sal from emp e,dept d where e.deptno=d.deptno)
    select dname,empno,ename,hiredate,sal,
    case
    when trunc(hiredate) < to_date('19800101','yyyymmdd') then 'Hired before 1980'
    when trunc(hiredate) between to_date('19800101','yyyymmdd') and to_date('19851231','yyyymmdd') then 'Hired between 1980 and 1985'
    else 'Hired after 1985'
    end as notes
    from rset
    union all
    select dname,empno,ename,hiredate,sal,
    case
    when sal < 500 then 'Salary less than 500'
    when sal between 501 and 1500 then 'Salary between 501 and 1500'
    else 'Salary greater than 1500'
    end as notes
    from rset;
    better than the following:
    select dname,empno,ename,hiredate,sal,
    case
    when trunc(hiredate) < to_date('19800101','yyyymmdd') then 'Hired before 1980'
    when trunc(hiredate) between to_date('19800101','yyyymmdd') and to_date('19851231','yyyymmdd') then 'Hired between 1980 and 1985'
    else 'Hired after 1985'
    end as notes
    from emp e,dept d where e.deptno=d.deptno
    union all
    select dname,empno,ename,hiredate,sal,
    case
    when sal < 500 then 'Salary less than 500'
    when sal between 501 and 1500 then 'Salary between 501 and 1500'
    else 'Salary greater than 1500'
    end as notes
    from emp e,dept d where e.deptno=d.deptno;
I am a newbie at SQL tuning. Apparently, the first query should be faster because it runs the actual query only once and then just works on the result set obtained. Is this thinking correct?
    Thanks a lot!!
    JP

Also, I tried a test here with a ten-million-row emp table queried five times, and explain plan showed the optimizer would read emp five times and not once.
Re: Interesting question
Apparently, the first query should be faster because it runs the actual query only once and then just works on the resultset obtained.
But my test, combined with Jonathan's article, made me question whether materializing ten million rows somewhere would be faster than querying them five times. Somehow I doubt it.
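Still, if you want to test the build-it-once behaviour for yourself, here is a sketch (hedged: the MATERIALIZE hint is undocumented, so verify what it does with explain plan rather than relying on it) that hints the factored subquery:
with rset as (
select /*+ materialize */ dname,empno,ename,hiredate,sal
from emp e,dept d where e.deptno=d.deptno
)
select dname,empno,ename,hiredate,sal,
case
when trunc(hiredate) < to_date('19800101','yyyymmdd') then 'Hired before 1980'
when trunc(hiredate) between to_date('19800101','yyyymmdd') and to_date('19851231','yyyymmdd') then 'Hired between 1980 and 1985'
else 'Hired after 1985'
end as notes
from rset
union all
select dname,empno,ename,hiredate,sal,
case
when sal < 500 then 'Salary less than 500'
when sal between 501 and 1500 then 'Salary between 501 and 1500'
else 'Salary greater than 1500'
end as notes
from rset;
With the hint in place the plan typically shows a TEMP TABLE TRANSFORMATION step, i.e. emp and dept are read once and the temporary result is scanned twice; compare that plan and its timing against the plain UNION ALL version before deciding.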

Under what circumstances is recursion more efficient than loops

Although recursion does provide cleaner code and often suggests a more interesting approach to a problem, it is nonetheless relatively more resource-consuming for some of the more novice programming feats (such as factorial calculation or binary conversion). Can somebody give me an example of a more practical and efficient use of recursion?

Can somebody give me an example of a more practical and efficient use of recursion?
What do you mean by practical or efficient? Most tree traversal implementations are recursive, since it's much easier to traverse trees using recursion.
Kaj

  • Is selecting from a view more efficient than selecting from multiple tables

Hi, here's the problem.
Let's say I created a view from 2 tables (person and info); both have an ID column:
create view table_view (age,name,status,id) as
select a.age, a.name, b.status, b.id
from person a, info b
where a.id=b.id
If I want to select a given range of values from these 2 tables, which of the following queries would be more effective?
select a.age, a.name, b.status, b.id
from person a, info b
where a.id=b.id
and a.id <1000
select age, name, status, id
from table_view
where id <1000

Bear in mind that this concept of views storing the SQL text is something specific to Oracle databases and not necessarily to other RDBMS products. For example, Ingres databases create "views" as tables of data on the database, and therefore there is a difference between selecting from the view and selecting from the base tables.
    Oracle also has "materialized views" which differ from normal "views" because they are actually created, effectively, as tables of data and will not use the indexes of the base tables.
    In Oracle, you cannot create indexes against "views" but you can create indexes against "materialized views".
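To make that concrete, a sketch in Oracle syntax using the person/info tables from the question (the MV options shown are just one common choice, not the only one):
create view table_view (age,name,status,id) as
select a.age, a.name, b.status, b.id
from person a, info b
where a.id=b.id;

create materialized view table_mv
build immediate
refresh complete on demand
as
select a.age, a.name, b.status, b.id
from person a, info b
where a.id=b.id;

create index table_mv_id_ix on table_mv (id);  -- allowed on the MV, not on the plain view
Selecting from table_view is rewritten against person and info (and their indexes), whereas selecting from table_mv reads the stored copy of the data and can use table_mv_id_ix.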

  • Is there something more efficient than EXISTS statement?

    Hi All,
    I have this query:
    select
    waco,
    ltrim(rtrim(wammcu)) as wammcu,
    wadoco,
    walitm, 
    wadl01,
    case 
    when wastrx = 0 
    then null 
    else 
    proddta.JulianToDate(wastrx) 
    end 
    AS wastrx,
    wauom,
    cast(wasoqs / 10000 as numeric (15,4)) as wasoqs,
    cast(wasocn / 10000 as numeric (15,4)) as wasocn,
--THIS IS THE PART TO OPTIMIZE
    (select
     sum(cast(glaa / 100 as numeric (15,2)))
    from
     proddta.f0911
    where
    glkco=waco and
    glco =waco and
    gldct='IV' and
    glsblt = 'W' and
    glsbl = right(replicate('0',8) + CONVERT(varchar,CONVERT(int,wadoco)),8)  and
    exists
    (select * from  proddta.f4095
     where
     mlanum in (3220,3260,3270,3280) and
     mlco=glco and
     mldcto=wadcto and
     mldct=gldct and
     mlcost='A1' and
     mlobj=globj)) as A1_Account 
    --END OF THE PART TO OPTIMIZE
    from  
    proddta.f4801
    where
    waco='00010' and
    wadcto='WO' and
    wasrst='99' and
    wastrx >=  114001
and exists
(select *
from
proddta.f0911
where
glkco=waco and
glco =waco and
gldct='IV' and
glsblt = 'W' and
glsbl = right(replicate('0',8) + CONVERT(varchar,CONVERT(int,wadoco)),8)  and
gldgj between  114001 and  114031)
It takes a very long time to execute, and my T-SQL is rather rusty; is there a way to improve the query with the newer constructs T-SQL has?

    This is the query plan:
    StmtText
      |--Compute Scalar(DEFINE:([Expr1008]=CASE WHEN [JDE_PROD].[PRODDTA].[F4801].[WASTRX]=(0.) THEN NULL ELSE [JDE_PROD].[PRODDTA].[JulianToDate](CONVERT_IMPLICIT(int,[JDE_PROD].[PRODDTA].[F4801].[WASTRX],0)) END, [Expr1020]=[Expr1018]))
           |--Nested Loops(Inner Join, OUTER REFERENCES:([JDE_PROD].[PRODDTA].[F4801].[WADCTO], [JDE_PROD].[PRODDTA].[F4801].[WADOCO], [JDE_PROD].[PRODDTA].[F4801].[WACO]))
                |--Hash Match(Right Semi Join, HASH:([JDE_PROD].[PRODDTA].[F0911].[GLSBL])=([Expr1021]), RESIDUAL:([JDE_PROD].[PRODDTA].[F0911].[GLSBL]=[Expr1021]))
                |    |--Clustered Index Seek(OBJECT:([JDE_PROD].[PRODDTA].[F0911].[F0911_PK]), SEEK:([JDE_PROD].[PRODDTA].[F0911].[GLDCT]=N'IV'),  WHERE:([JDE_PROD].[PRODDTA].[F0911].[GLDGJ]>=(114001.) AND [JDE_PROD].[PRODDTA].[F0911].[GLDGJ]<=(114031.)
    AND [JDE_PROD].[PRODDTA].[F0911].[GLKCO]=N'00010' AND [JDE_PROD].[PRODDTA].[F0911].[GLCO]=N'00010' AND [JDE_PROD].[PRODDTA].[F0911].[GLSBLT]=N'W') ORDERED FORWARD)
                |    |--Compute Scalar(DEFINE:([Expr1007]=ltrim(rtrim([JDE_PROD].[PRODDTA].[F4801].[WAMMCU])), [Expr1009]=CONVERT(numeric(15,4),[JDE_PROD].[PRODDTA].[F4801].[WASOQS]/(1.000000000000000e+004),0), [Expr1010]=CONVERT(numeric(15,4),[JDE_PROD].[PRODDTA].[F4801].[WASOCN]/(1.000000000000000e+004),0),
    [Expr1021]=CONVERT_IMPLICIT(nvarchar(8),right('00000000'+CONVERT(varchar(30),CONVERT(int,[JDE_PROD].[PRODDTA].[F4801].[WADOCO],0),0),(8)),0)))
                |         |--Clustered Index Scan(OBJECT:([JDE_PROD].[PRODDTA].[F4801].[F4801_PK]), WHERE:([JDE_PROD].[PRODDTA].[F4801].[WASTRX]>=(114001.) AND [JDE_PROD].[PRODDTA].[F4801].[WACO]=N'00010' AND
    [JDE_PROD].[PRODDTA].[F4801].[WADCTO]=N'WO' AND [JDE_PROD].[PRODDTA].[F4801].[WASRST]=N'99'))
                |--Compute Scalar(DEFINE:([Expr1018]=CASE WHEN [Expr1031]=(0) THEN NULL ELSE [Expr1032] END))
                     |--Stream Aggregate(DEFINE:([Expr1031]=COUNT_BIG([Expr1022]), [Expr1032]=SUM([Expr1022])))
                          |--Hash Match(Right Semi Join, HASH:([JDE_PROD].[PRODDTA].[F4095].[MLOBJ])=([JDE_PROD].[PRODDTA].[F0911].[GLOBJ]), RESIDUAL:([JDE_PROD].[PRODDTA].[F4095].[MLOBJ]=[JDE_PROD].[PRODDTA].[F0911].[GLOBJ]))
                               |--Clustered Index Seek(OBJECT:([JDE_PROD].[PRODDTA].[F4095].[F4095_PK]), SEEK:([JDE_PROD].[PRODDTA].[F4095].[MLANUM]=(3.220000000000000e+003) AND [JDE_PROD].[PRODDTA].[F4095].[MLCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO]
    AND [JDE_PROD].[PRODDTA].[F4095].[MLDCTO]=[JDE_PROD].[PRODDTA].[F4801].[WADCTO] AND [JDE_PROD].[PRODDTA].[F4095].[MLDCT]=N'IV' OR [JDE_PROD].[PRODDTA].[F4095].[MLANUM]=(3.260000000000000e+003) AND [JDE_PROD].[PRODDTA].[F4095].[MLCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO]
    AND [JDE_PROD].[PRODDTA].[F4095].[MLDCTO]=[JDE_PROD].[PRODDTA].[F4801].[WADCTO] AND [JDE_PROD].[PRODDTA].[F4095].[MLDCT]=N'IV' OR [JDE_PROD].[PRODDTA].[F4095].[MLANUM]=(3.270000000000000e+003) AND [JDE_PROD].[PRODDTA].[F4095].[MLCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO]
    AND [JDE_PROD].[PRODDTA].[F4095].[MLDCTO]=[JDE_PROD].[PRODDTA].[F4801].[WADCTO] AND [JDE_PROD].[PRODDTA].[F4095].[MLDCT]=N'IV' OR [JDE_PROD].[PRODDTA].[F4095].[MLANUM]=(3.280000000000000e+003) AND [JDE_PROD].[PRODDTA].[F4095].[MLCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO]
    AND [JDE_PROD].[PRODDTA].[F4095].[MLDCTO]=[JDE_PROD].[PRODDTA].[F4801].[WADCTO] AND [JDE_PROD].[PRODDTA].[F4095].[MLDCT]=N'IV'),  WHERE:([JDE_PROD].[PRODDTA].[F4095].[MLCOST]=N'A1') ORDERED FORWARD)
                               |--Index Spool(SEEK:([JDE_PROD].[PRODDTA].[F0911].[GLKCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO] AND [JDE_PROD].[PRODDTA].[F0911].[GLCO]=[JDE_PROD].[PRODDTA].[F4801].[WACO]
    AND [JDE_PROD].[PRODDTA].[F0911].[GLSBL]=CONVERT_IMPLICIT(nvarchar(8),right('00000000'+CONVERT(varchar(30),CONVERT(int,[JDE_PROD].[PRODDTA].[F4801].[WADOCO],0),0),(8)),0) AND [JDE_PROD].[PRODDTA].[F0911].[GLDCT]=N'IV' AND [JDE_PROD].[PRODDTA].[F0911].[GLSBLT]=N'W'))
                                    |--Compute Scalar(DEFINE:([Expr1022]=CONVERT(numeric(15,2),[JDE_PROD].[PRODDTA].[F0911].[GLAA]/(1.000000000000000e+002),0)))
                                         |--Clustered Index Scan(OBJECT:([JDE_PROD].[PRODDTA].[F0911].[F0911_PK]))
it returns 1734 rows; the tables queried contain millions of rows
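One restructuring worth testing (a sketch only, trimmed to a few of your output columns; it assumes SQL Server 2005 or later for APPLY, and it is logically the same aggregate, just expressed so the optimizer has more freedom to unnest it): compute the padded document number once and move the correlated scalar subquery into an OUTER APPLY:
select w.waco,
       w.wadoco,
       agg.a1_amount as A1_Account
from proddta.f4801 w
cross apply (select right(replicate('0',8) + CONVERT(varchar,CONVERT(int,w.wadoco)),8) as glsbl) k
outer apply (
    -- same aggregate as the original scalar subquery, once per f4801 row
    select sum(cast(g.glaa / 100 as numeric(15,2))) as a1_amount
    from proddta.f0911 g
    where g.glkco = w.waco
      and g.glco = w.waco
      and g.gldct = 'IV'
      and g.glsblt = 'W'
      and g.glsbl = k.glsbl
      and exists (select 1 from proddta.f4095 m
                  where m.mlanum in (3220,3260,3270,3280)
                    and m.mlco = g.glco
                    and m.mldcto = w.wadcto
                    and m.mldct = g.gldct
                    and m.mlcost = 'A1'
                    and m.mlobj = g.globj)
) agg
where w.waco = '00010'
  and w.wadcto = 'WO'
  and w.wasrst = '99'
  and w.wastrx >= 114001
  and exists (select 1 from proddta.f0911 g
              where g.glkco = w.waco
                and g.glco = w.waco
                and g.gldct = 'IV'
                and g.glsblt = 'W'
                and g.glsbl = k.glsbl
                and g.gldgj between 114001 and 114031);
The Index Spool in your plan suggests the optimizer is building a throwaway index on f0911 every run; a permanent index on f0911 (gldct, glsblt, glsbl) including glaa and globj would likely help both the original and the rewritten form.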

  • All things being equal, does 4th gen quad core i7 (2.5GHz) need more RAM than 5th gen dual core i5 (2.7GHz) for same tasks?

    Bought mid/late 2014 15" MBPr with the i7 2.5GHz and 16GB RAM. I'm within 2 week exchange period at Best Buy, and thinking of switching to 2015 13" MBPr with i5 2.7GHz and 8GB RAM for battery life and weight, and they don't offer the upgraded 16GB RAM option for any 13" models.
    Needs to be Best Buy because of gift cards
    Use docked at office 75% of time vs 25% out of office, so portability and battery life are nice to have but not as important as if I had 8 hour flights on weekly basis
    See appeal of having 5th gen i5 in 13" vs my 4th gen i7 that will be updated this summer
    Not a gamer and don't use Final Cut or other video editing, but I tend to leave several applications open and usually have between 1-3 GB free RAM of the 16GB total at any time according to my Memory Cleaner readings
Therefore, I'm worried that 8GB is a non-starter, as simple math says that I would have a shortfall of 5-7GB of RAM based on the above readings, but I'm wondering if the dual-core i5 will use RAM more efficiently than the quad-core i7
    Any help greatly appreciated!

Also to note, I would like this to last 4 years, which was the reason I bought the higher-spec'd model even though I don't run games or video editing

  • Lib in different drives, is it more efficient?

How much more efficient is it for the CPU to have your lib spread out across different external drives? e.g. strings on one drive, perc on another drive, etc.
    Thanks,
    Guy

Thanks for the reply, but I wanted to go one step further, since I often write in a symphonic texture.
If I put each section (strings, WW, brass, perc, synth, etc.) on a different drive, will that make my system more efficient than if I put all the samples on one drive? I'm concerned because it involves some investment, but if it's confirmed that it will make a difference, then I don't mind investing in a couple more drives.

  • 3 Table Joins -- Need a more efficient Query

I need a 3-table join but need to do it more efficiently than I am currently doing it. The query is taking too long to execute (in excess of 20 mins; these are huge tables with 10 mil+ records). Here is what the query looks like right now. I need 100 distinct acctnum from the below query with all the conditions as requirements.
    THANKS IN ADVANCE FOR HELP!!!
SELECT /*+ parallel */ acctnum
      FROM (SELECT  /*+ parallel  */  DISTINCT (a.acctnum),
                                  a.acctnum_status,
                                  a.sys_creation_date,
                                  a.sys_update_date,
                                  c.comp_id,
                                  c.comp_lbl_type,
                                  a.account_sub_type
                  FROM   account a
                         LEFT JOIN
                            company c
                         ON a.comp_id = c.comp_id AND c.comp_lbl_type = 'IND',
                         subaccount s
                 WHERE       a.account_type = 'I'
                         AND a.account_status IN ('O', 'S')
                        and s.subaccount_status in ('A','S')
                         AND a.account_sub_type NOT IN ('G', 'V')
                         AND a.SYS_update_DATE <= SYSDATE - 4 / 24)
    where   ROWNUM <= 100 ;

    Hi,
    Whenever you have a question, post CREATE TABLE and INSERT statements for a little sample data, and the results you want from that data.  Explain how you get those results from that data.
    Simplify the problem, if possible.  If you need 100 distinct rows, post a problem where you only need, say, 3 distinct rows.  Just explain that you really need 100, and you'll get a solution that works for either 3 or 100.
    Always say which version of Oracle you're using (e.g. 11.2.0.3.0).
    See the forum FAQ: https://forums.oracle.com/message/9362002
    For tuning problems, also see https://forums.oracle.com/message/9362003
    Are you sure the query you posted is even doing what you want?  You're cross-joining s to the other tables, producing all possible combinations of rows, and then picking 100 of those in no particular order (not even random order).  That's not necessarily wrong, but it certainly is suspicious.
    If you're only interested in 100 rows, there's probably some way to write the query so that it picks 100 rows from the big tables first. 
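For example, a sketch along those lines (the join from subaccount to account is a guess; I've used a hypothetical s.acctnum = a.acctnum, so substitute your real key, and the LEFT JOIN to company is dropped since it only supplied output columns you don't need for acctnum):
SELECT acctnum
FROM  (SELECT /*+ parallel */ DISTINCT a.acctnum
       FROM   account a
       WHERE  a.account_type = 'I'
       AND    a.account_status IN ('O', 'S')
       AND    a.account_sub_type NOT IN ('G', 'V')
       AND    a.sys_update_date <= SYSDATE - 4 / 24
       AND    EXISTS (SELECT 1
                      FROM   subaccount s
                      WHERE  s.acctnum = a.acctnum   -- hypothetical key; use your real one
                      AND    s.subaccount_status IN ('A', 'S')))
WHERE  ROWNUM <= 100;
The EXISTS stops probing subaccount at the first match per account instead of producing all combinations, and ROWNUM <= 100 on the outside lets Oracle stop early.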

Determine whether the new query is more efficient than the old query

How can we determine whether the new query is more efficient than the old query?

Hi,
Look at the explain plan and compare both of them;
you should find which one is more efficient than the other.
cheers
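The usual Oracle workflow, as a sketch (any query can stand in for the SELECT; emp/dept here are just the demo tables used earlier in this thread):
explain plan for
select dname, ename
from emp e, dept d
where e.deptno = d.deptno;

select * from table(dbms_xplan.display);
Run this once for the old query and once for the new one and compare the plans; where possible, also compare actual timings and logical reads rather than trusting the cost figures alone.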

  • Creating a time channel in the data portal and filling it with data - Is there a more efficient way than this?

    I currently have a requirement to create a time channel in the data portal and subsequently fill it with data. I've shown below how I am currently doing it:
Time_Ch = ChnAlloc("Time channel", 271214, 1, , "Time", 1, 1)    'Allocate time channel
    For intLoop = 1 to 271214
      ChD(intLoop,Time_Ch(0)) = CurrDateTimeReal          'Create time value
    Next
    I understand that the function to create and allocate memory for the time channel is extremely quick. However the time to store data in the channel afterwards is going to be highly dependent on the length I have assigned to the Time_Ch. In my application the length of Time_Ch is variable but could easily be in the order of 271214 or higher. Under such circumstances the time taken to fill Time_Ch is quite considerable. I am wondering whether this is the most appropriate way of doing things or whether there is a more efficient way of creating a time channel and filling it.
    Thanks very much for any help.
    Regards
    Matthew

    Hi Matthew,
    You are correct that there is a more efficient way to do this.  I'm a little confused about your "CurrDateTimeReal" assignment-- is this a constant?  Most people want a Time channel that counts up linearly in seconds or fractions of a second over the duration of the measurement.  But that looks like you would assign the same time value to all the rows of the new Time channel.
    If you want to create a "normal" Time channel that increases at a constant rate, you can use the ChnGenTime() function:
    ReturnValue = ChnGenTime(TimeChannel, GenTimeUnit, GenTimeXBeg, GenTimeXEnd, GenTimeStep, GenTimeMode, GenTimeNo)
    If you really do want a Time channel filled with all the same values, you can use the ChnLinGen() function and simply set the GenXBegin and GenXEnd parameters to be the same value:
    ReturnValue = ChnLinGen(TimeChannel, GenXBegin, GenXEnd, XNo, [GenXUnitPreset])
     In both cases you can use the Time channel you've already created (which as you say executes quickly) and point the output of these functions to that Time channel by using the Group/Channel syntax of the Time channel you created for the first TimeChannel parameter in either of the above functions.
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • A more efficient way to assure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a sub query in a Join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0')
This is a string where I'm finding A-Z-a-z characters and '/' and '-' characters in all 6 positions, plus there are values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is, is there a more efficient way to screen out only the values in this field that are 6 character numbers or is what I have the best I can do?
    Thanks,
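One classic pre-10g workaround, as a sketch (REGEXP_LIKE would be more direct but only arrives in 10g, and you're on 9.2): use TRANSLATE to map every digit to nothing and keep only the rows where nothing is left over:
select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
from ra_customer_trx_lines_all cta
where length(cta.SALES_ORDER) = 6
and translate(cta.SALES_ORDER, 'x0123456789', 'x') is null;  -- null means all 6 characters were digits
The dummy 'x' is needed because TRANSLATE treats an empty replacement string as NULL, and the length check already filters out NULLs and wrong-length values.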

I appreciate all of your very helpful workarounds. The cost is a little better in all cases than with my original where clause.
To address the discussion about design that's popped up from this question, I can say a few things that should clear up, at least, my situation.
First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.

  • More-efficient keyboard

    Anyone have a way to get a better onscreen keyboard going than the QWERTY one that is stock on the iPhone? I love the FITALY (fitaly.com) keyboard on my old Palm, but that company says Apple won't let them do a system keyboard. One would like to think that a Dvorak or other non-18th century keyboard could be available among the international keyboards, but I don't find one.
And yes, I've already tried the alternatives, e.g., TikiNotes, which takes me three times as long to type on as the Qwerty.
    If not, this would be a great suggestion for a new feature for Apple to incorporate.

    Just to let everybody know I am now 28 years old. I first learned how to type in typing class in junior high. I was using the Qwerty layout. The only nice thing I can say about the Qwerty layout is that it's available at any computer you want to use without any configuration.
Then I looked online for a better and more efficient way to type. That's when I learned about the Dvorak keyboard layout. This was about four years ago. I stuck with it for about two years. I felt my right hand was doing a lot more typing than my left hand. It felt too lopsided for me. But that's just my opinion. I went on the hunt for something better than Dvorak, and I found the glorious Colemak keyboard layout.
I have been typing with it ever since. My hands are a lot more comfortable and I can type faster now. It took me a month to actually get comfortable with the keyboard layout. There is a Java applet on Colemak's website,
www.colemak.com/Compare
where you can just copy and paste a body of text and click Calculate; it will analyze the typing and compare the three different keyboard layouts. I just hope it becomes an ANSI standard like Dvorak has. I hope that happens in the future.
I just want everybody to know there is a third option out there, and it's great. If Colemak ever goes away I will be going back to Dvorak. I will never learn the Qwerty keyboard layout ever again.
    Just wanted to give my two cents worth.

I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
My host program is currently using Datasocket to read and write data to and from a Field Point 2010 system. My controls and indicators are defined as datasockets. In FP I have an RT loop talking to a communication loop using RT-FIFO's. The communication loop is using Publish to send and receive via the Datasocket indicators and controls in the host program. I am running out of bandwidth in getting data to and from the host, and there is not very much data. The RT program includes 2 PID's and 2 filters. There are 10 floats going to the Host and 10 floats coming back from the Host. The desired Time Critical Loop time is 20ms. The actual loop time is about 14ms. Data is moving back and forth between Host and FP several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communication slows to once every several seconds (too slow).
Is there a more efficient method of transferring data back and forth between the Host and the FP system?
    Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the Fieldpoint.
Datasocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it. There are several things you could do. One, instead of using a datasocket for each float you want to transfer (which I assume you are doing), try using an array of floats and use just one datasocket transfer for the whole array. This is often quite a bit faster than calling a publish VI for many different variables.
    Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data.  I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to effectively use these. 
LabVIEW 8 introduced the shared variable, which, when network-enabled, makes data transfer very simple and is quite a bit faster than a comparable datasocket transfer. While faster than datasocket, shared variables are still slower than flat out using a raw TCP connection, but they are much more flexible. Also, shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

OLAP taking 5 times more space than RDBMS?

    Hello,
    I have been investigating Oracle's OLAP capabilities for about 2 weeks now in an effort to determine whether OLAP would be a feasible solution for my company's data warehousing needs.
I have created an extremely simple analytic workspace that is based on a single relational fact table. The fact table contains both the dimension values (integers) and the measures (floats). There are a total of about 300,000 dimension values, all of which reference measures (the data is perfectly dense).
    I created a dimension and a data cube using the Enterprise Manager OLAP interface. Next, I created the analytic workspace using the Analytic Workspace Manager wizard. This AW building step took an extraordinary amount of time considering the simplicity of the data. In fact, I started it on a Friday afternoon, monitored it for about 2 hours, and it still hadn't finished, so I let it run over the weekend (for this reason, I am not sure how much time exactly it took, but it's definitely quite a few hours). Is this normal? Copying the relational table takes 10 seconds, and I find it hard to believe that building an AW with the same data would take so much time.
    Finally, coming back on Monday I see that the AW was created successfully. I went into the tablespace map to look at how big the binary blob is. To my great surprise, the AW was taking up about 5 times more extents than my relational table (contrary to what I've read so far about OLAP and its supposedly very storage-efficient data cubes).
I was wondering if anybody could advise me on what I could have done wrong and/or inefficiently. Am I missing something here, or does OLAP really consume much more space compared to relational tables?
    Thanks!
    Dobo Radichkov

    Gotcha.
    Well, here are a few ideas:
1) There is a bug in the 9i OLAP where dimension loads take WAY too long. I have a dimension with approx 77,000 members. On 9i, it took hours to load. On 10g 10.1.0.3 it loads in under 1 minute. You might want to open a TAR; the bug was something about one of the patch sets not being applied right.
2) From a space standpoint, I wouldn't worry about this too much. When the AW wizard creates an "empty" workspace, it already has lots of items in it that support the metadata, etc. Also, there is the concept of "free pages" that get allocated but end up not being used (they are "temp").
    To check this out, attach your AW and then fire up the OLAP worksheet. To see "free" pages vs. "used" pages, do the following:
    show aw(pages)
    show aw(freepages)
    One last thing you can do, to see the size of all items in the AW:
    attach AWname ro first
    limit name to all
    sort name d obj(disksize)
    rpr w 40 obj(disksize)
This will show the "relative" size of the items, I think in "pages". To see how big a page is, just do:
show aw(pagesize)
    Hope this helps!
