Make queries more efficient

-- I need the week numbers and the beginning-of-week date for the current year, along with the corresponding date last year
-- This code works, but I have seen an example using LEVEL that was a lot easier to understand (I can't find it)
With LastYear as (
  select year,
         week_nbr,
         next_day( to_date( '04-jan-' || year, 'dd-mon-yyyy' ) + (week_nbr-2)*7, 'mon' )  as week_date
  from ( select '2008' year,
                rownum week_nbr
         from   all_objects where rownum <= 53 )
)
, CurrYear as (
  select year,
         week_nbr,
         next_day( to_date( '04-jan-' || year, 'dd-mon-yyyy' ) + (week_nbr-2)*7, 'mon' )  as week_date
  from ( select '2009' year,
                rownum week_nbr
         from   all_objects where rownum <= 53 )
)
--=====  Dates used by report
, Report_Dates as (
  select ly.week_nbr  as  week_nbr
       , ly.week_date as  last_year_week_date
       , cy.week_date as  curr_year_week_date
  from   LastYear ly
  inner join CurrYear cy on cy.week_nbr = ly.week_nbr
  where  cy.week_date < to_date('23-Nov-09', 'dd-Mon-yy') + 6
)
select * from Report_Dates;

I also need to get current-week sales as well as YTD sales.
I do so by selecting all data from the beginning of the year to the end of the reporting week (in the example data I use week 48, beginning of week 23-Nov-2009).
Then I group on week_nbr, mapping any date earlier than the beginning of the reporting week to 0 and otherwise grouping on the week_nbr.
This gets messy when I have to do a complex calculation with the year-to-date amount, because I have to add the partial YTD + WTD to get the true
YTD amount. How can I avoid having to do this?
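
The LEVEL-based row generator alluded to above probably looked something like this sketch: it builds the same 53 week rows from dual with CONNECT BY LEVEL instead of rownum against all_objects (this is a hypothetical reconstruction; the week-numbering convention of the original query is assumed):

-- hypothetical reconstruction of the half-remembered LEVEL example
select '2009' as year,
       level  as week_nbr,
       next_day( to_date('04-jan-2009', 'dd-mon-yyyy') + (level-2)*7, 'mon' ) as week_date
from   dual
connect by level <= 53;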
begin
  execute immediate 'drop table my_hotdog_sales';
exception when others then null;
end;
/
CREATE TABLE MY_HOTDOG_SALES
   (    "STOREID" NUMBER NOT NULL ENABLE,
        "BUSI_DATE" DATE,
        "SALES" NUMBER,
        "GUESTS" NUMBER
   );
delete from my_hotdog_sales;
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('29-DEC-08','DD-MON-RR'),1595.62,370);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('05-JAN-09','DD-MON-RR'),1732.74,406);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('12-JAN-09','DD-MON-RR'),1708.01,406);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('19-JAN-09','DD-MON-RR'),1849.55,433);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('26-JAN-09','DD-MON-RR'),1641.34,378);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('02-FEB-09','DD-MON-RR'),2037.7,459);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('09-FEB-09','DD-MON-RR'),2209.37,504);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('16-FEB-09','DD-MON-RR'),1899.38,407);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('23-FEB-09','DD-MON-RR'),2072.33,446);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('02-MAR-09','DD-MON-RR'),1990.67,432);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('09-MAR-09','DD-MON-RR'),1940.02,430);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('16-MAR-09','DD-MON-RR'),1987.58,445);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('23-MAR-09','DD-MON-RR'),1873.04,428);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('30-MAR-09','DD-MON-RR'),2102.02,457);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('06-APR-09','DD-MON-RR'),1877.87,415);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('13-APR-09','DD-MON-RR'),1861.76,415);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('20-APR-09','DD-MON-RR'),1962.48,440);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('27-APR-09','DD-MON-RR'),2032.6,447);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('04-MAY-09','DD-MON-RR'),2213.7,481);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('11-MAY-09','DD-MON-RR'),2017.93,446);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('18-MAY-09','DD-MON-RR'),2025.67,452);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('25-MAY-09','DD-MON-RR'),1904.92,428);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('01-JUN-09','DD-MON-RR'),2219.46,471);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('08-JUN-09','DD-MON-RR'),2167.82,478);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('15-JUN-09','DD-MON-RR'),2056.53,468);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('22-JUN-09','DD-MON-RR'),2028.29,453);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('29-JUN-09','DD-MON-RR'),1870.82,392);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('06-JUL-09','DD-MON-RR'),2161.31,463);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('13-JUL-09','DD-MON-RR'),1929.36,419);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('20-JUL-09','DD-MON-RR'),1960.85,425);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('27-JUL-09','DD-MON-RR'),1722.19,399);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('03-AUG-09','DD-MON-RR'),1904.76,425);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('10-AUG-09','DD-MON-RR'),2108.2,473);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('17-AUG-09','DD-MON-RR'),1803.95,413);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('24-AUG-09','DD-MON-RR'),1889.63,431);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('31-AUG-09','DD-MON-RR'),1937.33,415);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('07-SEP-09','DD-MON-RR'),1749.52,395);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('14-SEP-09','DD-MON-RR'),1964.21,437);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('21-SEP-09','DD-MON-RR'),1954.87,436);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('28-SEP-09','DD-MON-RR'),2110.1,474);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('05-OCT-09','DD-MON-RR'),1879.43,409);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('12-OCT-09','DD-MON-RR'),1878.7,393);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('19-OCT-09','DD-MON-RR'),2101.44,473);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('26-OCT-09','DD-MON-RR'),1822.08,405);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('02-NOV-09','DD-MON-RR'),2167.69,467);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('09-NOV-09','DD-MON-RR'),1938.87,420);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('16-NOV-09','DD-MON-RR'),2069.88,460);
Insert into MY_HOTDOG_SALES (STOREID,BUSI_DATE,SALES,GUESTS) values (243,to_date('23-NOV-09','DD-MON-RR'),1545.23,328);
With LastYear as (
  select year,
         week_nbr,
         next_day( to_date( '04-jan-' || year, 'dd-mon-yyyy' ) + (week_nbr-2)*7, 'mon' )  as week_date
  from ( select '2008' year,
                rownum week_nbr
         from   all_objects where rownum <= 53 )
)
, CurrYear as (
  select year,
         week_nbr,
         next_day( to_date( '04-jan-' || year, 'dd-mon-yyyy' ) + (week_nbr-2)*7, 'mon' )  as week_date
  from ( select '2009' year,
                rownum week_nbr
         from   all_objects where rownum <= 53 )
)
--=====  Dates used by report
, Report_Dates as (
  select ly.week_nbr  as  week_nbr
       , ly.week_date as  last_year_week_date
       , cy.week_date as  curr_year_week_date
  from   LastYear ly
  inner join CurrYear cy on cy.week_nbr = ly.week_nbr
  where  cy.week_date < to_date('23-Nov-09', 'dd-Mon-yy') + 6
)
--====  Curr Year Totals
, curr_year_totals as (
  select storeid
       , case when report_dates.curr_year_week_date < trunc(to_date('23-Nov-09', 'dd-Mon-yy'), 'iw') then 0
              else week_nbr
         end              as  week_nbr
       , SUM(sales)       as  sales
       , SUM(guests)      as  guests
  from   my_hotdog_sales  csh
  -- busi_date is already a DATE, so no to_date() conversion is needed here
  inner join report_dates on report_dates.curr_year_week_date = trunc(busi_date, 'iw')
  where  trunc(busi_date, 'iy') = trunc(to_date('23-Nov-09', 'dd-Mon-yy'), 'iy')
  group by storeid,
           case when report_dates.curr_year_week_date < trunc(to_date('23-Nov-09', 'dd-Mon-yy'), 'iw') then 0
                else week_nbr
           end
)
--====  Curr Year Totals pivoted to include week and year data
, pivoted_curr_year_totals as (
  select storeid
       , max( decode( week_nbr, 48, sales,  0 ) )   as  wtd_sales
       , max( decode( week_nbr, 0,  sales,  0 ) )   as  ytd_sales
       , max( decode( week_nbr, 48, guests, 0 ) )   as  wtd_guests
       , max( decode( week_nbr, 0,  guests, 0 ) )   as  ytd_guests
  from   curr_year_totals
  group by storeid
)
select storeid
     , sum(curr.wtd_sales)                       as curr_wtd_sales
     , sum(curr.ytd_sales  + curr.wtd_sales)     as curr_ytd_sales
     , sum(curr.wtd_guests)                      as curr_wtd_guests
     , sum(curr.ytd_guests - curr.wtd_guests)    as wtd_guests_diff
from  pivoted_curr_year_totals  curr
group by storeid
order by storeid;

Result:
STOREID    CURR_WTD_SALES    CURR_YTD_SALES    CURR_WTD_GUESTS    WTD_GUESTS_DIFF
243        1545.23           93478.82          328                20091
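
One way to avoid adding the partial YTD and WTD back together is to skip the 0-bucket pivot entirely and let an analytic running SUM carry the year-to-date total. A minimal sketch against the test table above (just the YTD/WTD part, not the full last-year comparison):

with weekly as (
  select storeid,
         trunc(busi_date, 'iw')  as week_start,
         sum(sales)              as sales,
         sum(guests)             as guests
  from   my_hotdog_sales
  where  trunc(busi_date, 'iy') = trunc(to_date('23-Nov-09', 'dd-Mon-yy'), 'iy')
  group  by storeid, trunc(busi_date, 'iw')
)
select storeid,
       week_start,
       sales   as wtd_sales,
       -- running total: a true YTD that already includes the reporting week
       sum(sales)  over (partition by storeid order by week_start) as ytd_sales,
       guests  as wtd_guests,
       sum(guests) over (partition by storeid order by week_start) as ytd_guests
from   weekly
order  by storeid, week_start;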

The driving table is the anchor point of the execution plan, i.e. the database says "to satisfy the query, first of all I must get the row(s) from _this_ table, and that data will allow me to get the row(s) from the next table."
So the driving table ought to be the table which will return the highest percentage of successful hits. This is usually (but not always) the smallest table.
An example: to find all the items sold by salesmen in a given department requires accessing four tables - PRODUCTS, ORDER_ITEMS, ORDERS and EMP. EMP is the natural candidate for the driving table because it is the only table that can answer the question "which SALESMEN are assigned to dept 10?" Once you have their emp_nos you can find all the ORDERS that have been closed by those employees, and so on.
In the RULE-BASED OPTIMIZER the driving table is the one on the right of the FROM clause, i.e. SELECT ... FROM PRODUCTS, ORDER_ITEMS, ORDERS, EMP WHERE...
From this you can see that you need to understand your data in order to write the query properly.
In the COST-BASED OPTIMIZER the database will assess the statistics to decide which should be the driving table. For this to work you need to have up-to-date statistics for all the tables. Otherwise the CBO might decide to drive from the PRODUCTS table, and that would have a serious impact on the performance of your query.
HTH, APC
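
For the CBO point above, a minimal sketch of gathering statistics and then checking which driving table the optimizer actually chose (the table and column names are placeholders taken from the EMP/ORDERS example, not a real schema):

-- gather up-to-date statistics so the CBO can pick a sensible driving table
begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'EMP');
  dbms_stats.gather_table_stats(ownname => user, tabname => 'ORDERS');
end;
/
-- inspect the join order the optimizer chose
explain plan for
  select e.empno, o.order_id
  from   emp e
  inner join orders o on o.empno = e.empno
  where  e.deptno = 10;

select * from table(dbms_xplan.display);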

Similar Messages

  • Make Code More Efficient

    I got this code to play some audio clips and it works alright. The only issue is that when I call the play method it lags the rest of my game pretty badly. Is there anything in the play method you guys think could be moved to the constructor to make it more efficient?
    package main;
    import java.io.*;
    import javax.sound.sampled.*;
    public class Sound {
        private AudioFormat format;
        private byte[] samples;
        private String name;
        public Sound(String filename) {
            name = filename;
            try {
                AudioInputStream stream = AudioSystem.getAudioInputStream(new File("sounds/" + filename));
                format = stream.getFormat();
                samples = getSamples(stream);
            } catch (Exception e) { System.out.println(e); }
        }
        public byte[] getSamples() {
            return samples;
        }
        private byte[] getSamples(AudioInputStream audioStream) {
            int length = (int) (audioStream.getFrameLength() * format.getFrameSize());
            byte[] samples = new byte[length];
            DataInputStream is = new DataInputStream(audioStream);
            try {
                is.readFully(samples);
            } catch (Exception e) { System.out.println(e); }
            return samples;
        }
        public void play() {
            InputStream stream = new ByteArrayInputStream(getSamples());
            int bufferSize = format.getFrameSize() * Math.round(format.getSampleRate() / 10);
            byte[] buffer = new byte[bufferSize];
            SourceDataLine line;
            try {
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                line = (SourceDataLine) AudioSystem.getLine(info);
                line.open(format, bufferSize);
            } catch (Exception e) { System.out.println(e); return; }
            line.start();
            try {
                int numBytesRead = 0;
                while (numBytesRead != -1) {
                    numBytesRead = stream.read(buffer, 0, buffer.length);
                    if (numBytesRead != -1)
                        line.write(buffer, 0, numBytesRead);
                }
            } catch (Exception e) { System.out.println(e); }
            line.drain();
            line.close();
        }
        public String getName() {
            return name;
        }
    }

    I don't know much about the guts of Flex, but I assume it's
    based on Java's design etc.
    Storing event.target.selectedItem in an object should not be
    any more efficient than calling event.target.selectedItem. The object
    will simply be a pointer of sorts to event.target.selectedItem. At
    no point in the event.target.selectedItem call are you doing a
    search or something, so storing the result will not result in any
    big savings.
    Now, if you were doing something like
    array.findItem(something) 4 times, then yes, it would be to your
    advantage to store the data.
    Keep in mind that storing event.target.selectedItem in an
    object will probably break bindings... that may or may not be a
    problem. Object doesn't support binding. There is a subclass of
    Object that does, but I forget which.
    Just a suggestion based on my knowledge of how data is stored
    in an object-oriented language... this may not be the case in
    Flex.

  • How to make it more efficient

    Hi,
    I am working on AQ where I am just sending and receiving simple messages. But the performance is very poor. It takes around 35 seconds to send (enqueue) just 100 messages, which is not acceptable for our project. Can someone help me make it more efficient? I am using JMS for sending and receiving messages.
    Thanks,
    Sateesh

    Bhagath,
    Thanks for your help.
    Oracle server we are using is 8.1.7. We are using JDBC client that ships with Oracle client (classes12.zip).
    Right now we are working on point to point messages.
    I am just wondering whether I need to do any tuning on the server.
    Your help is greatly appreciated.
    Here I am pasting sample code that I wrote which may help in finding the problem.
    Thank you so much once again for your help.
    -Sateesh
    import java.sql.*;
    import javax.jms.*;
    import java.io.*;
    import java.util.Properties;
    import oracle.AQ.*;
    import oracle.jms.*;
    public class CDRQueueSender {
        private final String DB_CONNECTION = "jdbc:oracle:thin:@dev1:1521:dev";
        protected final String DB_AQ_ADMIN_NAME = "dev78";
        private final String DB_AQ_ADMIN_PASSWORD = "dev78";
        /** DB AQ user agent name and password */
        private final String DB_AQ_USER_NAME = "dev78";
        private final String DB_AQ_USER_PASSWORD = "dev78";
        private QueueConnectionFactory queueConnectionFactory = null;
        private QueueConnection connection = null;
        private QueueSession session = null;
        private Queue sendQueue;
        private QueueSender qSender;
        public CDRQueueSender() {
            try {
                Properties info = new Properties();
                info.put(DB_AQ_USER_NAME, DB_AQ_USER_PASSWORD);
                queueConnectionFactory = AQjmsFactory
                    .getQueueConnectionFactory(DB_CONNECTION, info);
                connection = queueConnectionFactory
                    .createQueueConnection(DB_AQ_USER_NAME, DB_AQ_USER_PASSWORD);
                session = connection.createQueueSession(true, Session.AUTO_ACKNOWLEDGE);
                connection.start();
                sendQueue = ((AQjmsSession) session).getQueue(DB_AQ_ADMIN_NAME, "CDR_QUEUE");
                qSender = session.createSender(sendQueue);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
        public boolean sendCDRMessage(CDRMessage messageData)
                throws JMSException, SQLException {
            // CDRMessage is the poster's own Serializable message class (not shown)
            ObjectMessage objectMessage = session.createObjectMessage(messageData);
            try {
                qSender.send(objectMessage);
                session.commit();
            } catch (Exception e) {
                System.out.println(e.getMessage());
                e.printStackTrace();
            }
            return true;
        }
        public void close() throws JMSException {
            session.close();
            connection.close();
        }
        public static void main(String[] args) throws SQLException, JMSException {
            int count = 0;
            CDRQueueSender qSender = new CDRQueueSender();
            long startTime = System.currentTimeMillis();
            long endTime;
            CDRMessage message;
            while (count < 100) {
                message = new CDRMessage("filename", 20, "This is testing", count);
                qSender.sendCDRMessage(message);
                count++;
            }
            //qSender.sessionCommit();
            endTime = System.currentTimeMillis();
            System.out.println("time taken to process 100 records is "
                + ((endTime - startTime) / 1000) + " seconds");
            qSender.close();
        }
    }

  • How to write queries more efficiently? Please Help a sinking guy

    Hello Query Gurus,
    I am having issues writing SQL statements efficiently. Please help/guide me to learn how to write SQL more efficiently and logically. At my work my team makes fun of me every day when it comes to coding, so please guide me to become an efficient query writer. I am starving and ready to do hard work but need to know the correct path.
    Thanks in Advance.

    You could pick up examples from introductory books on Oracle.
    For example the Certification Guides for the Oracle SQL Fundamentals Guide  (1Z0-051 http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&get_params=p_exam_id:1Z0-051&p_org_id=&lang=    ) .  Search bookstores / amazon.com  for books for 1Z0-051
    You could also look at Jason Price's book Oracle Database 11g SQL  :  http://www.amazon.com/Oracle-Database-11g-SQL-Press/dp/0071498508/ref=sr_1_1?ie=UTF8&qid=1372742972&sr=8-1&keywords=Jason+Price
    Hemant K Chitale

  • Making Sound More Efficient

    I got this code to play some audio clips and it works alright. The only issue is that when I call the play method it lags the rest of my game pretty badly. Is there anything in the play method you guys think could be moved to the constructor to make it more efficient?
    btw I know I posted this in the sound section already but it had like 17 views there and no responses after like a week on the front page so I decided to bring it here.
    package main;
    import java.io.*;
    import javax.sound.sampled.*;
    public class Sound {
        private AudioFormat format;
        private byte[] samples;
        private String name;
        public Sound(String filename) {
            name = filename;
            try {
                AudioInputStream stream = AudioSystem.getAudioInputStream(new File("sounds/" + filename));
                format = stream.getFormat();
                samples = getSamples(stream);
            } catch (Exception e) { System.out.println(e); }
        }
        public byte[] getSamples() {
            return samples;
        }
        private byte[] getSamples(AudioInputStream audioStream) {
            int length = (int) (audioStream.getFrameLength() * format.getFrameSize());
            byte[] samples = new byte[length];
            DataInputStream is = new DataInputStream(audioStream);
            try {
                is.readFully(samples);
            } catch (Exception e) { System.out.println(e); }
            return samples;
        }
        public void play() {
            InputStream stream = new ByteArrayInputStream(getSamples());
            int bufferSize = format.getFrameSize() * Math.round(format.getSampleRate() / 10);
            byte[] buffer = new byte[bufferSize];
            SourceDataLine line;
            try {
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                line = (SourceDataLine) AudioSystem.getLine(info);
                line.open(format, bufferSize);
            } catch (Exception e) { System.out.println(e); return; }
            line.start();
            try {
                int numBytesRead = 0;
                while (numBytesRead != -1) {
                    numBytesRead = stream.read(buffer, 0, buffer.length);
                    if (numBytesRead != -1)
                        line.write(buffer, 0, numBytesRead);
                }
            } catch (Exception e) { System.out.println(e); }
            line.drain();
            line.close();
        }
        public String getName() {
            return name;
        }
    }

    BammRocket wrote:
    I got this code to play some audio clips and it works alright. The only issue is that when I call the play method it lags the rest of my game pretty badly. Is there anything in the play method you guys think could be moved to the constructor to make it more efficient?
    You got this code from chapter 4 of [Developing Games in Java|http://www.brackeen.com/javagamebook/], right?
    Are you playing the sounds in a different thread?
    If not, your program will just freeze until the sound is finished.

  • Making code more efficient

    I am having a lot of trouble getting my code to work fast enough. I have 4 sonic anemometers and currently my code is only efficient enough to collect data from one. I have programs that run 2 sonic anemometers and save the data, but bytes pile up at the port. The instruments are in unprompted mode and send data at 10 Hz. I find that using the wait command does not work well for some reason, so I have the loop continuously running. The first version of my code (V3a) worked for one sonic and bytes did not pile up at the port. So I made (V3b) and tried to make a more efficient program. I tried separating things into multiple loops but it still does not work well, and I was hoping to get some ideas to make things work better.
    I attached the 2 versions of my code. I am not sure if I should attach the subVIs; let me know.
    Thanks!
    Attachments:
    fo3csat_unprompted_v3a.vi ‏23 KB
    fo3csat_unprompted_v3b.vi ‏27 KB

    I'm going to ask you a very important question about that occurrence in the top loop: by using the occurrence the way you have, have you eliminated the possibility of a race condition? The answer is NO... study it, and you'll see why. If you can't figure it out, post back and I'll tell you why the race condition is still present.
    Also, if you ever are coding and thinking to yourself, "WOW, I can't believe the guys who developed LabVIEW made it so hard to do this simple task!", odds are, you're making it hard yourself! Rather than making 4 parallel branches of a numeric, converting to an ASCII string, then reinterpreting as 4 separate numerics, consider the following code. It's nearly equivalent, except my seconds has more significant digits (maybe good, maybe not):
    I'm going to argue that even splitting the discrete components of time is unnecessary, unless your logging protocol specifically requires that format. Instead, simply write the timestamp directly to file with the data points.
    Also, remember to use a standard 4x2x2x4 connector pane on your SubVIs. Refer to the LabVIEW Style Guide (search, and you will find it).
    Finally, I'm going to disagree with the other guys: it's not evident why you split the one loop into three loops. The only "producer/consumer" architecture has the top loop as the "producer", and all it's producing is a timestamp! This is not a typical or intended use of the producer/consumer architecture. Your VI is intended to only save a data point once every 30 minutes (presumably), so it's no big deal if both of your serial devices are in the same loop.
    The single biggest problem, and why your VI is completely railing out a CPU core (you didn't state this, but I'm guessing the reason you're posting is because you noticed a core running at 100%!), is the unmetered loop rate... like the other guys say, drop a "Wait Until Next ms Multiple" and slow the loop rate down significantly. 10 msec is probably too fast for your application... actually, a loop rate of once every 30 minutes (that's 1800000 msec) might be best.
    Let us know how it goes!

  • Making it more efficient

    I will be using this function
                   private function inter_1a(evt:MouseEvent):void {
                        var popUpInterDisplay:interactionexporter;
                        popUpInterDisplay = new interactionexporter();
                        popUpInterDisplay.source = "lessons/lessonone/interactions/inter1a.swf";
                        PopUpManager.addPopUp(popUpInterDisplay, this, true);
                   }
    But will use it at least 30 times. For example:
                   private function inter_1b(evt:MouseEvent):void {
                        var popUpInterDisplay:interactionexporter;
                        popUpInterDisplay = new interactionexporter();
                        popUpInterDisplay.source = "lessons/lessonone/interactions/inter1b.swf";
                        PopUpManager.addPopUp(popUpInterDisplay, this, true);
                   }
                   private function inter_1c(evt:MouseEvent):void {
                        var popUpInterDisplay:interactionexporter;
                        popUpInterDisplay = new interactionexporter();
                        popUpInterDisplay.source = "lessons/lessonone/interactions/inter1c.swf";
                        PopUpManager.addPopUp(popUpInterDisplay, this, true);
                   }
        ETC......
    Any ideas on how to make this more efficient?
    Thanks

    This can indeed be written more efficiently. How are these functions called - by buttons or by an item in a datagrid?
    If they are buttons, you can store the name of the swf in their id properties:
    private function inter(evt:MouseEvent):void {
          var popUpInterDisplay:interactionexporter = new interactionexporter();
          popUpInterDisplay.source = "lessons/lessonone/interactions/" + evt.target.id + ".swf";
          PopUpManager.addPopUp(popUpInterDisplay, this, true);
    }
    In case of a datagrid, you can use a property of the selectedItem:
    private function inter(evt:MouseEvent):void {
           var popUpInterDisplay:interactionexporter = new interactionexporter();
           popUpInterDisplay.source = "lessons/lessonone/interactions/" + evt.currentTarget.selectedItem.propName + ".swf";
           PopUpManager.addPopUp(popUpInterDisplay, this, true);
    }
    Dany

  • Premiere Pro is so slow to export - make more efficient?

    So I'm shooting 1080p 30fps Canon T2i footage and want to export 1080p for youtube.
    I have maybe 25 five-second clips and each requires:
    1. correct barrel distortion (curvature -3)
    2. increase color saturation
    3. luminance curve
    4. sharpen
    5. fade in / fade out transitions
    6. titles
    For a 3 minute video it takes 1hr to export! This is on an i5 and SSD.
    Anything I can do to speed this up? Maybe some more efficient presets? Right now I'm using separate presets for each of the effects above. Thanks!

    FAQ: How do I make Premiere Pro faster?

  • Suggests for a more efficient query?

    I have a client (customer) that uses a 3rd party software to display graphs of their systems. The clients are constantly asking me (the DBA consultant) to fix the database so it runs faster. I've done as much tuning as I can on the database side. It's now time to address the application issues. The good news is my client is the 4th largest customer of this 3rd party software and the software company has listened and responded in the past to suggestions.
    All of the tables are set up the same, with the first column being a DATE datatype and the remaining columns being values for different data points (data_col1, data_col2, etc.). Oh, and that first date column is always named "timestamp" in LOWER case, so we have to use double quotes around that column name all of the time. Each table collects one record per minute per day per year. There are 4 database systems, about 150 tables per system, averaging 20 data columns per table. I did partition each table by month and added a local index on the "timestamp" column. That brought the full table scans down to full partition index scans.
    All of the SELECT queries look like the following with changes in the column name, table name and date ranges. (Yes, we will be addressing the issue of incorporating bind variables for the dates with the software provider.)
    Can anyone suggest a more efficient query? I've been trying some analytic function queries but haven't come up with the correct results yet.
    SELECT "timestamp" AS "timestamp", "DATA_COL1" AS "DATA_COL1"
    FROM "T_TABLE"
    WHERE "timestamp" >=
    (SELECT MIN("tb"."timestamp") AS "timestamp"
    FROM (SELECT MAX("timestamp") AS "timestamp"
    FROM "T_TABLE"
    WHERE "timestamp" <
    TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
    UNION
    SELECT MIN("timestamp")
    FROM "T_TABLE"
    WHERE "timestamp" >=
    TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
    WHERE NOT "timestamp" IS NULL)
    AND "timestamp" <=
    (SELECT MAX("tb"."timestamp") AS "timestamp"
    FROM (SELECT MIN("timestamp") AS "timestamp"
    FROM "T_TABLE"
    WHERE "timestamp" >
    TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')
    UNION
    SELECT MAX("timestamp")
    FROM "T_TABLE"
    WHERE "timestamp" <=
    TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
    WHERE NOT "timestamp" IS NULL)
    ORDER BY "timestamp"
    Here are the queries for a sample table to test with:
    CREATE TABLE T_TABLE
    ( "timestamp" DATE,
      DATA_COL1 NUMBER
    );
    INSERT INTO T_TABLE
    (SELECT TO_DATE('01/20/2006', 'MM/DD/YYYY') + (LEVEL-1) * 1/1440,
            LEVEL * 0.1
     FROM dual CONNECT BY 1=1
     AND LEVEL <= (TO_DATE('01/25/2006','MM/DD/YYYY') - TO_DATE('01/20/2006', 'MM/DD/YYYY'))*1440);
    Thanks.

    No need for analytic functions here (they'll likely be slower).
    1. No need for UNION ... use UNION ALL.
    2. No need for WHERE NOT "timestamp" IS NULL ... the MIN and MAX will take care of nulls.
    3. Ask if they really need the data sorted ... the s/w with the graphs may do its own sorting,
    ... in which case take the ORDER BY out too.
    4. Make sure to have indexes on "timestamp".
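    Applying points 1 and 2 above, the lower-bound subquery might be rewritten like this sketch (the upper bound is symmetrical); UNION ALL skips the sort/duplicate elimination of UNION, and the NOT ... IS NULL filter is unnecessary because MIN/MAX already ignore NULLs:
    SELECT MIN("tb"."timestamp") AS "timestamp"
    FROM (SELECT MAX("timestamp") AS "timestamp"
          FROM "T_TABLE"
          WHERE "timestamp" < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
          UNION ALL
          SELECT MIN("timestamp")
          FROM "T_TABLE"
          WHERE "timestamp" >= TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"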
    What you want to see for those innermost MAX/MIN subqueries are executions like:
    03:19:12 session_148> SELECT MAX(ts) AS ts
    03:19:14   2  FROM "T_TABLE"
    03:19:14   3  WHERE ts < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS');
    TS
    21-jan-2006 00:12:00
    Execution Plan
       0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2.0013301108 Card=1 Bytes=9)
       1    0   SORT (AGGREGATE)
       2    1     FIRST ROW (Cost=2.0013301108 Card=1453 Bytes=13077)
       3    2       INDEX (RANGE SCAN (MIN/MAX))OF 'T_IDX' (INDEX) (Cost=2.0013301108 Card=1453 Bytes=13077)

  • Implicit Join or Explicit Join...which is more efficient???

    Which is more efficient?
    An IMPLICIT JOIN
    SELECT TableA.ColumnA1,
    TableB.ColumnB2
    FROM TableA,
    TableB
    WHERE TableA.ColumnA1 = TableB.ColumnB1
    Or....An EXPLICIT JOIN
    SELECT TableA.ColumnA1,
    TableB.ColumnB2
    FROM TableA
    INNER JOIN TableB
    ON TableA.ColumnA1 = TableB.ColumnB1
    I have to write a pretty extensive query with many parts, and I just want to try to make sure it is as efficient as possible. Can I EXPLAIN this in SQL Navigator as well to find out?
    Thanks in advance for your review and hopeful for a reply.
    PSULionRP
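    On the cost-based optimizer the two syntaxes are normally transformed into the same plan, so the join style is mainly a readability choice; a quick way to confirm this on your own tables is to EXPLAIN both forms (a sketch, using the placeholder table names from the question):
    EXPLAIN PLAN FOR
    SELECT TableA.ColumnA1, TableB.ColumnB2
    FROM TableA INNER JOIN TableB ON TableA.ColumnA1 = TableB.ColumnB1;
    SELECT * FROM table(DBMS_XPLAN.DISPLAY);
    -- repeat with the FROM TableA, TableB ... WHERE form; the plans should match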

    Alex Nuijten wrote:
    The Partition Outer Join is very handy, but it's an Oracle-ism - Not ANSI ... but then again who cares? ;)
    Ooh, "New thing learnt today" - check.
    Oracle roolz! *{;-D

  • Linking from one PDF to another: Is there a more efficient way?

    Some background first:
    We make a large catalog (400 pages) in InDesign and it's updated every year. We are a wholesale distributor and our pricing changes, so we also make a price list with price ref #s that correspond with #s printed in the main catalog. Last year we also made this catalog interactive so that a pdf of it could be browsed using links and bookmarks. This is not too difficult using InDesign and making any adjustments in the exported PDF. Here is the part that becomes tedious, and is especially so this year:
    We also set up links in the main catalog that go to the price list pdf - opening the page with the item's price ref # and prices... Here's my biggest issue - I have not found any way to do this except making links one at a time in Acrobat Pro (and setting various specifications like focus and action and which page in the price list to open). Last year this wasn't too bad because we used only one price list. It still took some time to go through and set up 400-500 links individually.
    This year we've simplified our linking a little by putting only one link per page, but that is still 400 links. And this year I have 6 different price lists (price tiers...) to link to the main catalog pdf. That's in the neighborhood of 1200-1500 rounds of double-clicking the link (button) to open Button Properties, clicking the Actions tab, clicking Add... "Go to page view", setting the link to the other pdf page, clicking edit, changing Open in to "New Window", and setting the Zoom. This isn't a big deal if you only have a few Next, Previous, Home kind of buttons... but it's huge when you have hundreds of links. Surely there's a better way?
    Is there any way in Acrobat or InDesign to more efficiently create and edit hundreds of links from one pdf to another?
    If anything is unclear and my question doesn't make sense please ask. I will do my best to help you answer my questions.
    Thanks

    George, I looked at the article talking about the fdf files and it sounds interesting. I've gathered that I could manipulate the pdf links by making an fdf file and importing that into the PDF, correct?
    Now, I wondered - can I export an fdf from the current pdf, change what is in there, and import it back into the pdf? I've tried this (Forms > More Form Options > Manage Form Data > Export Data) and then opened the fdf in a text editor, but I see nothing related to the document's links... I assume this is because the export is 'form' data to begin with - but is there a way to export something with link data like that described in the article link you provided?
    Thanks

  • I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
    My host program is currently using Datasocket to read and write data to and from a Field Point 2010 system. My controls and indicators are defined as datasockets. In FP I have an RT loop talking to a communication loop using RT-FIFOs. The communication loop is using Publish to send and receive via the Datasocket indicators and controls in the host program. I am running out of bandwidth in getting data to and from the host, and there is not very much data. The RT program includes 2 PIDs and 2 filters. There are 10 floats going to the Host and 10 floats coming back from the Host. The desired Time Critical Loop time is 20 ms. The actual loop time is about 14 ms. Data is moving back and forth between Host and FP several times a second without regularity (not a problem). If I add a couple more floats each direction, the communications goes to once every several seconds (too slow).
    Is there a more efficient method of transferring data back and forth between the Host and the FP system?
    Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the FieldPoint.
    Datasocket is considered a pretty slow method of moving data between hosts and targets, as it has quite a bit of overhead associated with it. There are several things you could do. One, instead of using a datasocket for each float you want to transfer (which I assume you are doing), try using an array of floats and use just one datasocket transfer for the whole array. This is often quite a bit faster than calling a publish VI for many different variables.
    Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data. I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to effectively use these.
    LabVIEW 8 introduced the shared variable, which, when network enabled, makes data transfer very simple and is quite a bit faster than a comparable datasocket transfer. While faster than datasocket, shared variables are still slower than just flat out using a raw TCP connection, but they are much more flexible. Also, shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • Lib in different drives, is it more efficient?

    How much more efficient is it for the CPU to have your lib spread out on different external drives? E.g. strings on one drive, perc on another drive, etc.
    Thanks,
    Guy

    Thanks for the reply, but I wanted to go one step further since I often write in a symphonic texture:
    with each section (strings, WW, brass, perc, synth, etc.) on a different drive, will that make my system more efficient than if I put all the samples on one drive? I'm concerned because it involves some investment, but if it's confirmed that it will make a difference then I don't mind investing in a couple more drives.

  • Make pacman more verbose?

    Is there a way to make pacman more verbose in showing what packages are being installed or updated?
    For example, when doing a system update, you get a long list of packages that will be downloaded and installed. Most of them are packages you already have, but some may be new packages, pulled in as a new dependency of something else. I would like to more easily see if a package is a new install or an update, and what is causing the new packages to be installed.
    I used to use FreeBSD, and its ports system did this. You do a system update, and it would list all of the packages to be installed, one per line. Each line indicated whether it was an update or a new install; if it was an update it showed the old -> new version numbers, and if it was a new install it showed what package depended on it.
    I haven't been able to find a way to make pacman do this, though I would be happy to be proven wrong. I've toyed with the idea of getting my hands dirty and writing some pacman patches to add this as an option, but I'd like to see if there are other solutions other people know about.

    Using yaourt -Syua:
    ==> Package upgrade only (new release):
    extra/libgnome-keyring 3.6.0-1 1 -> 2
    extra/mesa 9.1-2 2 -> 3
    ==> Software upgrade (new version) :
    core/filesystem 2013.01-3 -> 2013.03-2
    core/iptables 1.4.16.3-1 -> 1.4.18-1
    core/iproute2 3.7.0-1 -> 3.8.0-1
    core/libffi 3.0.11-1 -> 3.0.12-1
    core/systemd 197-4 -> 198-1
    core/systemd-sysvcompat 197-4 -> 198-1
    core/tzdata 2013a-1 -> 2013b-1
    extra/dbus-glib 0.100-1 -> 0.100.2-1
    extra/gconf 3.2.5-3 -> 3.2.6-1
    extra/git 1.8.1.5-1 -> 1.8.2-1
    aur/google-chrome 25.0.1364.160-1 -> 25.0.1364.172-1
    ==> Continue upgrade ? [Y/n]
    ==> [V]iew package detail [M]anually select packages
    ==> --------------------------------------------------
    Also, if any new package is installed as a dependency of some other package, you are notified about that as well.

  • A more efficient way to assure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a subquery in a join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0');
    This is a string where I'm finding A-Z and a-z characters and '/' and '-' characters in all 6 positions, plus there are values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is, is there a more efficient way to screen out only the values in this field that are 6 character numbers or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than my original where clause.
    To address the discussion about design that's popped up from this question, I can say a few things that should clear up my situation, at least.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.
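    For reference, one 9i-compatible workaround of the kind alluded to above is a TRANSLATE-based filter (a sketch; table and column names are taken from the question):
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
    from   ra_customer_trx_lines_all cta
    where  length(cta.SALES_ORDER) = 6
    -- map every digit to nothing; a NULL result means the string was all digits
    -- (the leading 'x'/'x' pair keeps the replacement set from being empty)
    and    translate(cta.SALES_ORDER, 'x0123456789', 'x') is null;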
