Code optimizing

Hi, I have a question:
I'm going to make a 2D game, so I need some layers (player layer, scenario layer, decorative layer, etc.). How can I make these layers? I think I can use transparent JPanels, is that right? Or is a JPanel not fast enough for this?

JPanel is likely to be fast enough, yes. In any case, my advice: code first, optimize later...
Whatever choice you make, later changes won't take long as long as you divide your application nicely into classes (e.g. MVC should be fine).
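If it helps, here is a minimal sketch of the transparent-panel idea using Swing's JLayeredPane. The panel names, sizes and layer numbers are made up for illustration; a real game would override paintComponent() in each layer rather than leave them empty:

    import java.awt.*;
    import javax.swing.*;

    public class LayerDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame frame = new JFrame("Layer demo");
                    JLayeredPane layers = new JLayeredPane();
                    layers.setPreferredSize(new Dimension(640, 480));

                    JPanel scenario = new JPanel();          // opaque background layer
                    scenario.setBackground(Color.DARK_GRAY);
                    scenario.setBounds(0, 0, 640, 480);

                    JPanel players = new JPanel();           // drawn above the scenario
                    players.setOpaque(false);                // transparent, so lower layers show through
                    players.setBounds(0, 0, 640, 480);

                    JPanel decorations = new JPanel();       // topmost decorative layer
                    decorations.setOpaque(false);
                    decorations.setBounds(0, 0, 640, 480);

                    layers.add(scenario, Integer.valueOf(0));    // higher layer numbers are painted on top
                    layers.add(players, Integer.valueOf(1));
                    layers.add(decorations, Integer.valueOf(2));

                    frame.add(layers);
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                }
            });
        }
    }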

Similar Messages

  • I need help, I have a lot of questions too (swing, code, optimizing...)

    Hi, I'm having a little trouble with aligning my GUI correctly. Here are screenshots from my laptop which runs OS 10.4 & my PC running Windows 2000:
    http://kavon89.googlepages.com/clipper.png <= On the laptop
    http://kavon89.googlepages.com/clipper_windows2000.jpg <= On the PC
    I don't know why the OS 10.4 one looks almost perfect and the Win 2k one is a disaster. I played with it on my laptop, and once I came over to my PC to post about something else, I noticed this issue on my Windows box. My guess is that either the JButton on OS 10.4 is smaller than the one for Win 2k, causing the automatic layout to go wrong, or maybe the pixels are larger on my PC than on my laptop.
    I was thinking that I should make different .jar's for each OS since there are GUI issues... or is there a way to make it universal? (I read somewhere about GridBag?)
    Next, is there any way to reduce the space between the JList box closest to the top of the frame and the buttons and text box below it? I tried all sorts of resizing, but it seems stuck with some space there that I would like to make more compact.
    Those are all the Swing questions I have ^
    About my code:
    I recently added 2 buttons which are at the bottom of the OS 10.4 screenshot, Clear Box & Drop All. Clear Box is the one I made work, or so I thought. I wrote this for my Clear Box button (by the way, the button is supposed to just clear the text box directly to the upper left of itself):
           if(e.getSource() == buttonc)
               textfield.setText("");
           }
    I thought that was the best way to go about clearing the text box: just setting it to blank. But after some testing to make sure there were no repercussions, I found a problem... When I select something from my JList and hit Clear Box, it deletes the entry in the JList when it is not supposed to. It also does it when there is text in the text box and I have selected something in my JList. I haven't been able to figure out why. Here is my full source code:
     import java.awt.*;
     import java.awt.event.*;
     import javax.swing.*;
     public class winclipstart {
       JButton buttona, buttond, buttonc, buttonda;
       DefaultListModel clippedtxt = new DefaultListModel();
       JTextArea textfield;
       JList ClippedLines;
       public static void main(String[] cheese){new winclipstart().buildGUI();}
       public void buildGUI() {
         JFrame frame;
         Container contentPane;
         JPanel Bottom = new JPanel();
         JPanel Top = new JPanel();
         frame = new JFrame();
         frame.setTitle("Clipper 0.1 Beta");
         contentPane = frame.getContentPane();
         ClippedLines = new JList(clippedtxt);
         JScrollPane scroll2 = new JScrollPane(ClippedLines, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED);
         scroll2.setPreferredSize(new Dimension(400,150));
         textfield = new JTextArea(3,20);
         JScrollPane scroll1 = new JScrollPane(textfield, JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED, JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
         buttona = new JButton("Clip");
         buttond = new JButton("Drop");
         buttonc = new JButton("Clear Box");
         buttonda = new JButton("Drop All");
         Bottom.add(scroll1);
         Bottom.add(buttona);
         Bottom.add(buttond);
         Bottom.add(buttonc);
         Bottom.add(buttonda);
         Top.add(scroll2);
         Box All = Box.createVerticalBox();
             All.add(Top);
             All.add(Box.createVerticalStrut(5));
             All.add(Bottom);
         contentPane.add(All);
         buttonL cl = new buttonL();
         buttona.addActionListener(cl);
         buttond.addActionListener(cl);
         buttonc.addActionListener(cl);
         frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
         frame.pack();
         frame.setResizable(false);
         frame.setLocationRelativeTo(null);
         frame.setVisible(true);
         frame.setSize(424,314);
         ClippedLines.setDragEnabled(true);
       }
       public class buttonL implements ActionListener {
         public void actionPerformed(ActionEvent e) {
            if(e.getSource() == buttona) {
               String elementToAdd = textfield.getText();
               if(elementToAdd.equals("")==false) clippedtxt.addElement(textfield.getText());
            }
            else {
              int index = ClippedLines.getSelectedIndex();
              if(index > -1) clippedtxt.remove(index);
            }
            if(e.getSource() == buttonc) {
                textfield.setText("");
            }
         }
       }
     }
    There is a stray setDragEnabled(true) line in there, which on my iBook showed that it drags after holding the mouse button down a little longer than expected, and it shows I'm dragging, but obviously not dropping because I haven't configured dropping yet. But on my Windows machine it shows nothing of it. Oh, and on the iBook it shows it with a green "+" and then makes it look like I'm trying to put it in between two other lines, but does nothing when dropped, as expected.
    Another question: at the beginning of my code I have it loading the entire library with the *, and I noticed after turning it into a .jar that it loads slower than I expected... would a way to speed it up be to specify exactly what I need loaded after the program is finished, or would it make no difference?
    :-/ seems my program isn't as cross-platform as I expected Java to be

    I solved the code problem myself after a careful run through of the code: the "else" in my subclass made it so that every button other than the Add button deleted a selected entry. I quickly fixed it and made it an if statement.
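    For reference, the corrected actionPerformed ended up looking roughly like this (just a sketch of the fix, reusing the fields from the code posted above):
        public void actionPerformed(ActionEvent e) {
            if (e.getSource() == buttona) {                  // Clip: add the text to the list
                String elementToAdd = textfield.getText();
                if (!elementToAdd.equals("")) clippedtxt.addElement(elementToAdd);
            }
            if (e.getSource() == buttond) {                  // Drop: only this button removes an entry
                int index = ClippedLines.getSelectedIndex();
                if (index > -1) clippedtxt.remove(index);
            }
            if (e.getSource() == buttonc) {                  // Clear Box: just empty the text area
                textfield.setText("");
            }
        }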
    Thank you for the link cotton.m
    My question about optimizing still remains though: at the beginning of my code I have it loading the entire library with the *, and I noticed after turning it into a .jar that it loads slower than I expected... would a way to speed it up be to specify exactly what I need loaded after the program is finished, or would it make no difference?

  • PLSQL code optimizing

    Hi
    My Oracle DB version is 11g R1 on AIX 5.3.
    How can the following PL/SQL code be rewritten for better performance?
    CREATE OR REPLACE PROCEDURE CSCOMMON.POST_SEQ_PROCESS
    AS
       -- Declare variables to hold sequenced information.
       pragma autonomous_transaction;
       V_RECORD_ID           POST_SEQUENCING_PROCESS.RECORD_ID%TYPE;
       V_RECEIVED            POST_SEQUENCING_PROCESS.RECEIVED%TYPE;
       V_PROCESS_NAME        POST_SEQUENCING_PROCESS.PROCESS_NAME%TYPE;
       V_PROCESS_ID          POST_SEQUENCING_PROCESS.PROCESS_ID%TYPE;
       V_PROCESS_SEQ         POST_SEQUENCING_PROCESS.PROCESS_SEQ%TYPE;
       V_RECEIVER            POST_SEQUENCING_PROCESS.RECEIVER%TYPE;
       V_RECEIVER_ID         POST_SEQUENCING_PROCESS.RECEIVER_ID%TYPE;
       V_RECEIVER_SEQ        POST_SEQUENCING_PROCESS.RECEIVER_SEQ%TYPE;
       V_SENDER              POST_SEQUENCING_PROCESS.SENDER%TYPE;
       V_SENDER_ID           POST_SEQUENCING_PROCESS.SENDER_ID%TYPE;
       V_MESSAGE_ID          POST_SEQUENCING_PROCESS.MESSAGE_ID%TYPE;
       V_INSTANCE_ID         POST_SEQUENCING_PROCESS.INSTANCE_ID%TYPE;
       V_STATUS              POST_SEQUENCING_PROCESS.STATUS%TYPE;
       V_STEP_ID             POST_SEQUENCING_PROCESS.STEP_ID%TYPE;
       V_INTERNAL_ID         POST_SEQUENCING_PROCESS.INTERNAL_ID%TYPE;
       V_LOCK_ID             POST_SEQUENCING_PROCESS.LOCK_ID%TYPE;
       V_ERROR_HANDLING      POST_SEQUENCING_PROCESS.ERROR_HANDLING%TYPE;
       V_STARTED             POST_SEQUENCING_PROCESS.STARTED%TYPE;
       V_WARNINGS            POST_SEQUENCING_PROCESS.WARNINGS%TYPE;
       V_DOCUMENTTYPE_NAME   POST_SEQUENCING_PROCESS.DOCUMENTTYPE_NAME%TYPE;
       V_DOCUMENTTYPE_ID     POST_SEQUENCING_PROCESS.DOCUMENTTYPE_ID%TYPE;
       V_DOCUMENTTYPE_SEQ    POST_SEQUENCING_PROCESS.DOCUMENTTYPE_SEQ%TYPE;
       v_current             VARCHAR2 (600);
       v_sql_error           VARCHAR2 (600);
       lv_count              NUMBER;
       CURSOR c_first500
       IS
          SELECT RECORD_ID,
                 RECEIVED,
                 PROCESS_NAME,
                 PROCESS_ID,
                 PROCESS_SEQ,
                 RECEIVER,
                 RECEIVER_ID,
                 RECEIVER_SEQ,
                 SENDER,
                 SENDER_ID,
                 MESSAGE_ID,
                 INSTANCE_ID,
                 STATUS,
                 STEP_ID,
                 INTERNAL_ID,
                 LOCK_ID,
                 ERROR_HANDLING,
                 STARTED,
                 WARNINGS,
                 DOCUMENTTYPE_NAME,
                 DOCUMENTTYPE_ID,
                 DOCUMENTTYPE_SEQ
            FROM (  SELECT RECORD_ID,
                           RECEIVED,
                           PROCESS_NAME,
                           PROCESS_ID,
                           PROCESS_SEQ,
                           RECEIVER,
                           RECEIVER_ID,
                           RECEIVER_SEQ,
                           SENDER,
                           SENDER_ID,
                           MESSAGE_ID,
                           INSTANCE_ID,
                           STATUS,
                           STEP_ID,
                           INTERNAL_ID,
                           LOCK_ID,
                           ERROR_HANDLING,
                           STARTED,
                           WARNINGS,
                           DOCUMENTTYPE_NAME,
                           DOCUMENTTYPE_ID,
                           DOCUMENTTYPE_SEQ
                      FROM CSCOMMON.SEQUENCING_PROCESS
                  ORDER BY RECEIVED)
           WHERE ROWNUM < 101;
       P_RAND                NUMBER;
       V_LID                 NUMBER;
    BEGIN
       v_current := 'BEFORE CURSOR OPENING';
       SELECT COUNT (*) INTO lv_count FROM POST_SEQUENCING_PROCESS;
       OPEN c_first500;          
       LOOP
          SELECT CSCOMMON.SEQ_LID_SEQUENCING_PROCESS_NU.NEXTVAL
            INTO V_LID
            FROM DUAL;
         UPDATE CSCOMMON.SEQUENCING_PROCESS A
             SET A.LOCK_ID = V_LID ,
                 A.STATUS = 1,
                 A.STARTED = SYSDATE,
                 A.WARNINGS = 0
           WHERE NOT EXISTS
                        (SELECT 1
                           FROM CSCOMMON.SEQUENCING_PROCESS B
                          WHERE ( (A.RECEIVER_ID = B.RECEIVER_ID
                                   AND A.RECEIVER_SEQ = 1)
                                 OR (A.PROCESS_ID = B.PROCESS_ID
                                     AND A.PROCESS_SEQ = 1)
                                 OR (A.DOCUMENTTYPE_ID = B.DOCUMENTTYPE_ID
                                     AND A.DOCUMENTTYPE_SEQ = 1))
                                AND (A.ERROR_HANDLING = 0 OR B.STATUS != 4)
                                AND B.RECORD_ID < A.RECORD_ID)
                 AND A.STATUS = 2
                 AND 1024 =
                                        (SELECT WM1.STATUS
                           FROM WMLOG610.WMPROCESS WM1,
                                (  SELECT MAX (AUDITTIMESTAMP) AUDITTIMESTAMP,
                                          INSTANCEID
                                     FROM WMLOG610.WMPROCESS WM2
                                    WHERE INSTANCEID IN
                                             (SELECT INSTANCE_ID
                                                FROM CSCOMMON.SEQUENCING_PROCESS where rownum<101)
                                 GROUP BY INSTANCEID
                                 ORDER BY instanceid) WM2
                          WHERE     A.INSTANCE_ID = WM1.INSTANCEID
                                AND WM1.INSTANCEID = WM2.INSTANCEID
                                AND WM1.AUDITTIMESTAMP = WM2.AUDITTIMESTAMP
                                AND ROWNUM = 1)
                 AND A.LOCK_ID IS NULL
                 AND A.DOCUMENTTYPE_NAME != 'FxHaulage';
    commit;
          FETCH c_first500
          INTO V_RECORD_ID,
               V_RECEIVED,
               V_PROCESS_NAME,
               V_PROCESS_ID,
               V_PROCESS_SEQ,
               V_RECEIVER,
               V_RECEIVER_ID,
               V_RECEIVER_SEQ,
               V_SENDER,
               V_SENDER_ID,
               V_MESSAGE_ID,
               V_INSTANCE_ID,
               V_STATUS,
               V_STEP_ID,
               V_INTERNAL_ID,
               V_LOCK_ID,
               V_ERROR_HANDLING,
               V_STARTED,
               V_WARNINGS,
               V_DOCUMENTTYPE_NAME,
               V_DOCUMENTTYPE_ID,
               V_DOCUMENTTYPE_SEQ;
          EXIT WHEN c_first500%NOTFOUND;
          BEGIN
             v_current := 'INSERT INTO POST_SEQUENCING_PROCESS';
             IF (lv_count = 0)
             THEN
                INSERT INTO POST_SEQUENCING_PROCESS (RECORD_ID,
                                                     RECEIVED,
                                                     PROCESS_NAME,
                                                     PROCESS_ID,
                                                     PROCESS_SEQ,
                                                     RECEIVER,
                                                     RECEIVER_ID,
                                                     RECEIVER_SEQ,
                                                     SENDER,
                                                     SENDER_ID,
                                                     MESSAGE_ID,
                                                     INSTANCE_ID,
                                                     STATUS,
                                                     STEP_ID,
                                                     INTERNAL_ID,
                                                     LOCK_ID,
                                                     ERROR_HANDLING,
                                                     STARTED,
                                                     WARNINGS,
                                                     DOCUMENTTYPE_NAME,
                                                     DOCUMENTTYPE_ID,
                                                     DOCUMENTTYPE_SEQ)
                     SELECT *
                       FROM cscommon.sequencing_process A
                      WHERE lock_id IS NOT NULL
                            AND A.DOCUMENTTYPE_NAME != 'FxHaulage' order by lock_id;
                            commit;
                INSERT INTO CSCOMMON.PRE_SEQUENCING_PROCESS
                   (SELECT * FROM CSCOMMON.POST_SEQUENCING_PROCESS);
    commit;
                DELETE FROM CSCOMMON.POST_SEQUENCING_PROCESS;
                COMMIT;
                v_current := 'DELETE FROM SEQUENCING_PROCESS';
                DELETE FROM CSCOMMON.SEQUENCING_PROCESS
                      WHERE LOCK_ID IS NOT NULL
                            AND DOCUMENTTYPE_NAME != 'FxHaulage';
                COMMIT;
             ELSE
                RETURN;
             END IF;
          END;
       END LOOP;
       CLOSE c_first500;                    
       COMMIT;
    EXCEPTION
       WHEN OTHERS
       THEN
          v_sql_error := SQLERRM || ' - ' || v_current;
          ROLLBACK;
    END;
     /
    Maybe this could be done better by using FORALL / bulk collections?
    Thanks
    Raj

    You need to understand transactional consistency. Not only are your commits
    slowing things down, they are bad for maintaining transactional consistency.
    Ask yourself what would happen if there were a problem in your procedure after
    the first commit. How would you recover from that, with some records
    updated but the rest of your procedure not having been run?
    In general you should have only one commit, at the topmost level. By that I mean that if
    there is a program that kicks off other ones and is the 'master' controller, that
    should be the one that decides to commit or roll back the whole transaction.
    You also need to understand exception handling. Your exception processing is dangerously wrong.
    Remove it.
    Finally, try to do this processing without using cursor loops: pure (set-based) SQL is much faster than
    slow, row-by-row cursor-based PL/SQL mixed with SQL.
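    To illustrate the single-commit idea in Java/JDBC terms (a rough sketch only: the connection string, credentials and procedure call are placeholders, and it assumes the procedure itself no longer commits or uses an autonomous transaction), the point is that only the outermost caller decides to commit or roll back:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        public class NightlyLoad {
            public static void main(String[] args) throws SQLException {
                // One connection, one transaction: this top-level caller owns commit/rollback.
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "cscommon", "password");
                con.setAutoCommit(false);                 // no implicit commits along the way
                try {
                    try (CallableStatement step = con.prepareCall("{call cscommon.post_seq_process}")) {
                        step.execute();                   // the procedure itself should not commit
                    }
                    // ... further steps belonging to the same unit of work ...
                    con.commit();                         // the single commit, once everything succeeded
                } catch (SQLException e) {
                    con.rollback();                       // one rollback undoes the whole unit of work
                    throw e;                              // let the caller see the real error
                } finally {
                    con.close();
                }
            }
        }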

  • Cube to Cube Poor Performance

    Our situation is that we have a current long-running weekly process and a project that is in progress.
    We are experiencing a long-running load from one cube to another. I am looking for suggestions on decreasing this run time. Here are the details of the situation and what we have done to date.
    We are on BW 3.5.
    Our current long running process:
    Cube A has approximately 258,000,000 records right now. (Yes, we know it's a lot, but we aren't ready to redesign just yet - we want to exhaust all possibilities before redesigning.) It is dimensioned by material, plant, Sales Organization, Customer, and Calendar Week.
    Cube A is loaded daily with a delta from ODSs on APO.  It contains forecasts for 2007, 2008, 2009.  
    Weekly, Cube A is loaded into Cube B. Cube B is dimensioned by material, plant, Sales Organization, Customer, and Calendar Week. The data is a snapshot of forecasts for the current month + the next 19 months.
    The load from Cube A to Cube B takes 16 hours. The index on Cube B is deleted, Cube B is loaded from Cube A, the Cube B index is regenerated, and overlapping requests are deleted. The load from Cube A to Cube B runs approximately 15 hours and is slowly increasing. Approximately 54,000,000 records get loaded from Cube A to Cube B.
    Our Project:
    Cube C will replace Cube B. Weekly, Cube A will load into Cube C. The grain of Cube C is higher than that of Cube B. Cube C is dimensioned by material, Sales Organization, Customer, and Calendar Month. The data is a snapshot of forecasts for the current month + the next 24 months. Also, when loading Cube C, in the start routine we look up the price of a material in an ODS, and then in the update rules we calculate a forecast sales amount by multiplying the price by the forecast cases. This extra step was not thought to have much of an effect on the processing.
    Because our QA environment had limitations, we moved the new cube, Cube C, into production. When we attempted to load Cube A to Cube C for 0FISCPER = 2007011 - 2009011, it failed due to temp space: ORA-01652: unable to extend temp segment by 2560 in tablespace PSAPTEMP. Our DBA suggested we chunk our data loads. That was a week ago Sunday. Since then we have done all of the following:
    1. Data package size was 50,000. Selection FISCPER = 2007011 - 2008001 from Cube A to Cube C. Results: manually cancelled after 1 day and 3 hours.
    2. Changed data package size to 20,000. Selection FISCPER = 2007011 - 2008001 from Cube A to Cube C. We optimized our code to load the data package into an internal table and get the unique materials before reading the ZMATERIAL table. Results: completed in 7h 1m 29s. So for 3 months it took 7 hours. We need to load 24 months, so we estimated that would take almost 60 hours... not acceptable.
    3. BASIS changed a global parameter: the maximum number of dialog processes for sending data was changed from 3 to 5.
    4. We created secondary indexes on Material for /BIC/PZMATERIAL and ZFSOPO51. Data package was at 20,000. Selection FISCPER = 2008002 - 2008003. Results: completed in 5h 26m 6s. Still too long.
    5. Loaded another 2 months and it completed successfully in a little over 5 hours.
    6. We met with BASIS, the DBA and the Network/Disk team. Our conversations with them had us investigating the disk layout; we spoke of LUNs, "hot spots", Tier 2... all things that we thought would lead us to rearrange our disks.
    7. We reviewed graphs showing I/O and CPU usage and compared them to our runs. We saw that on the selection from Cube A our I/O spiked. Once the data was all selected, I/O dropped and it appeared that the CPU was hit heavily then.
    8. Conversations continued... much of which was over my developer head.
    a. There appears to be 9GB of unallocated memory. There is an action to split the 9GB of unallocated memory between the application (BW) and the database (Oracle).
    b. Review the performance of the HBA.
    c. Have the Oracle system processes been disabled in BWP? If not, disable that process.
    9. We reviewed the index on Cube A and determined that it wasn't being used. We implemented OSS Note 561961 to never USE_FACTVIEW. That decreased the selection time on Cube A, when we tested in our QA environment, from 60 minutes to 15 minutes. However, the whole InfoPackage ended in the same time as when we hadn't known about the OSS Note. Selection FISCPER = 2007012.
    My next step is to build an aggregate off of Cube A that looks like the grain of Cube C and kick off a load of 1 month again - Selection FISCPER = 2007012
    I will then compare this time against the other runs. If it is improved, I'd like to run several InfoPackages at the same time, using different FISCPER selections in each.
    Other conversations/suggestions we've had were about compressing Cube A.
    We've also reviewed our weekly generated Service Reports. Nothing obvious jumps out at me there. I have also reviewed many of the performance presentations on SDN and see that CPU usage can be reviewed too. At this point I don't have access to some of the transactions.
    Thank you for reading, and please respond with your advice. I appreciate it.

    Hi Mary,
    Check whether the cube is partitioned by calendar day, and compress the cube.
    You can also create aggregates for the cube.
    Let us know the status for further info.
    Reg
    Pra

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a for loop, in which each file is opened one at a time and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all the files into memory at once? It would also be nice to be able to tell which section of my VI takes the longest time. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just-executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed, and you never want it to e.g. pop up a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low-level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
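    (For readers who want the control flow in text form, here is a rough Java sketch of the same list-then-open-read-close-and-skip-on-error idea; the directory name, file pattern and the parsing step are placeholders, since the actual parsing depends on your file format:)
        import java.io.IOException;
        import java.nio.file.*;
        import java.util.ArrayList;
        import java.util.List;

        public class ReadAllFiles {
            public static void main(String[] args) throws IOException {
                Path dir = Paths.get("data");                    // placeholder directory
                List<String> contents = new ArrayList<>();
                try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.txt")) {
                    for (Path file : files) {
                        try {
                            contents.add(Files.readString(file));   // open, read, close in one call
                        } catch (IOException skip) {
                            // file disappeared or is unreadable: skip silently, no dialogs
                        }
                    }
                }
                System.out.println("read " + contents.size() + " files");
                // the "spreadsheet string to array" equivalent, tailored to the format, would go here
            }
        }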
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion . Do more with less code and in less time .

  • Top Link Special Considerations in moving to Cost Based Optimizer....

    Our current application architecture consists of a Java-based application with Oracle 9i as the database and TopLink as the object-relational mapping tool. This is a hosted application, about 5 years old, with stringent SLA requirements and high-availability needs. We are currently using Rule Based Optimizer (RBO) mode and do not collect statistics for the schemas. We are planning a move to the Cost Based Optimizer (CBO).
    What are the special considerations we need to be aware of in moving from RBO to CBO from a TopLink perspective? Is TopLink code optimized for one mode over the other? What special parameter settings are needed? Any of your experience in moving TopLink-based applications to the CBO, and any best practices, would be very much appreciated.
    -Thanks
    Ganesan Maha

    Ganesan,
    Over the 10 years we have been delivering TopLink I do not recall any issues with customizing TopLink for either approach. You do have the ability to customize how the SQL is generated and even replace the generated SQL with custom queries should you need to. This will not require application changes but simply modifications to the TopLink metadata.
    As of 9.0.4 you can also provide hints in the TopLink query and expression framework that will be generated into the SQL to assist the optimizer.
    Doug

  • How to integrate KXEN result with own apps?

    Dear Experts,
    When I was trying the Association Rules feature of KXEN (SAP InfiniteInsight 7.0), I could not find a good way to integrate the trained result rules into my own app. When I get a group of rules, the only way I can think of is to save the rule list to an HTML file and then manually (or with a program) convert the rules in the HTML file into my database tables.
    Is there any better way to save the rules directly into my table? Or even to generate SQL (like in classification), which would be much better?
    Thanks

    Hi Richard,
    As you noticed, implementing rules produced from SAP InfiniteInsight Modeler - Association Rules is not simple.
    This component is deprecated and has been replaced by another one which is much easier to use and implement.
    I suggest you use SAP InfiniteInsight Recommendation (should be accessible on your current instance of SAP InfiniteInsight).
    You'll be able to generate source code (SQL code optimized for many different databases) and will be able to specify how many recommendations to push to each customer, whether or not best sellers must be included in the recommendations, and whether it makes sense to recommend a product that a customer has already purchased.
    Besides, Recommendation uses a different technique than traditional association rules (which use the Apriori algorithm), and it scales to huge volumes of data.
    On the licensing side, SAP InfiniteInsight Recommendation is an add-on that can be purchased on top of  InfiniteInsight Engine.
    Armelle

  • Performance, Benchmark for Oracle XE

    My business would like to use Oracle XE as a starter base installation for customers, so that they can easily upgrade to a full Oracle SE or EE later. However, I fail to find benchmark tests (like tpc.org) which give an image of the performance of Oracle XE and how it compares to other databases such as PostgreSQL or MySQL in the smaller application market segment that XE is targeting.
    We have been running some tests in house, but currently I must say they are not in Oracle XE's favour. This is disappointing, as we would want to avoid our customers starting off with a DB other than Oracle and then doing a transition to Oracle SE or EE when the need is there.
    We are working on tuning/optimizing the database to perform better. Nevertheless it would be really interesting to see some benchmarks / comparisons of the performance of Oracle XE and other DBMSs in this segment of the DB market. Being well aware that there is not necessarily one database that fits each application/segment best, such a test would be informative, and also great input for my company when choosing which DB to use. Does anyone have any experience on this subject? Any feedback is highly appreciated. If anyone is interested I would be glad to post details regarding our testing in house.
    Cheers

    Hi,
    Thanks for the reply. Being well aware of the complexity of optimizing application performance, I am simply trying to find as many inputs as possible to make a decision. A benchmark test would not yield "the truth" in any way, but simply be another parameter to consider. I am also aware that the Oracle DB has several features that other DBs don't have.
    You hit the nail on the head with your assumption:
    2) use only the very basic features of both DBs
    This is correct. We are using an EJB 3.0 (Hibernate) environment together with the JBoss 4.0.5 application server, using no DB-specific implementation. I realize that at some point, to achieve maximum performance, one would have to implement code optimized for a specific DB. But right now I am interested in maximizing the performance with no DB-specific code. We started out using Oracle XE, and it was doing well. We then switched to MySQL, mainly out of curiosity, and the performance was much better "out of the box" (x 2.5!), using less CPU (-20%) and memory than Oracle XE. This was a surprising result for me, and now I would like to know why.
    If the reason is that Oracle uses more resources for background processes like gathering statistics etc., this is a feasible explanation. Also, in a larger system, if these processes take more or less the same resources, they would make up a smaller part of the total available resources. Right now I can't explain the performance gap; I simply observe the test results but cannot explain them.
    As my demands on the DB features are limited (basic ER datatypes, incremental backup), is there perhaps a way to "disable advanced features" of Oracle XE, so that more CPU power is available to the application?
    Also, how well does the JBoss app server integrate with the Oracle DB? Would Oracle XE perform better with an Oracle app server?
    My application is a typical web app, having mainly read queries with a high peak load (burst). I have tried tuning the connection pool etc. but no improvement. It is quite possible that I have reached the limit of performance of Oracle XE; perhaps I am trying to use it for something it was not intended for? Having said this, I would prefer to continue using Oracle XE, but it would be nice to know the reasons for the difference in performance.
    Any feedback is highly appreciated.

  • Re: Batchs

    Hello,
    1) I use Forte for batch applications. But the definition of a batch may be different with that kind of architecture: you have events. In fact, I think that a classic batch (long processing done by night to close the day's activity, for instance) should be different: you may be working on 24-hours-a-day applications. So the batch and the TP should run at the same time. Also, if you have application events, you can imagine structuring the jobs differently: prepare the work in real time and store the result in a temporary structure. Then the batch should only be a confirmation of the prepared job, with management of a degraded mode on the TP during the treatment. The "real batch" should then last a very short time (you can imagine 30 minutes instead of several hours with classical batch architectures).
    2) a) It will depend on your processing and business. You should manage a context on your policy managers and sharing managers and have some specific code optimized for massive treatment (even useful at run time).
    b) In my own case, without having optimized the treatments specifically for massive treatment, I have an average response time of 100 ms per row. This will also depend on the infrastructure you use. I use Oracle 7.3 on AIX 4.2. You can generally optimize the response time of massive treatment significantly by using cursors.
    c) I use my own protocol, so I can manage an application context on the message and be able to provide some routing services, such as directing an action to a specific method depending on the context. By separating the communication protocol and the application protocol you can also offer advanced functionality to users, such as performance agents on your services and dialog traces between services. This is very useful for optimisation of your partitioning.
    Remarks:
    - In my own case, I have observed a 30% gain in resources by using C++ code generation, and a 30% gain in response time as well.
    - You need to test on a large number of rows and use a test application that can simulate the average load on your system to be efficient.
    - You can imagine that a batch should not be a process but an event on a scheduler which can spool and manage journaling.
    - For database treatment, don't forget that Forte uses dynamic SQL to access databases. So it can be more efficient to develop your own optimization for your specific treatment by using cursors. In most cases (not specific to Forte) it may be more efficient to work in two phases:
    1) Preparation, with temporary storage in a file.
    2) Validation of the whole process by writing to the database from the file.
    This can be really more efficient in minimizing the locks and updates on indexes. It can also be very useful if you need to manage a single transaction for the whole batch. You can then easily manage the conflict between TP and batch (phase 2) and also be able to manage partial retries on the batch if problems occur.
    - By developing a small scheduler in Forte you can manage some simple functionality such as batch dependencies (look at the Forte Sharewares for persistent queuing).
    Hope this helps.
    Daniel Nguyen
    Freelance Forte Consultant
    Kelsey Petrychyn wrote:
    I have some questions regarding using Forte for batch applications.
    1) Who out there in "Forte land" uses Forte for batch processing?
    2) If you use Forte for batch processing:
    a) What is the size of the job? 100,000+ rows of data processed per batch job?
    b) What is its efficiency? 1 hr to process 100,000 rows of data?
    c) Could you please briefly describe your architecture?
    Basically, we want to know: is Forte a good tool to use to build a batch application? And, if so, what are the best architectures?
    Kelsey Petrychyn
    SaskTel Forte System Administrator
    ITM - Business Solns-consult Stds & Support (OTC)
    Tel (306) 777 - 4906, Fax (306) 359 - 0857
    Internet:[email protected]
    Quality is not job 1. It is the only job!

    hello, Alan.
    At the bottom of the outbound delivery processing screen there should be buttons for batch determination. Highlight the line item and click on the batch split button. On the next screen you can re-determine the batch determination.
    Since the scenario is MTO, the system will source from sales order stock.
    regards.

  • General question for LabVIEW+iMAQ application productivity

    We have received a new biotech robotic system with LabVIEW control software. One of the software tasks is image recognition (a robotic vision system). During image-processing tasks memory is not used hard, but the CPU is always at 100% utilization. Right now the software is running on a computer with an ordinary P4 2.4GHz.
    Question: does it make any sense to use a Xeon system or a multiprocessor system to accelerate the image processing? How deeply is the LabVIEW code optimized for different processors?

    > We have received new biotech robotic system with LabVIEW control
    > software. One of the software task is image recognition (robotic
    > vision system). During images processing tasks memory is not used
    > hard, but the CPU is always at 100 % utilization. Now the software is
    > running at the computer with ordinary P4 2.4GHz.
    > Question: is there any sense to use Xeon system or
    > multiprocessors system for the images processing acceleration? How
    > deep is the LabVIEW code optimized for the different processors?
    The LV code isn't very optimized for specific CPU architectures. A machine with bigger chip caches will probably give the biggest advantage.
    As for multi-CPU, this really depends on how a LV diagram is written. I saw a presentation several years ago showing the gains a multiprocessor system would give you in a vision system. If the diagrams are written with parallelism in mind and the IMAQ VIs are made reentrant, the speedup was good, close to the number of processors. But of course, if there is no parallelism on the diagram, or the subVIs aren't reentrant, the other processors have little to work on.
    I'd ask the manufacturer of the biotech system if they have tried it or designed it to scale. If I'm misunderstanding and you bought it from NI, then I assume you have access to enough of the source code to make things reentrant and program for parallelism. If you have trouble writing your code for parallelism, ask more questions.
    Greg McKaskle

  • How to improve the load time of my swf group

    Hi,
    I need some tips to improve the load time of my SWF Captivate online training. My training has 6 sections and it takes 3 minutes to download each time I open the training window. That takes too much time, and if there are 50 users at the same time, it will take a lot of my website's bandwidth. Do you have any tips on Captivate settings, or other tips, to help reduce my training's download time? I do not understand why the 6 modules load simultaneously rather than each time I click to start a new part of the training.
    Can you help me with my problem?
    Thank you


  • How to Buy MB for BIOS Compatibility

    I have been reading here about some instances where MSI motherboards were purchased but would not support the target CPU, and required extra activity for a BIOS upgrade.
    How can I buy an MB and be sure that I will get a BIOS that will run the 3930K?

    Bernhard,
    Thank you. I found:
    - Update CPU Micro Code
    - Optimized CPU Overclocking capability
     Version 1.2
    Release Date 2012-02-14
    Version 1.2 is the latest on the list. Looking at the different boards and CPUs, for any CPU the compatibility table always points to the latest BIOS. :nono: I guess that must mean MSI only recommends buying the latest BIOS. Now if I can just find a supplier that has someone who can check which BIOS is installed and how MSI has labeled it.
    I wonder if MSI labels the BIOS at all and if they have an encoding.
    I wonder what is meant by "Update CPU Micro Code". This seems to say that the BIOS is going to update the microcode in the CPU. Well, I just looked it up and it is typical for a BIOS to be able to update Intel's microcode. Not that I know anything about the details of the CPU's microcode, but I wonder what is being changed and why.

  • Algorithm comparison FLOPS

    Hi Everybody,
    Is there a method to compare two algorithms in LabVIEW (v7) when it comes to their floating-point calculations or number of FLOPs?
    I am looking to compare two algorithms (roughly), something like the FLOPS command in MATLAB, if there is one. I know one way for performance comparison might be to compare their looped execution times. But is there any FLOPS-like command?
    Thanks

    Hed wrote:
    FLOPS is just the number of floating point operations.
    The more common definition is "FLoating point Operations Per Second", comparing different hardware with the same algorithm.
    http://en.wikipedia.org/wiki/FLOPS
    (You seem to be using the plural definition of FLOP, which is not as common: comparing the number of operations needed to perform a certain task, irrespective of hardware.)
    Both definitions are not really useful for comparing two algorithms in LabVIEW. The only thing you should do is compare the timed performance of the two algorithms. Be aware that even the same algorithm can be implemented more or less efficiently, depending on the skill and knowledge of the programmer. Make yourself a small benchmarking operation as a three-frame flat sequence. Make sure that nothing can run in parallel with the middle frame. Pure coding considerations such as "inplaceness" and avoidance of extra data copies are crucial for very efficient code.
    Take a tick count in each edge frame and place your code in the middle. If it is fast code, put it in a loop for a few million iterations. Take the difference in tick count and divide it by the number of iterations, convert to seconds, and display it in SI units, e.g. 45u (= 45 microseconds) per loop.
    Watch out for constant folding. If your loop is folded into a constant, you might get false ultrafast readings. If you have LabVIEW 8.5, try the new in-place structure.
    If you are dealing with variable-size arrays, measure the speed as a function of array size: is the execution time linear with N, with NlogN, with N*N, etc.? How is the memory use? Plot log(time) vs. log(size). What is the slope of the curve? Are there any breaks? (E.g. when you exceed the cache size or when you start swapping.)
    If you are running on a multipurpose OS (e.g. Windows, Mac, Linux), there are many other things running at any given time, so the speed will have some variation. Some people are tempted to e.g. take the average, while the fastest run is probably a better measure of true speed.
    You can narrow the variation by raising the priority of the subVI (careful!). If the computation is within a subVI, you should make sure that the front panel of the subVI is closed. Often, you gain speed by disabling debugging.
    If you have multiple CPUs/cores, watch the task manager: are both being used? Code optimized for multicore might have a slight penalty on a single-core system.
    LabVIEW RT has much tighter control over execution, and you can debug down to the clock tick using the Execution Trace Toolkit (http://sine.ni.com/nips/cds/view/p/lang/en/nid/13746). I am not familiar with RT, though.
    Anyway, I am curious what kind of algorithms you are trying to test. Maybe it is of general interest. You could even start an informal "coding challenge" to tap into the collective wisdom of the forum members.
    For some ideas, here is a link to the coding challenge archive: http://zone.ni.com/devzone/cda/tut/p/id/5174
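    For what it's worth, the same take-a-tick-count-around-a-loop pattern looks roughly like this in Java (dummySum is just a stand-in for whatever code you want to time; repeat the runs and keep the fastest):
        public class MicroBench {
            static double dummySum(double[] data) {
                double s = 0;
                for (double d : data) s += d * d;   // placeholder for the algorithm under test
                return s;
            }

            public static void main(String[] args) {
                double[] data = new double[100_000];
                for (int i = 0; i < data.length; i++) data[i] = i;

                int iterations = 1_000;
                long best = Long.MAX_VALUE;
                for (int run = 0; run < 10; run++) {          // repeat; keep the fastest run
                    long t0 = System.nanoTime();
                    double sink = 0;
                    for (int i = 0; i < iterations; i++) sink += dummySum(data);
                    long perCall = (System.nanoTime() - t0) / iterations;
                    if (perCall < best) best = perCall;
                    if (sink == 42) System.out.println();     // use the result so it can't be folded away
                }
                System.out.println("fastest: " + best + " ns per call");
            }
        }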
    LabVIEW Champion . Do more with less code and in less time .

  • Mac Pro v's G5 Quad (Not again? But a real time battle not benchmarks)

    I have an 8GB G5 Quad with the 7800GT graphics card. I have recently bought a stock Mac Pro and added 1 GB Crucial RAM giving it 2GB RAM in total, with the stock 7300GT card.
    Having installed PhaseOne's CaptureOne Pro (universal) on both machines, I set about batch processing 204 Canon 1Ds MkII RAW files into 16-bit TIFFs. No corrections were made; it was just a straightforward batch.
    Results: 2GB Mac Pro: 204 RAW - 16 Bit TIFFS (@ 96MB each) total time: 35 mins 14 seconds.
    8GB Powermac G5 Quad: 204 RAW - 16 Bit TIFFS (@96MB each) total time: 58 minutes 22 seconds.
    Never thought I'd hear myself (well, read myself saying/typing this), but that's over 23 minutes slower for the Quad G5, with 4 times as much memory as the Mac Pro! The Mac Pro was 1.7 times faster. When time is money... need I say more?
    Both machines were started up with nothing running other than whatever runs in the background after startup, and nothing launched other than Capture One Pro. The files used were located in folders on the internal HDs supplied with the machines (both have 250GBs: MP: Seagate, Quad G5: WD).
    The Mac Pro toasted the G5 Quad - I'm gutted. Anyone want to buy a G5 Quad? lol
    What's your experience with the Mac Pro?

    As it is, most apps don't use 4 cores, never mind 8. I'm lucky to see most apps use TWO cores effectively, never mind 4 now. I see precious few apps benefiting from 8 cores until developers start thinking in a more multithreaded, multicore way. This problem is much, much worse in the Windows world, where aside from server apps, multi-CPU/core was very uncommon in the consumer market.
    Well said. Software engineers can do SO much with multicore/CPU configs, but it takes time and resources, and LOTS of code optimizing.
    On the x86 side, at least it's now got everyone's full (well, almost full) attention.
    The pro app side will keep getting better. Cross-platform software companies (Adobe, etc.) can focus more on x86 optimization without worrying about the time/resources for optimizing on PowerPC as well. Can't wait for 10.5!

  • Pfi armv7

    Hi.
    Is there any way I can make pfi compile code optimized for armv7 instead of armv6?
    Thanks

    In LabVIEW 7.0, the DAQmx Connect Terminals.vi is here:  All Functions>>NI Measurements>>DAQmx - Data Acquisition>>DAQmx Advanced>>DAQmx Signal Routing palette.  If you don't have it there, you probably don't have a recent enough version of NI-DAQmx, which you can download here.
    -Alan A.

    Is using URL Manager Pro the only way to be able to color code bookmarks in Safari 4.0.4? I hope Apple adds that capability to Safari in the future. Color coding of bookmarks would be extremely useful, just like the current capability to color code f