10,000 Records Per Second (In EJB 3.0)

Hi all,
I have some mission-critical tasks in my project. Is it possible to persist 10,000 records per second with this setup?
1. AS - JBoss Application Server 4.0.4GA
2. Database - Oracle 10g 10.2.0.1
3. EJB - 3.0 Framework
4. OS - SunOS 5.10
5. Server - Memory: 16G phys mem, 31G swap, 16 CPUs
I know that I need performance. Here are my performance-related configurations.
1. JVM config in JBoss:
JAVA_OPTS="-server -Xmx3168m -Xms2144m -Xmn1g -Xss256k -d64 -XX:PermSize=128m -XX:MaxPermSize=256m
   -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
    -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
    -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts
    -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution"
2. I also configured my database.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<datasources>
  <xa-datasource>
    <jndi-name>XAOracleDS</jndi-name>
    <track-connection-by-tx/>
    <isSameRM-override-value>false</isSameRM-override-value>
    <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
    <xa-datasource-property name="URL">jdbc:oracle:thin:@192.168.9.136:1521:STR</xa-datasource-property>
    <xa-datasource-property name="User">SRVPROV</xa-datasource-property>
    <xa-datasource-property name="Password">SRVPROV</xa-datasource-property>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <min-pool-size>50</min-pool-size>
    <max-pool-size>200</max-pool-size>    
    <metadata>
         <type-mapping>Oracle9i</type-mapping>
      </metadata>
  </xa-datasource>
  <mbean code="org.jboss.resource.adapter.jdbc.vendor.OracleXAExceptionFormatter"
         name="jboss.jca:service=OracleXAExceptionFormatter">
    <depends optional-attribute-name="TransactionManagerService">jboss:service=TransactionManager</depends>
  </mbean>
</datasources>
3. I also have one simple stateless session bean:
@Stateless
@Remote(UsageFasade.class)
public class UsageFasadeBean implements UsageFasade {
     @PersistenceContext(unitName = "CustomerCareOracle")
     private EntityManager oracleManager;

     @TransactionAttribute(TransactionAttributeType.REQUIRED)
     public long createUsage(UsageObject usageObject, UserContext context)
               throws UserManagerException, CCareException {
          try {
               // without executeUpdate() the statement is never actually sent to the database
               oracleManager
                         .createNativeQuery("INSERT INTO USAGE "
                                   + " (ID, SESSION_ID, SUBSCRIBER_ID, RECDATE, STARTDATE, APPLIEDVERSION_ID, CHARGINGPROFILE_ID, TOTALTIME, TOTALUNITS, IDENTIFIERTYPE_ID, IDENTIFIER, PARTNO, CALLTYPE_ID, USAGETYPE, APARTY, BPARTY, CPARTY, IMEI, SPECIFICCALLTYPE, APN, SOURCELOCATION, SMSCADDRESS, MSC_ID, ENDREASON, USAGEORIGIN, BILL_ID, CONTRACT_ID) "
                                   + " VALUES(SEQ_USAGE_ID.NEXTVAL, NULL, NULL, SYSDATE, SYSDATE, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL) ")
                         .executeUpdate();
               return 1;
          } catch (Exception e) {
               // don't swallow the exception; rethrow it wrapped in the bean's checked type
               // (assuming CCareException has a suitable constructor)
               throw new CCareException(e.getMessage());
          }
     }
}
4. On the client side I have 200 threads, each of which calls this method 50 times.
My result is that I can persist 10,000 records in 20 seconds without Hibernate; with Hibernate I got a worse result :(
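One pattern worth trying here: instead of one INSERT per container-managed transaction, buffer rows and send them in JDBC-style batches (PreparedStatement.addBatch()/executeBatch()), committing once per batch. Below is a minimal, self-contained sketch of just the batching logic; UsageBatcher and BatchSink are hypothetical names, and a counting sink stands in for the real JDBC statement so no database is needed.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: accumulate rows and flush them in fixed-size batches,
 *  the way a PreparedStatement.addBatch()/executeBatch() loop would. */
public class UsageBatcher {
    /** Stand-in for the JDBC batch; a real sink would call executeBatch() and commit. */
    public interface BatchSink { void flush(List<String> rows); }

    private final int batchSize;
    private final BatchSink sink;
    private final List<String> pending = new ArrayList<>();

    public UsageBatcher(int batchSize, BatchSink sink) {
        this.batchSize = batchSize;
        this.sink = sink;
    }

    /** Equivalent of addBatch(): buffer the row, flush when the batch is full. */
    public void add(String row) {
        pending.add(row);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    /** Equivalent of the final executeBatch() before commit. */
    public void flush() {
        if (!pending.isEmpty()) {
            sink.flush(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

With a batch size in the hundreds, the number of round trips and commits drops by the same factor, which is often the difference between hundreds and thousands of rows per second.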
I also heard that it is a good idea to use a JDBC 3.0 driver for performance,
so I downloaded the newest Oracle JDBC jar file from the Oracle site:
http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html
Is this jar file a JDBC 3.0 driver?
Is there any Hibernate performance configuration?
Is there any more performance tuning for JBoss or EJB with entity beans?
Can anybody help me? Or is there any doc which can help me?
Regards,
Paata,

What makes you think that your database, just the database (with the box that it is on), can handle that rate?
What makes you think that your network can handle that?
While this is going on, is this the ONLY traffic that will be on the network?

Similar Messages

  • 1,000,000 updates per second?

    How could you configure a Coherence cluster to handle processing a million stock quotes per second? The data feed could be configured as a single app spewing out all 1,000,000/sec, or it could be many apps producing proportionately fewer ticks/sec, but in any case it's going to total a million/sec. Fractions of the feed spread among multiple physical servers sounds smartest. The quote Map.Entry would probably have a Key of String (or char[] if that's more efficient - I know the max length). The Value would be a price and a size, so maybe just those two elements, byte[]{Float,Integer}, or a Java object with Float and Integer member variables. I'd want to trigger actions based on market conditions when the planets align just right, so I'm not simply ignoring these values or pub/sub'ing them out to client apps; I'm evaluating many of them simultaneously and using them as event triggers. Is something like that remotely possible? On how much hardware?
    Thanks,
    Andrew

    Andrew,
    Using partitioning, Coherence can handle 1 million updates per second, but the big question is how many updates per second do you need on the hottest instrument at the hottest time?
    The other question is related to "the planets lining up", because that may imply a global view of the market, which becomes more difficult in a partitioned system.
    To provide a high rate of change to data in a partitioned system, the data providers (those with a large amount of data or a high rate of change) should be in the cluster (not coming in over *Extend) to eliminate one hop. To avoid blocking on the tick update from the data provider, it should locally enqueue the update. The queue servicer (a separate thread) should either coalesce whatever ticks are in the queue into a single putAll(), or if every tick needs to be recorded (i.e. all three ticks in the queue like "change to 3.5", "change to 3.55", "change to 3.6" have to be published, instead of just the latest "change to 3.6") then it would batch up everything in the queue until it hits an item that it already has in its batch, and then do a putAll().
    The use of that async publishing mode is what allows for the much higher throughput, particularly when a data provider is producing a huge number of ticks in a given period of time. You can make it even smoother (e.g. avoid outliers caused by some servers being slower) by having more local queues+services (partitioned by Coherence partition, or at the extreme by instrument). You can determine the Coherence partition using the KeyPartitioningStrategy returned from the PartitionedService for the ticks cache.
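    The enqueue-and-coalesce rule described above can be sketched as follows. This is a minimal stand-alone model, not the Coherence API: plain Queue and Map types stand in for the local tick queue and the putAll() batches, and TickBatcher is a hypothetical name. A batch is cut as soon as a key repeats, so every individual tick is still published, in order.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

/** Hypothetical sketch of the batching rule described above: drain ticks into
 *  putAll()-sized batches, cutting a batch whenever a key repeats so that no
 *  intermediate tick is lost. */
public class TickBatcher {
    /** A tick is a (key, value) pair, e.g. (instrument, price). */
    public static List<Map<String, Double>> drainBatches(Queue<Map.Entry<String, Double>> queue) {
        List<Map<String, Double>> batches = new ArrayList<>();
        Map<String, Double> batch = new LinkedHashMap<>();
        Map.Entry<String, Double> tick;
        while ((tick = queue.poll()) != null) {
            if (batch.containsKey(tick.getKey())) {
                // Key already in this batch: publish it first (putAll in real code),
                // then start a new batch, so every tick is recorded.
                batches.add(batch);
                batch = new LinkedHashMap<>();
            }
            batch.put(tick.getKey(), tick.getValue());
        }
        if (!batch.isEmpty()) {
            batches.add(batch);
        }
        return batches;
    }
}
```

    So three same-key ticks "3.5", "3.55", "3.6" come out as three one-entry batches, while ticks on distinct keys coalesce into a single putAll().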
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/

  • Benchmark report: obtain 17,000 searches per second, DS52p6, x4250 8-core

    The benchmark described in this post involved a requirement for a Consumer Directory Server for 13,8000,000 user entries with Directory Server 5.2 patch 6 running on Sun Netra x4250 hardware and Solaris 10 update 7. -- http://bit.ly/8p8RI

    Frederic's right... you're summing up too many statspacks.
    Can't see anything specific apart from the fact that Statspack itself is showing up in the top statements.
    First: define 'slow'. What is your goal for 'OK'?
    Start thinking about sql_trace, tkprof and 10046 traces.

  • What is the most Frames Per Second NI-CAN can do?

    My goal is to send 1000 frames per second on my CAN bus using the NI-CAN PCI 2-slot card I have. However, the closest I have been able to get is 666 frames per second. This is sending 8 frames every 12 ms using an edited readmult example. Is there a way to do this with writemult? Or is there a hardware limit that I am trying to go past?
    What can I mess with to get more frames? Increase the baud rate? Decrease the size of the frames? (I've tried both of those.)
    Other questions that should probably go in other posts  (Frame API):
    Is there a way to send/read the frames at the bit-level?  I have found ways to manipulate Arbitration ID, Remote Frame, Data Length, and Data, but there are several other bits in a frame.
    Is there a way to send a bad frame, one that would raise/cause an error frame?

    Yes, I did break 1,000 Frames Per Second.  I got up to 1,714 and 1,742 using two different methods.  This is at 250 kbps, if you used 500 or 1 Mbps, you could get more frames.  If you have 125 kbps, you might not be able to break 1,000 Frames per Second.
    ncWriteMult is the key.  You can load 512 frames in a queue at a time.  I would put 256 on at a time and check to see if there was less than 256 frames left in the queue and if there was, load it up, that way the queue would never be empty.  I went about it 2 ways, one was using ncGetAttribute to determine space left, and that got the faster method, however, I was also trying to read the messages to verify that it worked, and I had problems with logging every frame.  It would also send the first 40 ncWriteMults quickly, as if the queue it was filling was much larger than 512.
    The other way, was using trial and error to determine how many MS to sleep before writing the next batch of 256 frames.  There are variables outside of my control that determined the time it would take and it would vary a few ms.  I wanted a stable environment that could send forever without filling the queue so I went with a value that would wait 2 or 3 ms, depending on conditions before writing again.  The value I used was 142 ms, I think.  Your Mileage May Vary.
    There is also a way to do some error handling that I did not try to utilize.  Instead of the process crashing, there is a way to tell it to wait if this error is returned.  That might be the best way for me to implement what I wanted to do, but I was assigned another task before I tried to get that to work.
    There is a timing element in ncWriteMult's documentation I didn't look into very much, but that would space the frames out and could send 1,000 frames a second evenly distributed, instead of sending them as quickly as possible, wait some ms then send another batch.
    If anyone could link us, or provide some code snippets of the error handling, or proper usage of ncGetAttribute, or some way to read faster, that would be greatly appreciated.
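    The keep-the-queue-topped-up policy described above can be modeled roughly as follows. Note that ncWriteMult and ncGetAttribute are NI-CAN Frame API calls in C; the Java class below is only a hypothetical model of the refill policy (512-slot transmit queue, 256-frame chunks, refill when fewer than 256 frames remain), not NI-CAN code.

```java
import java.util.ArrayDeque;

/** Hypothetical model of the refill policy above: a 512-slot transmit queue
 *  topped up with 256-frame writes whenever fewer than 256 frames remain,
 *  so the hardware queue never runs dry. */
public class TxQueueModel {
    public static final int CAPACITY = 512;  // device write-queue depth
    public static final int CHUNK = 256;     // frames loaded per write

    private final ArrayDeque<Integer> queue = new ArrayDeque<>();
    private int nextFrame = 0;

    /** Stand-in for querying remaining space (ncGetAttribute in the real API). */
    public int depth() { return queue.size(); }

    /** Stand-in for a batched write (ncWriteMult in the real API): load up to
     *  CHUNK frames, but only when fewer than CHUNK frames are still queued.
     *  Returns the number of frames written. */
    public int topUpIfNeeded() {
        if (queue.size() >= CHUNK) return 0;  // still at least a chunk queued
        int written = 0;
        while (written < CHUNK && queue.size() < CAPACITY) {
            queue.add(nextFrame++);
            written++;
        }
        return written;
    }

    /** The bus transmitting one frame. */
    public Integer transmitOne() { return queue.poll(); }
}
```

    The same policy works whether the space check comes from the driver or from trial-and-error sleep timing, as described above.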

  • ASA events per second

    Looking for a model comparison for the ASA family. I need to know how many events per second the ASA is capable of sending to a syslog server.
    Thanks.

    Among the many products and technologies that make the Self-Defending Network possible is the Cisco Security Monitoring, Analysis, and Response System (Cisco Security MARS). This appliance comes in six different models to accommodate from 50 to 10,000 events per second.
    http://cisco.com/en/US/netsol/ns643/netbr0900aecd80591d44.html

  • How many of these objects should I be able to insert per second?

    I'm inserting these objects using default (not POF) serialization with putAll(myMap). I receive about 4000 new quotes per second to put in the cache. I try coalescing them to various degrees but my other apps are still slowing down when these inserts are taking place. The applications are listening to the cache where these inserts are going using CQCs. The apps may also be doing get()s on the cache. What is the ideal size for the putAll? If I chop up myMap into batches of 100 or 200 objects then it increases the responsiveness of other apps but slows down the overall time to complete the putAll. Maybe I need a different cache topology? Currently I have 3 storage enabled cluster nodes and 3 proxy nodes. The quotes go to a distributed-scheme cache. I have tried both having the quote inserting app use Extend and becoming a TCMP cluster member. Similar issues either way.
    Thanks,
    Andrew
    import java.io.Serializable;

    public class Quote implements Serializable {
        public char type;
        public String symbol;
        public char exch;
        public float bid = 0;
        public float ask = 0;
        public int bidSize = 0;
        public int askSize = 0;
        public int hour = 0;
        public int minute = 0;
        public int second = 0;
        public float last = 0;
        public long volume = 0;
        public char fastMarket; //askSource for NBBO
        public long sequence = 0;
        public int lastTradeSize = 0;

        public String toString() {
            return "type='" + type + "'\tsymbol='" + symbol + "'\texch='" + exch + "'\tbid=" +
                    bid + "\task=" + ask +
                    "\tsize=" + bidSize + "x" + askSize + "\tlast=" + lastTradeSize + " @ " + last +
                    "\tvolume=" + volume + "\t" +
                    hour + ":" + (minute<10?"0":"") + minute + ":" + (second<10?"0":"") + second + "\tsequence=" + sequence;
        }

        public boolean equals(Object object) {
            if (this == object) {
                return true;
            }
            if ( !(object instanceof Quote) ) {
                return false;
            }
            final Quote other = (Quote)object;
            if (!(symbol == null ? other.symbol == null : symbol.equals(other.symbol))) {
                return false;
            }
            if (exch != other.exch) {
                return false;
            }
            return true;
        }

        public int hashCode() {
            final int PRIME = 37;
            int result = 1;
            result = PRIME * result + ((symbol == null) ? 0 : symbol.hashCode());
            result = PRIME * result + (int)exch;
            return result;
        }

        public Object clone() throws CloneNotSupportedException {
            Quote q = new Quote();
            q.type = this.type;
            q.symbol = this.symbol;
            q.exch = this.exch;
            q.bid = this.bid;
            q.ask = this.ask;
            q.bidSize = this.bidSize;
            q.askSize = this.askSize;
            q.hour = this.hour;
            q.minute = this.minute;
            q.second = this.second;
            q.last = this.last;
            q.volume = this.volume;
            q.fastMarket = this.fastMarket;
            q.sequence = this.sequence;
            q.lastTradeSize = this.lastTradeSize;
            return q;
        }
    }

    Well, firstly, I'm surprised you are using "float" values in a financial object, but that's a different debate... :)
    Second, why aren't you using pof? Much more compact from my testing; better performance too.
    I've inserted similar objects (but with BigDecimal for the numeric types) and seen insert rates in the 30-40,000 / second (single machine, one node). Obviously you take a whack when you start the second node (backup's being maintained, plus that node is probably on a separate server, so you are introducing network latency.) Still, I would have thought 10-20,000/second would be easily doable.
    What are the thread counts on the services you are using? I've found this to be quite a choke point on high-throughput caches. What stats are you getting back from JMX for the Coherence components? What stats from the server (CPU, memory, swap, etc.)? What spec of machines are you using? Which JVM are you using? How is the JVM configured? What are the GC times looking like? Are your CQC queries using indexes? Are your get()s using indexes, or just using keys? Have you instrumented your own code to get some stats from it? Are you doing excessive logging? So many variables here... Very difficult to say what the problem is with so little info/insight into your system.
    Also, maybe look at using a multi-threaded "feeder" client program for your trades. That's what I do (as well as upping the thread-count on the cache service thread) and it seems to run fine (with smaller batch sizes per thread, say 50.) We "push" as well as fully "process" trades (into Positions) at a rate of about 7-10,000 / sec on a 4 server set-up (two cache storage nodes / server; two proxies / server.) Machines are dual socket, quad-core 3GHz Xeons. The clients use CQC and get()'s, similar to your set-up.
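    The multi-threaded feeder with small per-thread batches can be sketched like this. QuoteFeeder is a hypothetical name, and a ConcurrentHashMap stands in for the Coherence NamedCache (which, being a Map, exposes the same putAll() signature), so the sketch stays self-contained.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Hypothetical sketch of a multi-threaded feeder: several client threads each
 *  push small putAll() batches into a shared cache stand-in. */
public class QuoteFeeder {
    public static Map<String, Double> feed(int threads, int quotesPerThread, int batchSize) {
        Map<String, Double> cache = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.submit(() -> {
                Map<String, Double> batch = new HashMap<>();
                for (int i = 0; i < quotesPerThread; i++) {
                    batch.put("SYM-" + id + "-" + i, 100.0 + i);
                    if (batch.size() == batchSize) {  // small batches keep other readers responsive
                        cache.putAll(batch);
                        batch.clear();
                    }
                }
                cache.putAll(batch);                  // flush the remainder
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return cache;
    }
}
```

    The trade-off described above applies directly: smaller batches per thread reduce the stalls seen by concurrent get()s and CQC listeners, at some cost in total putAll() throughput.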
    Steve

  • Final Cut Pro X Image Sequence Export missing frames per second option?

    I am using the trial version of Final Cut Pro X. I can export an image sequence, but it will only allow me to do so at 30 frames per second - every single frame!
    So a 10-minute movie takes half an hour to export 20,000+ frames that no one on earth has time to look through.
    In my old Final Cut Express I was able to choose 1 or 2 frames per second, which was just right.
    What am I missing?
    Do I need to buy Quicktime Pro or Compressor to allow me to export image sequences without exporting every frame?
    Why would Apple even have an image sequence export if it only allows you to export every frame, or it that only in the trial version?
    Thank you

    I found a free option for you, if you don't already have Compressor 4.
    In FCPX create your movie. Then SHARE as a Master File.
    Get MPEG Streamclip, which is a free app available here.
    http://www.squared5.com/svideo/mpeg-streamclip-mac.html
    Drag your Master File into MPEG Streamclip.
    In MPEG Streamclip, select FILE/EXPORT TO OTHER FORMATS.
    In FORMAT: choose IMAGE SEQUENCE
    In OPTIONS, choose your frame rate.

  • Logical reads per second

    I have two databases - one is a clone of the other, made a few months ago. Database A has somewhat more data, since it's the active production database, but not significantly more - perhaps 10% greater. They are on different boxes. Database A is on a Sun 280R 2-processor box. Database B is on a Dell 2950 with 2 dual-core processors. So this isn't exactly comparing apples to apples. However, when I run the same query on the two databases, I get radically different results. Against Database A, the query takes about 7 minutes. On Database B, it takes about 2 seconds. Logical reads per second on Database A reach 80,000-90,000; on Database B, they're about 3,000. There are a few configuration differences (both databases use automatic memory management):
                                     Database A      Database B
    db_file_multiblock_read_count    64              16
    log_buffer                       14290432        2104832
    open_cursors                     1250            300
    sga_max_size                     4194304000      536870912
    sga_target                       2634022912      536870912
    shared_pool_reserved_size        38587596        7340032
    The timings were taken off-hours so neither database would be busy. I'm baffled by the extreme difference in execution times. Any help appreciated!
    Thanks,
    Harry
    Edited by: harryb on Apr 8, 2009 7:26 PM

    OK, let's start here....
    Database A (TEMPOP)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.3
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 64
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    ===================================================
    Database B (TEMPO11)
    SQL> show parameter optimizer
    NAME TYPE VALUE
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.1
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    SQL> show parameter db_file_multi
    NAME TYPE VALUE
    db_file_multiblock_read_count integer 16
    SQL> show parameter db_block_size
    NAME TYPE VALUE
    db_block_size integer 8192
    =================================================================
    Now for the query that's causing the problem:
    SELECT dsk_document_attribute.value_text inspect_permit_no,
              NVL (activity_task_list.revised_due_date,
                   activity_task_list.default_due_date)
                  inspect_report_due_date,
              agency_interest.master_ai_id agency_interest_id,
              agency_interest.master_ai_name agency_interest_name,
              get_county_code_single (agency_interest.master_ai_id)
                 parish_or_county_code,
              agency_interest_address.physical_address_line_1 inspect_addr_1,
              agency_interest_address.physical_address_line_2 inspect_addr_2,
              agency_interest_address.physical_address_line_3 inspect_addr_3,
              agency_interest_address.physical_address_municipality inspect_city,
              agency_interest_address.physical_address_state_code state_id,
              agency_interest_address.physical_address_zip inspect_zip,
              person.master_person_first_name person_first_name,
              person.master_person_middle_initial person_middle_initial,
              person.master_person_last_name person__last_name,
              SUBSTR (person_telecom.address_or_phone, 1, 14) person_phone,
              activity_task_list.requirement_id
       FROM dsk_document_attribute,
            agency_interest,
            activity_task_list,
            agency_interest_address,
            dsk_central_file dsk_aaa,
            dsk_central_file dsk_frm,
            person,
            person_telecom
       WHERE agency_interest.int_doc_id = 0
             AND agency_interest.master_ai_id =
                   agency_interest_address.master_ai_id
             AND agency_interest.int_doc_id = agency_interest_address.int_doc_id
             AND agency_interest.master_ai_id = dsk_frm.master_ai_id
             AND dsk_aaa.int_doc_id = activity_task_list.int_doc_id
             AND dsk_frm.int_doc_id = dsk_document_attribute.int_doc_id
             AND dsk_frm.doc_type_specific_code =
                   dsk_document_attribute.doc_type_specific_code
             AND dsk_frm.activity_category_code = 'PER'
             AND dsk_frm.activity_class_code = 'GNP'
             AND dsk_frm.activity_type_code IN ('MAB', 'NAB', 'REB')
             AND dsk_frm.program_code = '80'
             AND dsk_frm.doc_type_general_code = 'FRM'
             AND dsk_frm.doc_type_specific_code = 'PERSET'
             AND dsk_aaa.doc_template_id = 2000
             AND dsk_frm.master_ai_id = dsk_aaa.master_ai_id
             AND dsk_frm.activity_category_code = dsk_aaa.activity_category_code
             AND dsk_frm.program_code = dsk_aaa.program_code
             AND dsk_frm.activity_class_code = dsk_aaa.activity_class_code
             AND dsk_frm.activity_type_code = dsk_aaa.activity_type_code
             AND dsk_frm.activity_year = dsk_aaa.activity_year
             AND dsk_frm.activity_num = dsk_aaa.activity_num
             AND dsk_document_attribute.doc_attribute_code = 'PERMIT_NO'
             AND activity_task_list.requirement_id IN ('3406', '3548', '3474')
             AND activity_task_list.reference_task_id = 0
             AND NVL (activity_task_list.status_code, '$$$') <> '%  '
             AND person.master_person_id(+) =
                   f_get_gp_contact (agency_interest.master_ai_id)
             AND person.int_doc_id(+) = 0
             AND person.master_person_id = person_telecom.master_person_id(+)
             AND person.int_doc_id = person_telecom.int_doc_id(+)
             AND person_telecom.telecom_type_code(+) = 'wp';
    Here's the explain plan for Database A, where the query takes 7-8 minutes or more:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   253 |    34   (3)|
    |   1 |  NESTED LOOPS                       |                            |     1 |   253 |    34   (3)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   224 |    32   (0)|
    |   3 |    NESTED LOOPS OUTER               |                            |     1 |   169 |    31   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   144 |    29   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   122 |    27   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |    81 |    26   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    48 |    19   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    21 |    17   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   106 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    27 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    33 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     1 |    22 |     2   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     1   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | PERSON_TELECOM             |     1 |    25 |     2   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | DSK_DOCUMENT_ATTRIBUTE     |     1 |    29 |     1   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')============================================================================
    Here's the explan plan output for Database B, where the query takes 2-3 seconds:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                           | Name                       | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                    |                            |     1 |   289 |    39   (0)|
    |   1 |  NESTED LOOPS OUTER                 |                            |     1 |   289 |    39   (0)|
    |   2 |   NESTED LOOPS                      |                            |     1 |   260 |    37   (0)|
    |   3 |    NESTED LOOPS                     |                            |     1 |   205 |    36   (0)|
    |   4 |     NESTED LOOPS OUTER              |                            |     1 |   172 |    35   (0)|
    |   5 |      NESTED LOOPS                   |                            |     1 |   145 |    34   (0)|
    |   6 |       NESTED LOOPS                  |                            |     1 |   104 |    33   (0)|
    |   7 |        NESTED LOOPS                 |                            |     1 |    61 |    26   (0)|
    |   8 |         INLIST ITERATOR             |                            |       |       |            |
    |*  9 |          TABLE ACCESS BY INDEX ROWID| ACTIVITY_TASK_LIST         |     1 |    25 |    24   (0)|
    |* 10 |           INDEX RANGE SCAN          | ACTIVITY_TASK_LIST_FK11    |   145 |       |     4   (0)|
    |* 11 |         TABLE ACCESS BY INDEX ROWID | DSK_CENTRAL_FILE           |     1 |    36 |     2   (0)|
    |* 12 |          INDEX UNIQUE SCAN          | PK_DSK_CENTRAL_FILE        |     1 |       |     1   (0)|
    |* 13 |        TABLE ACCESS BY INDEX ROWID  | DSK_CENTRAL_FILE           |     1 |    43 |     7   (0)|
    |* 14 |         INDEX RANGE SCAN            | CF_MASTER_AI_ID_IND        |     9 |       |     2   (0)|
    |  15 |       TABLE ACCESS BY INDEX ROWID   | AGENCY_INTEREST            |     1 |    41 |     1   (0)|
    |* 16 |        INDEX UNIQUE SCAN            | PK_AGENCY_INTEREST         |     1 |       |     0   (0)|
    |  17 |      TABLE ACCESS BY INDEX ROWID    | PERSON                     |     8 |   216 |     1   (0)|
    |* 18 |       INDEX UNIQUE SCAN             | PK_PERSON                  |     1 |       |     0   (0)|
    |  19 |     TABLE ACCESS BY INDEX ROWID     | DSK_DOCUMENT_ATTRIBUTE     |     1 |    33 |     1   (0)|
    |* 20 |      INDEX UNIQUE SCAN              | PK_DSK_DOCUMENT_ATTRIBUTE  |     1 |       |     0   (0)|
    |  21 |    TABLE ACCESS BY INDEX ROWID      | AGENCY_INTEREST_ADDRESS    |     1 |    55 |     1   (0)|
    |* 22 |     INDEX UNIQUE SCAN               | PK_AGENCY_INTEREST_ADDRESS |     1 |       |     0   (0)|
    |  23 |   TABLE ACCESS BY INDEX ROWID       | PERSON_TELECOM             |     1 |    29 |     2   (0)|
    |* 24 |    INDEX UNIQUE SCAN                | PK_PERSON_TELECOM          |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
       9 - filter("ACTIVITY_TASK_LIST"."REFERENCE_TASK_ID"=0 AND
                  NVL("ACTIVITY_TASK_LIST"."STATUS_CODE",'$$$')<>'%  ')
      10 - access("ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3406 OR
                  "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3474 OR "ACTIVITY_TASK_LIST"."REQUIREMENT_ID"=3548)
      11 - filter("DSK_AAA"."DOC_TEMPLATE_ID"=2000 AND "DSK_AAA"."ACTIVITY_CLASS_CODE"='GNP' AND
                  "DSK_AAA"."PROGRAM_CODE"='80' AND "DSK_AAA"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_AAA"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_AAA"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_AAA"."ACTIVITY_TYPE_CODE"='REB'))
      12 - access("ACTIVITY_TASK_LIST"."INT_DOC_ID"="DSK_AAA"."INT_DOC_ID")
      13 - filter("DSK_FRM"."DOC_TYPE_SPECIFIC_CODE"='PERSET' AND
                  "DSK_FRM"."ACTIVITY_CLASS_CODE"='GNP' AND "DSK_FRM"."PROGRAM_CODE"='80' AND
                  "DSK_FRM"."DOC_TYPE_GENERAL_CODE"='FRM' AND "DSK_FRM"."ACTIVITY_CATEGORY_CODE"='PER' AND
                  ("DSK_FRM"."ACTIVITY_TYPE_CODE"='MAB' OR "DSK_FRM"."ACTIVITY_TYPE_CODE"='NAB' OR
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"='REB') AND
                  "DSK_FRM"."ACTIVITY_TYPE_CODE"="DSK_AAA"."ACTIVITY_TYPE_CODE" AND
                  "DSK_FRM"."ACTIVITY_YEAR"="DSK_AAA"."ACTIVITY_YEAR" AND
                  "DSK_FRM"."ACTIVITY_NUM"="DSK_AAA"."ACTIVITY_NUM")
      14 - access("DSK_FRM"."MASTER_AI_ID"="DSK_AAA"."MASTER_AI_ID")
      16 - access("AGENCY_INTEREST"."MASTER_AI_ID"="DSK_FRM"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST"."INT_DOC_ID"=0)
      18 - access("PERSON"."MASTER_PERSON_ID"(+)="F_GET_GP_CONTACT"("AGENCY_INTEREST"."MASTER_AI_ID
                  ") AND "PERSON"."INT_DOC_ID"(+)=0)
      20 - access("DSK_FRM"."INT_DOC_ID"="DSK_DOCUMENT_ATTRIBUTE"."INT_DOC_ID" AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_ATTRIBUTE_CODE"='PERMIT_NO' AND
                  "DSK_DOCUMENT_ATTRIBUTE"."DOC_TYPE_SPECIFIC_CODE"='PERSET')
      22 - access("AGENCY_INTEREST"."MASTER_AI_ID"="AGENCY_INTEREST_ADDRESS"."MASTER_AI_ID" AND
                  "AGENCY_INTEREST_ADDRESS"."INT_DOC_ID"=0)
      24 - access("PERSON"."MASTER_PERSON_ID"="PERSON_TELECOM"."MASTER_PERSON_ID"(+) AND
                  "PERSON_TELECOM"."TELECOM_TYPE_CODE"(+)='wp' AND
                  "PERSON"."INT_DOC_ID"="PERSON_TELECOM"."INT_DOC_ID"(+))
    Edited by: harryb on Apr 9, 2009 3:29 PM

  • Calculate transactions per second

    Dear Friends ,
    I am running Oracle 10g on an AIX server. I want to know how many transactions occur per minute (or per second) during peak hours on my production server. How can I find this out? Please give me some suggestions.

    An Oracle "transaction" isn't the same as a business "transaction". If you are reporting "transactions per second" to management (even if it is IT management), you had better be sure you identify which transactions you are counting.
    If I run a PL/SQL loop that does
      loop 1 to 1000
        insert into table ...
        commit;
      end loop;
    I get 1,000 "transactions" and 1,000 INSERT "executions" being reported by Oracle.
    On the other hand, if I run
      loop 1 to 1000
        insert into table ...
      end loop;
      commit;
    I get only 1 "transaction" and 1,000 INSERT "executions" being reported by Oracle.
    Next, if I run
      insert into table ... select ... 1000 rows ...
      commit;
    I get 1 "transaction" and 1 INSERT "execution".
    Any database application will have a mix of "transactions" of different sizes being executed by different users, application server processes, and clients.
    If you take the aggregate "transactions" count at face value, you are going to be sorely disappointed, and you will disappoint your managers!
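
    One common way to get an aggregate number (a sketch, not something from this thread) is to sample Oracle's cumulative "user commits" counter in V$SYSSTAT twice and divide the delta by the elapsed time. The Python below only does the arithmetic on two such samples; the SQL in the comment is the query you would run for each sample, and the sample values are invented for illustration.

```python
# A minimal sketch: estimate transactions per second by sampling Oracle's
# cumulative "user commits" counter at two points in time.  The SQL you
# would run for each sample is:
#
#   SELECT value FROM v$sysstat WHERE name = 'user commits';
#
# The numbers below are invented for illustration.

def transactions_per_second(commits_t0: int, commits_t1: int,
                            elapsed_seconds: float) -> float:
    """Delta of the cumulative commit counter over the sampling interval."""
    return (commits_t1 - commits_t0) / elapsed_seconds

# 12,000 additional commits observed over a 60-second window -> 200 TPS.
print(transactions_per_second(1_500_000, 1_512_000, 60))  # 200.0
```

    Bear in mind the caveat above: this counts commits, so one big batched transaction and one tiny OLTP transaction each count as 1.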

  • Router Switching Performance in Packets Per Second (PPS) : ISR 4431 and 4431

    Hi,
    In this document, I am able to find the routing performance for all routers except the ISR 4000 series:
    http://www.cisco.com/web/partners/downloads/765/tools/quickreference/routerperformance.pdf
    I would like to know the router switching performance in packets per second (PPS) and in Mbps for the ISR 4431 and 4431 routers.
    Fast/CEF switching: PPS and Mbps.
    Does anybody have documents or information about this?
    Regards,
    Nurul Kabir KHAN

    Disclaimer
    The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
    Liability Disclaimer
    In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
    Posting
    I've not been able to find anything beyond a bandwidth capacity rating, such as 500 Mbps upgradable to 1 Gbps for the 4431.
    I did find http://www.cisco.com/c/dam/en/us/products/collateral/routers/4000-series-integrated-services-routers-isr/enterprise-routing-portfolio-poster.pdf?mdfid=283967372
    The point of interest in the foregoing is the performance listings for the 800 series routers.  Assuming the bandwidth performance ratings use a similar methodology for all the routers, we can look at whitepapers, like the attached, and presume the 4000 series bandwidths are a total aggregate for typical traffic with most typical "WAN" features enabled.  I.e., presume 500/1,000 Mbps is the maximum recommended aggregate bandwidth usage with typical "WAN" traffic and typical "WAN" features.
    PS:
    Documents like http://www.cisco.com/web/partners/downloads/765/tools/quickreference/routerperformance.pdf can be very easily misunderstood when trying to predict real-world performance.  I suspect Cisco's latest bandwidth recommendations are trying to provide easy-to-understand values for sizing routers for typical usage.
    The attachment shows how feature usage and traffic content impact ISR performance, which is why the older document can so easily mislead.
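
    For back-of-the-envelope comparisons, a bandwidth rating converts to a PPS figure once you fix a packet size. This is a hedged sketch of the arithmetic, not a Cisco formula: vendors conventionally quote worst-case 64-byte packets, while real traffic mixes average far larger.

```python
# Convert an aggregate bandwidth rating (Mbps) to packets per second
# for a given packet size.  64-byte packets are the conventional
# worst case used in router PPS ratings.

def mbps_to_pps(mbps: float, packet_bytes: int = 64) -> float:
    bits_per_packet = packet_bytes * 8
    return mbps * 1_000_000 / bits_per_packet

# A 500 Mbps rating at worst-case 64-byte packets:
print(mbps_to_pps(500))        # 976562.5 pps
# The same 500 Mbps at a more typical 512-byte average packet:
print(mbps_to_pps(500, 512))   # 122070.3125 pps
```

    This is why a single PPS number without a stated packet size (and feature set) tells you very little.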

  • System.log coreservicesd[55]: *** process 55 exceeded 500 log message per second limit

    I have a number of logs that suggest there is something not working correctly.  This error is generated on start.
    coreservicesd[55]: *** process 55 exceeded 500 log message per second limit  -  remaining messages this second discarded ***
    Any ideas on what might be happening?
    Mar  4 10:53:59 TJBs-MacBook-Pro mds[39]: (Error) Server: ==== XPC handleXPCMessage XPC_ERROR_CONNECTION_INVALID
    Mar  4 10:53:59 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] ([0x0-0x15015].com.apple.AppleSpell[193]): Exited: Killed: 9
    Mar  4 10:53:59 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.mdworker.pool.0[508]): Exited: Terminated: 15
    Mar  4 10:53:59 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.quicklook.ui.helper[463]): Exited: Killed: 9
    Mar  4 10:53:59 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.talagent[138]): Exited: Killed: 9
    Mar  4 10:53:59 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] ([0x0-0x51051].com.apple.QuickTimePlayerX[464]): Exited: Killed: 9
    Mar  4 10:53:59 TJBs-MacBook-Pro loginwindow[42]: DEAD_PROCESS: 42 console
    Mar  4 10:54:00 TJBs-MacBook-Pro airportd[646]: _doAutoJoin: Already associated to “9868LT1”. Bailing on auto-join.
    Mar  4 10:54:00 TJBs-MacBook-Pro shutdown[649]: reboot by MASTER:
    Mar  4 10:54:12 localhost bootlog[0]: BOOT_TIME 1330883652 0
    Mar  4 10:54:24 localhost UserEventAgent[11]: starting CaptiveNetworkSupport as SystemEventAgent built May 25 2011 12:27:35
    Mar  4 10:54:14 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
    Mar  4 10:54:23 localhost com.apple.launchd[1] (com.apple.sandboxd): Unknown value for key POSIXSpawnType: Interactive
    Mar  4 10:54:24 localhost UserEventAgent[11]: WirelessAirPortDeviceNameCopy(): no BSD interface name found for object 12551
    Mar  4 10:54:24 localhost UserEventAgent[11]: CaptiveNetworkSupport:CaptiveSCCopyWiFiDevices:388 WiFi Device Name == NULL
    Mar  4 10:54:28 localhost com.apple.pfctl[32]: No ALTQ support in kernel
    Mar  4 10:54:28 localhost com.apple.pfctl[32]: ALTQ related functions disabled
    Mar  4 10:54:29 localhost com.apple.ucupdate.plist[25]: ucupdate: Checked 1 update, no match found.
    Mar  4 10:54:31 localhost mDNSResponder[35]: mDNSResponder mDNSResponder-320.14.0 (Nov 16 2011 01:16:56) starting OSXVers 11
    Mar  4 10:54:32 localhost mds[34]: (Normal) FMW: FMW 0 0
    Mar  4 10:54:34 localhost com.apple.usbmuxd[24]: usbmuxd-263 on Nov 14 2011 at 18:58:10, running 64 bit
    Mar  4 10:54:34 localhost airportd[65]: _processDLILEvent: en1 attached (down)
    Mar  4 10:54:36 localhost mds[34]: (/.Spotlight-V100/Store-V2/E6A82D50-3EA4-4011-B64F-87EB86ACF9E3)(Normal) IndexGeneral in openReverseStore:Shadowing reverse store on open
    Mar  4 10:54:37 localhost UserEventAgent[11]: CaptiveNetworkSupport:CreateInterfaceWatchList:2788 WiFi Devices Found.
    Mar  4 10:54:37 localhost UserEventAgent[11]: CaptiveNetworkSupport:CaptivePublishState:1211 en1 - PreProbe
    Mar  4 10:54:37: --- last message repeated 2 times ---
    Mar  4 10:54:37 localhost configd[14]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
    Mar  4 10:54:37 localhost configd[14]: DHCP en1: INIT-REBOOT transmit failed
    Mar  4 10:54:37 TJBs-MacBook-Pro configd[14]: setting hostname to "TJBs-MacBook-Pro.local"
    Mar  4 10:54:37 TJBs-MacBook-Pro configd[14]: network configuration changed.
    Mar  4 10:54:38 TJBs-MacBook-Pro UserEventAgent[11]: ServermgrdRegistration cannot load config data
    Mar  4 10:54:38 TJBs-MacBook-Pro systemkeychain[61]: done file: /var/run/systemkeychaincheck.done
    Mar  4 10:54:38 TJBs-MacBook-Pro UserEventAgent[11]: get_backup_share_points no AFP
    Mar  4 10:54:38 TJBs-MacBook-Pro configd[14]: network configuration changed.
    Mar  4 10:54:38: --- last message repeated 1 time ---
    Mar  4 10:54:38 TJBs-MacBook-Pro mDNSResponder[35]: D2D_IPC: Loaded
    Mar  4 10:54:38 TJBs-MacBook-Pro mDNSResponder[35]: D2DInitialize succeeded
    Mar  4 10:54:38 TJBs-MacBook-Pro loginwindow[37]: Login Window Application Started
    Mar  4 10:54:38 TJBs-MacBook-Pro rpcsvchost[86]: sandbox_init: com.apple.msrpc.netlogon.sb succeeded
    Mar  4 10:54:39 TJBs-MacBook-Pro netbiosd[90]: Unable to start NetBIOS name service:
    Mar  4 10:54:39 TJBs-MacBook-Pro airportd[65]: _doAutoJoin: Already associated to “9868LT1”. Bailing on auto-join.
    Mar  4 10:54:40 TJBs-MacBook-Pro loginwindow[37]: **DMPROXY** Found `/System/Library/CoreServices/DMProxy'.
    Mar  4 10:54:40 TJBs-MacBook-Pro com.apple.launchctl.LoginWindow[99]: com.apple.findmymacmessenger: Already loaded
    Mar  4 10:54:40 TJBs-MacBook-Pro airportd[65]: _doAutoJoin: Already associated to “9868LT1”. Bailing on auto-join.
    Mar  4 10:54:40 TJBs-MacBook-Pro loginwindow[37]: Login Window Started Security Agent
    Mar  4 10:54:40 TJBs-MacBook-Pro SecurityAgent[105]: Echo enabled
    Mar  4 10:54:41 TJBs-MacBook-Pro WindowServer[85]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Mar  4 10:54:41 TJBs-MacBook-Pro configd[14]: network configuration changed.
    Mar  4 10:54:41 TJBs-MacBook-Pro UserEventAgent[11]: CaptiveNetworkSupport:CaptivePublishState:1211 en1 - Probe
    Mar  4 10:54:41 TJBs-MacBook-Pro UserEventAgent[11]: CaptiveNetworkSupport:CNSPreferences:60 Creating new preferences
    Mar  4 10:54:41 TJBs-MacBook-Pro UserEventAgent[11]: CaptiveNetworkSupport:CaptiveStartDetect:2343 Bypassing probe on 9868LT1 because it is protected and not on the exception list
    Mar  4 10:54:41 TJBs-MacBook-Pro UserEventAgent[11]: CaptiveNetworkSupport:CaptivePublishState:1211 en1 - Unknown
    Mar  4 10:54:41 TJBs-MacBook-Pro configd[14]: network configuration changed.
    Mar  4 10:54:47 TJBs-MacBook-Pro SecurityAgent[105]: User info context values set for MASTER
    Mar  4 10:54:49 TJBs-MacBook-Pro SecurityAgent[105]: Login Window login proceeding
    Mar  4 10:54:49 TJBs-MacBook-Pro loginwindow[37]: Login Window - Returned from Security Agent
    Mar  4 10:54:49 TJBs-MacBook-Pro loginwindow[37]: USER_PROCESS: 37 console
    Mar  4 10:54:49 TJBs-MacBook-Pro airportd[65]: _doAutoJoin: Already associated to “9868LT1”. Bailing on auto-join.
    Mar  4 10:54:49 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.ReportCrash): Falling back to default Mach exception handler. Could not find: com.apple.ReportCrash.Self
    Mar  4 10:54:49 TJBs-MacBook-Pro com.apple.launchctl.Aqua[130]: load: option requires an argument -- D
    Mar  4 10:54:49 TJBs-MacBook-Pro com.apple.launchctl.Aqua[130]: usage: launchctl load [-wF] [-D <user|local|network|system|all>] paths...
    Mar  4 10:54:49 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.launchctl.Aqua[130]): Exited with code: 1
    Mar  4 10:54:49 TJBs-MacBook-Pro UserEventAgent[11]: CaptiveNetworkSupport:CNSServerRegisterUserAgent:187 new user agent port: 14347
    Mar  4 10:54:50 TJBs-MacBook-Pro talagent[137]: CoreDockMinimizeItems failed (-4959)
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: 2012-03-04 10:54:50.423 com.apple.dock.extra[147:1707] Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: 2012-03-04 10:54:50.424 com.apple.dock.extra[147:1707] Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: 2012-03-04 10:54:50.425 com.apple.dock.extra[147:1707] Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro com.apple.dock.extra[147]: 2012-03-04 10:54:50.426 com.apple.dock.extra[147:1707] Could not connect the action buttonPressed: to target of class NSApplication
    Mar  4 10:54:50 TJBs-MacBook-Pro coreservicesd[55]: *** process 55 exceeded 500 log message per second limit  -  remaining messages this second discarded ***
    Mar  4 10:54:51 TJBs-MacBook-Pro Finder[139]: kCGErrorIllegalArgument: _CGSFindSharedWindow: WID 28
    Mar  4 10:54:51 TJBs-MacBook-Pro Finder[139]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Mar  4 10:54:51 TJBs-MacBook-Pro Finder[139]: kCGErrorIllegalArgument: CGSRemoveSurface: Invalid window 0x1c
    Mar  4 10:54:51 TJBs-MacBook-Pro Dock[136]: kCGErrorIllegalArgument: CGSSetWindowTags: Invalid window 0x1c
    Mar  4 10:54:51 TJBs-MacBook-Pro Dock[136]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Mar  4 10:54:51 TJBs-MacBook-Pro Dock[136]: kCGErrorIllegalArgument: CGSClearWindowTags: Invalid window 0x1c
    Mar  4 10:54:51 TJBs-MacBook-Pro Dock[136]: kCGErrorIllegalArgument: CGSOrderWindowList
    Mar  4 10:54:51 TJBs-MacBook-Pro talagent[137]: kCGErrorIllegalArgument: CGSGetWindowPresenter
    Mar  4 10:54:51 TJBs-MacBook-Pro talagent[137]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
    Mar  4 10:54:51 TJBs-MacBook-Pro talagent[137]: kCGErrorIllegalArgument: CGSOrderWindowListWithGroups: invalid window ID (28)
    Mar  4 10:54:51 TJBs-MacBook-Pro talagent[137]: kCGErrorIllegalArgument: CGSOrderWindowList: NULL list pointer or empty list
    Mar  4 10:54:51 TJBs-MacBook-Pro talagent[137]: CGSConnectionRelinquishWindowRights(cid, newWindowNumber, reservedRights): CGError 1001 on line 619
    Mar  4 10:54:53 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] ([email protected][161]): Exited with code: 1
    Mar  4 10:54:54 TJBs-MacBook-Pro com.apple.launchd.peruser.89[118] (com.apple.mdworker.pool.0): Throttling respawn: Will start in 4 seconds
    Mar  4 10:54:55 TJBs-MacBook-Pro com.apple.launchd.peruser.501[119] (com.apple.mdworker.pool.0): Throttling respawn: Will start in 3 seconds
    Mar  4 10:54:53 TJBs-MacBook-Pro ntpd[21]: proto: precision = 1.000 usec
    Mar  4 10:55:06 TJBs-MacBook-Pro coreservicesd[55]: *** process 55 exceeded 500 log message per second limit  -  remaining messages this second discarded ***
    Mar  4 11:00:28 TJBs-MacBook-Pro firefox[219]: -_scrollPhase is deprecated for NSScrollWheel. Please use -momentumPhase.
    Mar  4 11:00:28 TJBs-MacBook-Pro firefox[219]: -_continuousScroll is deprecated for NSScrollWheel. Please use -hasPreciseScrollingDeltas.
    Mar  4 11:00:28 TJBs-MacBook-Pro firefox[219]: -deviceDeltaY is deprecated for NSScrollWheel. Please use -scrollingDeltaY.
    Mar  4 11:00:28 TJBs-MacBook-Pro firefox[219]: -deviceDeltaX is deprecated for NSScrollWheel. Please use -scrollingDeltaX.

    I was able to identify the cause. Wrote in this thread the solution:
    http://discussions.apple.com/message.jspa?messageID=13160082#13160082

  • Horrible video skip / lag problem - once per second in all apps!

     I built a new system last month (my first AMD) and I am having a really aggravating problem. In all games and all video playback I get an annoying skip once per second, every second. It affects sound during gameplay but not during movie or mp3 playback. It even happens with the visualization mode in Windows Media Player.
    My system is as follows: MSI K8N Neo4-F, A64 3200+ venice core, MSI 6800GT 256MB PCI-E, two sticks of Corsair valueselect DDR400 512MB each, 500 watt PS, 160GB 7200 SATA HDD. Most recent NVIDIA drivers for everything. WinXP Pro with SP2 and all updates, DX9C. Nothing overclocked, all settings standard.
    I have tried the following solutions:
    1) BIOS upgrades, started with 1.4, installed 1.5, MSI tech support gave me 1.6b2 and I installed that. No luck with any of them.
    2) Memory, installed per MSI directions, but I've tried all legal combinations, including one stick at a time. No change.
    3) full format and reinstall of WinXP. No luck.
    4) Switching between WinXP IDE drivers and NVidia drivers, with and without RAID drivers, No luck.
    5) Removal of 6800GT PCI-E card and replacing with Ancient 8MB PCI Permedia2 video card. Problem still persists.
    6) Disable onboard sound and LAN. No luck.
    7) Running Fedora Core 4 on a second partition. Installed the Nvidia video drivers, tried some games. THIS WORKS! No hitch, no skip, no nothing. Framerates are noticeably slower but very stable. In WinXP I saw framerates bounce all over the place, from 230 FPS down to about 70 with one game. That same game on Linux ran smoothly at about 166 FPS with only occasional slight drops. The big FPS drops in Windows usually came right after one of the skips, but didn't occur after every skip.
    Right now I'm stumped. Linux uses totally different drivers for sound, LAN and SATA support. Some of those drivers don't fully use the Nforce4 chipset's features, maybe that's part of the difference.

    Thanks TireSmoke:
    I had found that sticky, but I took your advice and went through it in detail last night.  Lots of great info, fixes, tweaks and tools; sadly, none of them fixed my problem.  The lag problem most people are reporting is not really like the weird problem I am having.  I have tried the recommended fixes with absolutely no change in my system's behavior.
    I am beginning to suspect a faulty motherboard component.
    Russ_XP:
    I think you are correct about fast writes.  I googled the heck out of that last night and couldn't find any reference to enabling or disabling fast writes on PCI-E.
    The drive is SATA-1.  The Neo4-F is not SATA-2 enabled (there is a hack for it though).  From memory I think it's a Western Digital WD1600-something, 7200 RPM drive.  I've tried it on both SATA buses and tried disabling the unused bus in the BIOS.
    I'm pretty sure I can dig up an old PATA drive somewhere and give that a try.
    Gpalmer:
    True enough, and I don't have these problems under Fedora.  Sadly this is a cross-platform game development box, I need both XP and Fedora working.
    Black_God:
    Nope, this is a clean install.  Although I wonder, could any of the built in XP update and security tools be causing this?  I have disabled Windows firewall and virus protection monitoring.

  • Downloads per Second slower than normal

    Hi, I've moved from the ADSL Max profile to ADSL2+, however my downloads per second are averaging around 2-400kbs a sec; my download speed is just over 7mb, and my downloads have previously been around 850kbs a sec.
    Any reason why this is?
    I don't do many heavy downloads; most of my connection is used for either gaming or downloading the odd app, or on the PlayStation, and it's really slow at downloading anything.
    If you want to say thanks for a helpful answer, please click on the Ratings star on the left-hand side. If the reply answers your question, then please mark it as an Accepted Solution.
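
    A frequent source of confusion in threads like this is kilobits versus kilobytes: line speeds are quoted in kilobits per second (Kbps), while most download meters show kilobytes per second (KB/s), a factor of 8 apart. A quick sketch of the conversion:

```python
# Line speeds (Kbps) are kilobits per second; download meters usually
# show kilobytes per second (KB/s).  Divide by 8 to compare like with like.

def kbps_to_kB_per_s(kbps: float) -> float:
    return kbps / 8

# A 7150 Kbps profile tops out just under 900 KB/s:
print(kbps_to_kB_per_s(7150))  # 893.75
# An 850 KB/s download therefore implies roughly a 6800 Kbps line:
print(850 * 8)                 # 6800
```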

    Sorry, my computer crashed. Here it is:
    FAQ
    Test1 comprises two tests.
    1. Best Effort Test: provides background information.
       Download Speed: 4339 Kbps (0 Kbps minimum, 7150 Kbps max achievable)
       Download speed achieved during the test was - 4339 Kbps
       For your connection, the acceptable range of speeds is 2000-7150 Kbps.
       Additional Information:
       Your DSL Connection Rate: 7192 Kbps (DOWN-STREAM), 1060 Kbps (UP-STREAM)
       IP Profile for your line is - 6345 Kbps
    2. Upstream Test: provides background information.
       Upload Speed: 821 Kbps (0 Kbps minimum, 1060 Kbps max achievable)
       Upload speed achieved during the test was - 821 Kbps
       Additional Information:
       Upstream Rate IP profile on your line is - 1060 Kbps
    We were unable to identify any performance problem with your service at this time.
    It is possible that any problem you are currently experiencing, or have previously experienced, may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
    If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.

  • Acquire, display, and write data at 50 samples per second

    I have a VI running on a PXI which samples data using two 4220s (all 4 channels) and one 6031 (only 6 channels).  I am acquiring data at 100 samples per second, but only need to write the data out at 50 samples per second.  The data needs to be displayed at a minimum of 10 samples per second.  The problem is that the VI cannot get 50 samples per second written to the file; it writes about 20 to 30 samples per second.
    I don't know if displaying the data is what is holding up the writing at 50 samples per second, or if it is something else in the VI.  I have moved the writing of the data outside the while loop, but this did not help enough to reach 50 samples/sec.
    Would it be better to change the waveform data types to dynamic waveforms?  Would this increase the speed of operations?
    Galen
    Attachments:
    ATM_FrictionTests_v1.2.vi ‏375 KB

    Galen,
    Looking at your VI, I would recommend writing to your file in a different way.  The function you are using actually opens, writes to, and then closes the file every time you call it.  This greatly increases the amount of resources being used.  Take a look at the Cont Acq to Spreadsheet File.vi example and note that the file is only opened and closed once.  The data is written to the file during execution of the program, and the file is closed when the app is done running.  The example uses Traditional DAQ, but you should be able to do something similar with DAQmx.  Try this and let me know if it helps.
    Regards,
    LA
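
    The advice above, open the file once, write inside the acquisition loop, close once at the end, rather than letting each write re-open and re-close the file, can be sketched in Python (the file name and sample values below are invented for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "samples.csv")  # hypothetical log file
samples = [0.1 * i for i in range(50)]  # stand-in for one second of 50 S/s data

# Slow pattern (what a per-call "write to spreadsheet" convenience does):
#   for s in samples:
#       with open(path, "a") as f:   # open + close on every sample
#           f.write(f"{s:.3f}\n")

# Faster pattern: one open, many writes, one close.
with open(path, "w") as f:
    for s in samples:
        f.write(f"{s:.3f}\n")

with open(path) as f:
    print(len(f.readlines()))  # 50
```

    The same principle applies in LabVIEW: keep the file refnum open across loop iterations instead of calling a convenience VI that reopens the file each time.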

  • Newbie trying to understand the frame/fields per second concept on video

    Do video camera shutter speeds (1/60 sec) reflect a single interlaced field, or a full frame composed of two interlaced fields?
    I suspect that a shutter setting of 1/60 sec means it's not a full frame of video but just an odd- or even-lined interlaced field. Meaning that if I want to shoot 30 fps, I need to keep my shutter setting at 1/60 sec.
    But if that's really the case, then what am I exactly shooting per second if the actual NTSC frame rate is 29.97 fps? I mean, the first 29 frames can easily be divided into two fields each, but what about the remaining .97 of a frame? How can you divide that into two fields of interlaced video lines?
    Forgive my ignorance, but books have a bad habit of not answering back when you don't understand something they say.
    iMac Intel Duo-Core; Intel Mac mini single-core   Mac OS X (10.4.6)  

    The shutter speed is not relevant. It can be either field-based or progressive. The frame rate is not dependent on the shutter speed.
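
    On the 29.97 question above: NTSC runs at exactly 30000/1001 frames per second, and interlaced NTSC carries exactly two fields per frame, so the field rate is 60000/1001, about 59.94. There is never a fractional field; the ".97" is just the decimal rounding of an exact ratio. A quick sketch:

```python
from fractions import Fraction

frame_rate = Fraction(30000, 1001)  # exact NTSC frame rate, ~29.97 fps
field_rate = 2 * frame_rate         # two interlaced fields per frame, ~59.94

print(float(frame_rate))        # 29.97002997002997
print(float(field_rate))        # 59.94005994005994
print(field_rate / frame_rate)  # 2 -- always exactly two fields per frame
```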
