BIND 9 - max requests per second

I use BIND 9 on Solaris 9. Hardware: SunFire V280R, 1.2 GHz CPU, 2x73 GB HDD, 4x FE interfaces.
Now I see on our firewall that there are 3154 session requests to the DNS server, and BIND 9's CPU usage is 98%, so 3154 seems very high.
I don't know how many requests my DNS server can handle.
Please help me.

Guys, any update on this?

Similar Messages

  • More flexibility in limiting requests per second or minute

    We've read the docs on limiting the requests-per-second (max-rps), but we would actually like to set the threshold at less than one request per second. For example, we'd like to set the maximum requests to something like once every 5 seconds. But values like 0.2 for max-rps are not accepted. So the question is:
    Is there a way to set the maximum frequency of requests to something like once every 5 or 10 seconds?
    Is there an add-in that does that or could we create an add-in to do that?
    It looks like the server is NOT open source, so we cannot change the current code to address this, but certainly let me know if I'm wrong about that.
    In case it's helpful as background, the reason we want a lower threshold is that we only intend to limit access for HTML or JSP pages (not gif, jpg, js, css, etc.). That's because we don't want users to get half of a page, like the HTML but no images or CSS, which would look like an ugly error; we want them to get the HTML complete with all the embedded files or get an error page. We're really only trying to block bots, which might make requests every second or slightly less often, without blocking any human users. If there are other general suggestions for how to meet our goal, I'd love to hear them.

    No, the lowest max limit is 1 rps. The request limiting isn't really meant for what you're describing; it is for limiting high request rates. At less than 1/sec such detection would produce false positives with real users anyway, so it wouldn't be a great way of distinguishing users from bots. I'm not sure it would prove very satisfactory.
    You could create your own NSAPI plugin (see docs on NSAPI usage) to implement your desired logic. I'd probably look into long term counters instead, but I don't really know the details of the exact problem you're trying to address.

  • Requests per second

    Hi all,
    I would like to get or calculate how many requests per second (ReqsPerSec) my WS is doing (averaged over a period).
    I looked at the Admin console monitoring data and at perfdump output.
    From the console I get AvgReqs over the last 15 minutes. Is it correct to calculate ReqsPerSec = AvgReqs / 900?
    From perfdump I get average RequestProcessingTime. Is it correct to calculate ReqsPerSec = 1 / RequestProcessingTime ?
    Which is the most correct/accurate way to get it?
    MTIA

    If you really need reqs/second then you can do so by:
    a) at time t1, collect perf-dump and get total requests served (say r1)
    b) at time t2, collect perf-dump and get total requests served (say r2)
    So avg rq/sec = (r2 - r1)/(t2 - t1)
    So you can yourself collect rq/sec based on last x minutes (or since the server start).
    Just curious, what do you want to do with rq/sec?
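    A tiny worked example of that two-sample calculation (the counts and timestamps are made-up numbers, not real perfdump output):

    public class ReqRate {
        public static void main(String[] args) {
            long r1 = 120_000, r2 = 138_000;   // total requests served at t1 and t2
            long t1 = 1_000, t2 = 1_900;       // sample times, in seconds
            double reqsPerSec = (double) (r2 - r1) / (t2 - t1);
            System.out.printf("avg req/sec over the window: %.1f%n", reqsPerSec); // 20.0
        }
    }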

  • WRT120N issue: sending hundreds of connection requests per second

    I have a WRT120N, and for about a year it worked perfectly, but then my ISP started cutting off my internet because when the router tried to connect it sent hundreds of connection requests (or so they said). Even after I upgraded the firmware to 1.0.04 I still had the same problem, so I got angry and started using it as a switch. Now I see that there is a new firmware out, 1.0.05; I hope the same problem will not persist. If someone can help me with some tips, please say so. Thank you and happy holidays, Ewald

    If it hasn't cleared after a reset, take a look at your systems to see if you have a trojan or some backdoor virus that is trying to open connections in the background.
    In other words, do a complete scan. And not with that free crap either, they don't pick up everything.
    Hackers call it a "Zombie": a system that has been compromised and can wreak havoc on your network. It can cause denial-of-service attacks, send out email from your email account (spam and phishing emails), all kinds of stuff. The important part is that it creates enormous amounts of sessions from your machine.
    Just an idea

  • Control the transaction per second ( TPS ) to down stream system through Oracle SOA

    Hi,
      As part of our business flow, BPEL calls a downstream system via OSB.
    However, the downstream system can accept only 5 requests per second.
    1. I tried implementing a DB poller to invoke the downstream system each second with 5 records.
    The flow is as follows:
    DB --> Poller --> ParentBPEL --> Down Stream System
    In this case the poller is able to poll 5 records from the DB and push them to the "DownStreamSystem" via the "ParentBPEL" process.
    But the constraint here is that the "DownStreamSystem" can strictly process only 5 requests per second.
    If the "DownStreamSystem" has processed 4 requests and is still processing 1 request in the first second, then in the next second Fusion should push only 4 requests, since the number of records in the processing state should be at most 5 at a time.
    Please help me solve this scenario, or suggest any other alternative to achieve the solution.
    Thanks in advance ...

    Maybe throttling in the OSB is of some help?
    Throttling
    Since you really have hard requirements on the "currently processed messages", you could add a Coherence cache and use it as a sort of lookup to see which messages are being processed at which time. (This is just a guess, since I don't have any experience with the situation you're describing.)
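    As a plain-Java illustration of the "at most five requests in the processing state" constraint (not OSB/BPEL code; DownStreamSystem below is a made-up stand-in interface), a semaphore with five permits can gate the downstream calls:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class DownstreamGate {
        interface DownStreamSystem { void process(String record); }   // hypothetical target

        private final Semaphore inFlight = new Semaphore(5);          // 5 permits = 5 records in processing
        private final ExecutorService pool = Executors.newFixedThreadPool(5);
        private final DownStreamSystem target;

        DownstreamGate(DownStreamSystem target) { this.target = target; }

        // Blocks until a slot frees up, so no more than 5 records are ever in flight.
        void submit(String record) throws InterruptedException {
            inFlight.acquire();
            pool.submit(() -> {
                try {
                    target.process(record);
                } finally {
                    inFlight.release();   // free the slot when the downstream call completes
                }
            });
        }
    }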

  • Feature request: Frames Per Second info for clips / Warning when exporting

    I know that this may not be the right place to post, but I think many of us users would agree that this feature would be useful in iMovie, especially for new users who are not even familiar with terms like "frame rate".
    Let's assume you import a video from your digital camera. You are not an expert in this field, so you edit the video in iMovie. Then you export it using QuickTime. The final video is jerky... and this can be due to incorrect frame rate settings.
    It would be nice if iMovie provided some warning if you are trying to export video at a different frame rate setting than your original video source. Just a pop-up saying "Your video was recorded at 29.97 frames per second (FPS) and you are trying to export it at 25 FPS. This can cause some playback issues."
    Maybe even suggest the correct frame rate setting for the export. Apple's software is always trying to make things easier. I think regular users do not even know about "frame rates", so it might be a good idea to warn them.
    Also, how can you tell what FPS your sources are in iMovie? There is no way to tell what FPS a clip was recorded in. To find out, you have to go back into iPhoto, drag the clip into QuickTime, and only there can you select "Get Info" and find the recorded FPS for the specific clip.
    This seems like a lot of trouble to go through just to find out a very basic piece of information about the clip.
    I would like to see some functionality like this added to iMovie. I think many of us users would welcome this feature.
    What do you guys think?

    Click iMovie/Provide Feedback. That will send your enhancement request to the developers.

  • When I open EMC on a 2010 CAS server I get "the system load quota of 1000 requests per 2 seconds has been exceeded"

    When I open EMC on a 2010 CAS server I get "the system load quota of 1000 requests per 2 seconds has been exceeded"
    and it won't load.

    Close EMC and Powershell and run iisreset.

  • Looking for average and max redo size generated per second

    Please,
    I'm implementing Dataguard physical standby on Oracle 10g R2 on Windows 2003.
    My issue now is how I can get the redo size generated per second, to compare with the actual bandwidth between the primary and standby databases.
    I know I can use the Database Console, but it isn't installed on the production database.
    Is there any link, script, or view that I could use to resolve this issue? Thanks

    It depends on the statements and the datatypes that are inserted, updated, or deleted.
    select b.name,a.value from v$sesstat a, v$statname b where a.statistic#=b.statistic# and b.name = 'redo size' and a.sid=<your SID>;
    Courtesy of Daljit
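    For the system-wide rate, a rough sketch is to sample the 'redo size' statistic from v$sysstat twice and divide by the elapsed time (the connection details below are placeholders, not real values):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RedoRate {
        private static long redoBytes(Connection con) throws Exception {
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "select value from v$sysstat where name = 'redo size'")) {
                rs.next();
                return rs.getLong(1);            // cumulative redo bytes since instance startup
            }
        }

        public static void main(String[] args) throws Exception {
            // Placeholder connection string and credentials
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@primaryhost:1521:PROD", "system", "password");
            long r1 = redoBytes(con);
            Thread.sleep(60_000);                // sample interval: 60 seconds
            long r2 = redoBytes(con);
            System.out.printf("avg redo generated: %.1f KB/s%n", (r2 - r1) / 60.0 / 1024);
            con.close();
        }
    }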

  • Downloads per Second slower than normal

    Hi, I've moved from the ADSL Max profile to ADSL 2+; however my downloads average around 2-400 KB/s, my download speed is just over 7 Mb, and my downloads used to be around 850 KB/s.
    Any reason why this is?
    I don't do many heavy downloads; most of my connection is used either for gaming or for downloading some apps on the PlayStation, and it's really slow at downloading anything.

    Sorry, my computer crashed. Here it is:
    FAQ
    Test1 comprises two tests:
    1. Best Effort Test:  -provides background information.
    Download Speed: 4339 Kbps (0 Kbps - 7150 Kbps max achievable)
     Download speed achieved during the test was - 4339 Kbps
     For your connection, the acceptable range of speeds is 2000-7150 Kbps.
     Additional Information:
     Your DSL Connection Rate :7192 Kbps(DOWN-STREAM), 1060 Kbps(UP-STREAM)
     IP Profile for your line is - 6345 Kbps
    2. Upstream Test:  -provides background information.
    Upload Speed: 821 Kbps (0 Kbps - 1060 Kbps max achievable)
     Upload speed achieved during the test was - 821 Kbps
     Additional Information:
     Upstream Rate IP profile on your line is - 1060 Kbps
    We were unable to identify any performance problem with your service at this time.
    It is possible that any problem you are currently experiencing, or had previously experienced, may have been caused by traffic congestion on the Internet or by the server you were accessing responding slowly.
    If you continue to encounter a problem with a specific server, please contact the administrator of that server in the first instance.

  • 10,000 Records Per Second (In EJB 3.0)

    hi all,
    I have some mission-critical tasks in my project; is it possible to persist 10,000 records per second?
    1. AS - JBoss Application Server 4.0.4GA
    2. Database - Oracle 10g 10.2.0.1
    3. EJB - 3.0 Framework
    4. OS - SunOS 5.10
    5. Server - Memory: 16G phys mem, 31G swap, 16 CPU
    I know that I need performance.
    Here are my performance configurations:
    1. JVM config in JBoss
    JAVA_OPTS="-server -Xmx3168m -Xms2144m -Xmn1g -Xss256k -d64 -XX:PermSize=128m -XX:MaxPermSize=256m
       -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
        -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
        -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 -XX:+AggressiveOpts
        -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution"
    2. I also configured my database.xml file:
    <?xml version="1.0" encoding="UTF-8"?>
    <datasources>
      <xa-datasource>
        <jndi-name>XAOracleDS</jndi-name>
        <track-connection-by-tx/>
        <isSameRM-override-value>false</isSameRM-override-value>
        <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class>
        <xa-datasource-property name="URL">jdbc:oracle:thin:@192.168.9.136:1521:STR</xa-datasource-property>
        <xa-datasource-property name="User">SRVPROV</xa-datasource-property>
        <xa-datasource-property name="Password">SRVPROV</xa-datasource-property>
        <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
        <min-pool-size>50</min-pool-size>
        <max-pool-size>200</max-pool-size>    
        <metadata>
             <type-mapping>Oracle9i</type-mapping>
          </metadata>
      </xa-datasource>
      <mbean code="org.jboss.resource.adapter.jdbc.vendor.OracleXAExceptionFormatter"
             name="jboss.jca:service=OracleXAExceptionFormatter">
        <depends optional-attribute-name="TransactionManagerService">jboss:service=TransactionManager</depends>
      </mbean>
    </datasources>
    3. I also have a simple Stateless Session Bean:
    @Stateless
    @Remote(UsageFasade.class)
    public class UsageFasadeBean implements UsageFasade {
         @PersistenceContext(unitName = "CustomerCareOracle")
         private EntityManager oracleManager;
         @TransactionAttribute(TransactionAttributeType.REQUIRED)
         public long createUsage(UsageObject usageObject, UserContext context)
                   throws UserManagerException, CCareException {
          try {
               oracleManager
                         .createNativeQuery("INSERT INTO USAGE "
                                   + " (ID, SESSION_ID, SUBSCRIBER_ID, RECDATE, STARTDATE, APPLIEDVERSION_ID, CHARGINGPROFILE_ID, TOTALTIME, TOTALUNITS, IDENTIFIERTYPE_ID, IDENTIFIER, PARTNO, CALLTYPE_ID, USAGETYPE, APARTY, BPARTY, CPARTY, IMEI, SPECIFICCALLTYPE, APN, SOURCELOCATION, SMSCADDRESS, MSC_ID, ENDREASON, USAGEORIGIN, BILL_ID, CONTRACT_ID) "
                                   + " VALUES(SEQ_USAGE_ID.NEXTVAL, NULL, NULL, SYSDATE, SYSDATE, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL) ")
                         .executeUpdate();   // execute the insert; without this the statement never runs
               return 1;
          } catch (Exception e) {
               // exception handling was omitted in the original post
               return 0;
          }
     }
    }
    4. On the client side I have 200 threads, each of which calls this method 50 times.
    My result is that I can persist 10,000 records in 20 seconds without Hibernate; with Hibernate I got a worse result :(
    I also hear that it is a good idea to use a JDBC 3.0 driver for performance,
    so I downloaded the newest Oracle JDBC jar file from the Oracle site:
    http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html
    Is this jar file a JDBC 3.0 driver?
    Is there any Hibernate performance configuration?
    Is there any more performance tuning for JBoss or EJB with entity beans?
    Can anybody help me? Or is there any doc which can help me?
    Regards,
    Paata,

    What makes you think that your database, just the database (with the box that it is on) can handle that rate?
    What makes you think that your network can handle that?
    While this is going on, is this the ONLY traffic that will be on the network?
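    For comparison, a minimal plain-JDBC batching sketch of the kind usually needed for rates like this (the connection details come from the datasource config above; the shortened column list and the batch size are assumptions, not part of the original post):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UsageBatchInsert {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@192.168.9.136:1521:STR", "SRVPROV", "SRVPROV");
            con.setAutoCommit(false);
            // Column list shortened for readability; the real table has many more columns.
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO USAGE (ID, RECDATE, STARTDATE) "
                    + "VALUES (SEQ_USAGE_ID.NEXTVAL, SYSDATE, SYSDATE)");
            int batchSize = 500;                 // assumed batch size, tune for your setup
            for (int i = 1; i <= 10_000; i++) {
                ps.addBatch();
                if (i % batchSize == 0) {
                    ps.executeBatch();           // one round trip for 500 rows
                    con.commit();
                }
            }
            ps.executeBatch();                   // flush any remainder
            con.commit();
            ps.close();
            con.close();
        }
    }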

  • 1,000,000 updates per second?

    How could you configure a Coherence cluster to handle processing a million stock quotes per second? The datafeed could be configured as a single app spewing out all 1,000,000/sec, or it could be many apps producing proportionately fewer ticks/sec, but in any case it's going to total a million/sec. Fractions of the feed spread among multiple physical servers sounds smartest. The quote Map.Entry would probably have a Key of String (or char[] if that's more efficient - I know the max length). The Value would be a price and a size, so maybe just those two elements byte[]{Float,Integer} or a Java object with Float and Integer member variables. I'd want to trigger actions based on market conditions when the planets align just right, so I'm not simply ignoring these values or pub/sub'ing them out to client apps; I'm evaluating many of them simultaneously and using them as event triggers. Is something like that remotely possible? On how much hardware?
    Thanks,
    Andrew

    Andrew,
    Using partitioning, Coherence can handle 1 million updates per second, but the big question is how many updates per second do you need on the hottest instrument at the hottest time?
    The other question is related to "the planets lining up", because that may imply a global view of the market, which becomes more difficult in a partitioned system.
    To provide a high rate of change to data in a partitioned system, the data providers (those with a large amount of data or a high rate of change) should be in the cluster (not coming in over *Extend) to eliminate one hop. To avoid blocking on the tick update from the data provider, it should locally enqueue the update. The queue servicer (a separate thread) should either coalesce whatever ticks are in the queue into a single putAll(), or if every tick needs to be recorded (i.e. all three ticks in the queue like "change to 3.5", "change to 3.55", "change to 3.6" have to be published, instead of just the latest "change to 3.6") then it would batch up everything in the queue until it hits an item that it already has in its batch, and then do a putAll().
    The use of that async publishing mode is what allows for the much higher throughput, particularly when a data provider is producing a huge number of ticks in a given period of time. You can make it even smoother (e.g. avoid outliers caused by some servers being slower) by having more local queues+services (partitioned by Coherence partition, or at the extreme by instrument). You can determine the Coherence partition using the KeyPartitioningStrategy returned from the PartitionedService for the ticks cache.
    Peace,
    Cameron Purdy | Oracle Coherence
    http://coherence.oracle.com/
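    A rough sketch of the local-queue / coalescing putAll() pattern described above, in the "latest tick per symbol wins" variant (the cache name and the tick representation are made up, not from the reply):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import java.util.AbstractMap.SimpleImmutableEntry;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TickPublisher implements Runnable {
        private final BlockingQueue<Map.Entry<String, double[]>> queue = new LinkedBlockingQueue<>();
        private final NamedCache ticks = CacheFactory.getCache("ticks");   // hypothetical cache name

        // Called by the data-provider thread: never blocks on the cluster, just enqueues locally.
        public void onTick(String symbol, double price, double size) {
            queue.add(new SimpleImmutableEntry<>(symbol, new double[] { price, size }));
        }

        // Queue-servicer thread: drain whatever is queued, keep only the latest tick
        // per symbol, and publish the whole batch with a single putAll().
        public void run() {
            try {
                while (true) {
                    Map<String, double[]> batch = new HashMap<>();
                    Map.Entry<String, double[]> first = queue.take();   // wait for at least one tick
                    batch.put(first.getKey(), first.getValue());
                    Map.Entry<String, double[]> next;
                    while ((next = queue.poll()) != null) {
                        batch.put(next.getKey(), next.getValue());      // coalesce: latest wins
                    }
                    ticks.putAll(batch);                                // one network batch per drain
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();                     // stop servicing on interrupt
            }
        }
    }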

  • Pages Per Second, User Modes

    What is being recorded in the Pages per second metrics, is it:
    1) The DOWNLOAD and (emulated) RENDERING of the page with its attributes/objects (pages, images, frames etc)
    OR
    2) Just the DOWNLOAD of the page with its attributes/objects.
    Does this change between User Modes, i.e. Thick and Thin Client?

    Hi Paul,
    A page is the eTester representation of what would be a web page, in terms of the navigations and actions that are performed in the web browser after the document is completely rendered and before the last transition.
    Open the navigation editor and take a look at the Navigations tree. There you can see how the pages are divided. Each page will have between zero and multiple navigations. Pages with zero navigations don't count; these pages won't even be shown in the report. If the page has frames, it will show more than one navigation. If the web application uses Ajax or a similar technology with HTTP transactions, or if the page uses Java applets, Flash objects, or ActiveX controls that make HTTP transactions and the proxy recorder was on, then the page will most likely show more than one navigation.
    eLoad will request all the navigations contained in any given page of any script in your load test scenario; if the web server responds successfully for all the navigations, this is counted as one page received. eLoad will be requesting multiple pages from the different scripts that exist in the submitted scenario, then it will count how many of those pages are received in an interval of time and convert the interval so the figure can be displayed per second.
    The pages received per second is the average of how many pages, with all the navigations contained in them, were successfully obtained from the web servers every second.
    The pages per second is the same statistic regardless of whether you are running thick or thin.
    Pages per second doesn't take into consideration the download of images, scripts, CSS, or any other objects; these are counted in hits per second.
    A similar thread was created earlier:
    http://qazone.empirix.com/thread.jspa?threadID=11&tstart=30
    The link was inserted here for cross-reference if required later.
    Regards,
    Zuriel

  • 300 requests per hour reading from Application scope Good/Bad idea

    I am saving some HTML in the application scope and reading it with <cflock readonly>.
    Will it be a problem if more than 5 requests are made per second?
    Do deadlocks occur with a read-only cflock?
    Thanks

    When creating or changing application-scope variables in a lock, I think there must be an exclusive lock. You must be careful not to set any shared variable, such as application, inside a read-only cflock tag.
    I would also suggest checking the usage of Application.cfc and the onApplicationStart method. You can set application default variables and constants in the onApplicationStart method of your Application.cfc.

  • Weird number of disk-transfers per second when in SYNC mode

    Hi, I've managed to configure BDB in synchronous mode (i.e., each put is persisted on disk when committing). However, now I'm doing 2,000 puts per second, each with a payload from 10 to 250 kilobytes, yet I'm seeing (from iostat) that each disk transfer is only about 6 kilobytes (23 megabytes written to disk per second, divided by 3,750 transfers per second). How is that even possible? Is there a way of telling BDB to minimize the number of disk transfers per second in SYNC mode? It seems that BDB is breaking each put's payload into smaller pieces and only then saving to disk in a bunch of disk transfers.

    Hi,
    JE does not split up a single write into multiple writes -- and certainly doesn't do an fsync for each one.
    JE may do multiple writes (but not fsyncs) for a single, multi-operation txn if the write buffer fills. And it will do multiple writes for a single operation if the record is larger than the write buffer. However, it doesn't sound like this (overflowing of the write buffer) is what you're experiencing. In any case, you can configure the size of the JE write buffer with EnvironmentConfig.LOG_TOTAL_BUFFER_BYTES, LOG_NUM_BUFFERS, and LOG_BUFFER_SIZE.
    Another thing is that JE will group fsyncs (this is called "group commit") when multiple threads are committing concurrently with SYNC durability.  In this case you'll see a smaller number of physical writes than the number of commits.
    I asked a colleague who has more experience with iostat than I do about this, and he gave me the following information:
    We would expect there to be one sync per put on average, assuming the application is doing serial writes and there are no group commits to further obfuscate the issue. Given the high sync write rate, the writes are presumably going to an SSD, or to spinning rust with a large non-volatile disk write cache.
    I'm not sure what he means by disk transactions in iostat. Perhaps he means the number of disk transfer requests issued to the device, listed as tps (transfers per sec) in iostat output.
    If he is using ext3 and does not have the file system mounted with noatime, he may be observing write requests to update the file system "atime" metadata with each request. So for 2K sync puts/sec he would see roughly 4K (2K put + 2K atime update) write requests/sec, and his average write payload would be ~12 KB/transfer (the atime write payload is negligible), which would be consistent with the application's put behavior. This is all a guess.
    I hope this helps.
    --mark
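    A minimal sketch of setting those JE write-buffer parameters together with SYNC durability (the sizes below are placeholders for illustration, not tuning advice):

    import com.sleepycat.je.Durability;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import java.io.File;

    public class JeWriteBufferConfig {
        public static Environment open(File envHome) {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setAllowCreate(true);
            config.setTransactional(true);
            // Commit durability: write and fsync on each commit (standalone JE).
            config.setDurability(Durability.COMMIT_SYNC);
            // Write-buffer settings mentioned above; values are illustrative only.
            config.setConfigParam(EnvironmentConfig.LOG_TOTAL_BUFFER_BYTES,
                    String.valueOf(12 * 1024 * 1024));                      // 12 MB total
            config.setConfigParam(EnvironmentConfig.LOG_NUM_BUFFERS, "4");  // 4 buffers
            config.setConfigParam(EnvironmentConfig.LOG_BUFFER_SIZE,
                    String.valueOf(3 * 1024 * 1024));                       // 3 MB each
            return new Environment(envHome, config);
        }
    }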

  • Max-instances-per-pk attribute

    Does anybody know what is "max-instances-per-pk" attribute in
    <entity-deployment> element in orion-ejb-jar.xml file?
    I'm using oc4j version:
    E:\oc4j\j2ee\home>java -jar oc4j.jar -version
    Oracle9iAS (9.0.2.0.0) Containers for J2EE
    When I deploy my CMP EJB, I get this attribute added to my orion-ejb-jar.xml file with a value of 20!
    And when I try to create (or findByPrimaryKey) many beans with the same primary key, after 20 my server hangs and I get timeout exceptions:
    for (int i=0; i<100; i++) mybean.findByPrimaryKey("1");
    when i==20 I get:
    com.evermind.server.ejb.TimeoutExpiredException: timeout expired waiting for an instance
    at com.evermind.server.ejb.DBEntityWrapperPool.getWrapperInstance(DBEntityWrapperPool.java:189)
    at com.evermind.server.ejb.DBEntityEJBHome.getWrapperInstance(DBEntityEJBHome.java:135)
    at CatalogHome_EntityHomeWrapper10.findByPrimaryKey(CatalogHome_EntityHomeWrapper10.java:302)
    at pl.empolis.delta.modules.catalogman.CatalogManagerBean.getCatalogEntryByPublicID(CatalogManagerBean.java:56)
    at CatalogManager_StatelessSessionBeanWrapper6.getCatalogEntryByPublicID(CatalogManager_StatelessSessionBeanWrapper6.java:734)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.evermind.server.rmi.RMICallHandler.run(RMICallHandler.java:80)
    at com.evermind.util.ThreadPoolThread.run(ThreadPoolThread.java:62)
    How do I set this value to infinity?
    Why can I not find any documentation about this?
    Please help,
    Artur

    Artur -- max-instances-per-pk controls the size of the pool of wrapper instances for OC4J. I would just set it to a very large integer value. It will be documented in the production release of the EJB guide, but for now here is a section from the book. Also, the cache will be able to be disabled, but that function was not in the pre-release. Lastly, we should be posting some of the documentation very soon (look for an announcement), so that will help to clarify some things.
    The wrapper instance is OC4J-generated wrapper code that provides for the
    services requested in the deployment descriptor. Before the bean instance is
    invoked, the client retrieves a handle to the wrapper instance. When the client
    invokes the bean, the wrapper is associated with a bean instance.
    The max-instances-per-pk attribute sets the maximum number of entity bean
    wrapper instances allowed in the pool for a given primary key. An entity bean's
    wrapper code can be pooled if it is not being used by a client.
    The default maximum value is 50. Set the maximum wrapper instances as
    follows:
    <entity-deployment ... max-instances-per-pk="20">
    </entity-deployment>
    Set the minimum wrapper instances as follows:
    <entity-deployment ... min-instances-per-pk="2">
    </entity-deployment>
    Thanks -- Jeff
