Very interesting! Which is faster?

Suppose you have a direct ByteBuffer (size 50 MB) and you read file content through a FileChannel.
Once the buffer is full, you just clear it and continue reading the remaining part of the file.
Which is faster: reading one 300 MB file, or reading ten 30 MB files?
I ran my test and found: one 300 MB file: 30 MB/second
ten 30 MB files: 40 MB/second
If the file is 600 MB, the reading rate drops further: 21 MB/second.
Why does this happen?
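
For reference, here is a minimal sketch of the kind of read loop described above (the file name and the timing code are my own assumptions, not taken from the original test):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class DirectBufferRead {
        public static void main(String[] args) throws IOException {
            // 50 MB direct buffer, as described in the question
            ByteBuffer buffer = ByteBuffer.allocateDirect(50 * 1024 * 1024);
            long total = 0;
            long start = System.nanoTime();
            try (FileChannel channel = FileChannel.open(Paths.get("big.dat"),
                                                        StandardOpenOption.READ)) {
                int n;
                while ((n = channel.read(buffer)) != -1) {
                    total += n;
                    if (!buffer.hasRemaining()) {
                        buffer.clear(); // buffer is full: reset it and keep reading
                    }
                }
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("Read %d bytes at %.1f MB/s%n",
                              total, total / (1024 * 1024 * seconds));
        }
    }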

The limit on the size of file that you can map into memory is not related to the amount of physical memory
you actually have. However, there is a fixed limit of about 1.2 to 1.8 GB on most 32-bit platforms. If you are
lucky enough to be using a 64-bit JVM then this is not a problem. Even so, I have to recommend that
you don't use this approach (there are issues with closing and then creating new mappings that will
hinder you).
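
For context, the memory-mapping approach being discussed looks roughly like this (a sketch only, not a recommendation given the caveats above; the file name is a placeholder):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MappedRead {
        public static void main(String[] args) throws IOException {
            try (FileChannel channel = FileChannel.open(Paths.get("big.dat"),
                                                        StandardOpenOption.READ)) {
                // Map the whole file into the process address space.
                // A single mapping cannot exceed 2 GB, and on 32-bit JVMs the
                // total address space for mappings is roughly 1.2 to 1.8 GB.
                MappedByteBuffer mapped =
                        channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
                // ... read from 'mapped' as from any other ByteBuffer ...
            }
        }
    }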
Opening a file has quite a high cost associated with it, but the cost is fixed, and when you are reading
5 GB it is ignorable: it is probably too small, compared to the time taken to read the file, to be worth
taking into account.
The low performance for larger files could be because the OS is allocating more memory to the disk
cache, and therefore forcing your JVM's memory to be paged out to the swap file (on the hard disk); this is a
very slow process.
File fragmentation could also account for the slower access times for larger files. What filesystem are you using?
Some are better than others for large files. What is the block size on the disk? Larger block sizes waste
more space for small files but allow more efficient access to large files.
Above all else, how much memory are you dedicating to your VM (the -Xmx option)? Please ensure that the
value here is less than the amount of physical memory your computer has (by a few hundred MB), otherwise
performance will be appalling.
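
For example (the heap size and class name here are just placeholders):

    java -Xmx512m DirectBufferRead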
I would not bother too much with trying to improve the performance at this stage, as it is likely that the final
performance will be limited by the processing that you perform on the file you are reading (you won't
actually know this until you profile the working code); then you will have a basis from which to consider
optimising the program. Also, there is little point fine-tuning your application for your computer when it
is unlikely that that will be its final destination. Different computers have different hardware and are
capable of very different performance for similar operations.
As you know that there is likely to be an optimisation issue in this part of the code, perhaps you should
abstract it (write an interface); then you can swap implementations easily at a later stage (and compare
them), perhaps using a different implementation for different hardware configurations.
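
For illustration, a minimal sketch of that kind of abstraction (the interface and method names are made up, not from the original post):

    import java.io.IOException;
    import java.nio.file.Path;

    // Hypothetical abstraction: callers depend only on this interface, so the
    // underlying I/O strategy can be swapped out and benchmarked later.
    public interface FileProcessor {
        // Process the given file and return the number of bytes handled.
        long process(Path file) throws IOException;
    }

    // One implementation might use a direct ByteBuffer with a FileChannel
    // (as in the question); others could use memory mapping or plain streams.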
matfud

Similar Messages

  • I am just starting out in graphic design and I wanted to know how to get more involved with either adobe and or graphic design? I am really very interested in working with adobe and graphic design more and becoming more involved with both!!

    I have now recently downloaded 10.0.2, which is confusing in itself since, as far as I can ascertain, it is actually version 11, but I'm not even sure about that.
    Version 10.0.2 is the newest version and the successor to GarageBand '11 (version 6.0.5).
    The '11 refers to the iLife '11 suite of multimedia applications; the older GarageBand was part of this bundle.
    Have a look at Edgar's graphically enhanced manuals; they explain in great detail how things work and why. You can buy them as iBooks from the iBook store or directly from the page:
    http://DingDingMusic.com/Manuals/

  • Very slow MV fast refresh after "Merge" on dimension

    Hi,
    We have a sales cube with 300 million records and a product dimension with 20,000 records.
    We have a materialized view something like:
    SELECT   cube.day,
             p.product_name,
             COUNT(*) AS cnt,
             COUNT(cube.amount) AS cnt_amt,
             SUM(cube.amount) AS amount
    FROM     sales_cube cube
    JOIN     product_dimension p ON cube.product_dimension = p.product_dimension
    GROUP BY cube.day, p.product_name
    There are materialized view logs defined on both the cube and product_dimension.
    After the import process for sales_cube,
    at night we run a product dimension update script implemented with MERGE.
    If something changes in the product dimension, the next fast refresh of the materialized view takes 4 hours. If nothing changes in product_dimension, the fast refresh takes 8 seconds.
    My question is: why does it take so long, and how can we prevent it?

    I'm not authorized to publish EXECUTION PLAN or schema details.
    But I found the problem.
    The ETL process is generated by Warehouse Builder.
    It is modeled as a mapping from the product table to the product dimension in the warehouse, as an "UPDATE/INSERT" by PK (product number).
    Warehouse Builder generates a MERGE statement for such "UPDATE/INSERT" mappings.
    But this mapping updates every record in the dimension table; as a result, the materialized view log for this dimension is flooded with entries, and the fast refresh (the join between the fact table and the dimension) is very slow (slower than a complete refresh).
    Now I simply have to find out how to change this OWB mapping so it does not update records in the dimension when the corresponding records have not been modified in the source table.
    There is a "match by constraint" option in the mapping that looks useful; I'll try it tomorrow.

  • Large SGA issue-- insert data is very slow--Who can help me?

    I set max_sga_size to 10G and db_cache_size to 8G, but the value of db_cache_size shows as a negative number in OEM. I also found that inserting data was very slow; I checked the OS and found no CPU consumption and no I/O consumption.
    The OS is HP-UX B11.23 ia64.
    Oracle server 9.2.0.7
    Physical memory : 64G
    CPU: 8
    (oracle server and os are all 64-bit).
    If I decrease the SGA to 3G and db_cache_size to 2G, the same data insert is very fast and everything is well.
    So I guess there are some OS parameters that need to be set when using large memory.
    Does anyone know this issue, or have experience using a large SGA on HP-UX?
    Message was edited by:
    user548543

    Sounds like you might have a configuration issue on the o/s side.
    Check that kernel parameters are set as recommended in the installation guide
    The first thing that came to mind after reading the problem description was that you might have too low a SHMMAX for that 10GB SGA, which would cause multiple shm segments to be created and thus explain the performance degradation you're experiencing.
    A quick way to check if that's the case would be doing "ipcs -m" to see if there are multiple shm segments when SGA is set to 10GB.

  • Very Interesting problem, need urgent resolution

    Hi Guys,
    I have a very weird and interesting problem which I have to fix urgently. I appreciate any help you guys can provide.
    I have one query which runs in all our database environments except Prod. Our UAT is refreshed from Prod fortnightly, so I am sure it is not a data problem. I even tried a very small dataset, making sure to select the same data in UAT and Prod.
    Error:
    ORA-00932: inconsistent datatypes: expected NUMBER got -
    Query:
    select level, -- works if we remove this
    xmlelement("L1", XMLATTRIBUTES(resource_name as "L1" ,resource_id as "p_resource_id",resource_manager_id as "p_rm_id",FTE, project_hrs ,
                 misc_hrs , total_hrs, avg_tot_hrs, Perc_utilization))
          from (  SELECT   resource_id,
               resource_name,
               resource_manager_id,
               trim(to_char(round(SUM (FTE),1), '999,999,999,999.9')) FTE,
               trim(to_char(round(SUM (project_hrs),1), '999,999,999,999.9')) project_hrs,
               trim(to_char(round(SUM (misc_hrs),1), '999,999,999,999.9')) misc_hrs,
               trim(to_char(round(SUM (total_hrs),1), '999,999,999,999.9')) total_hrs,
               trim(to_char(round(SUM (total_hrs)/decode(SUM (FTE),0,1,SUM (FTE)),1), '999,999,999,999.9')) avg_tot_hrs,
               trim(to_char(ROUND (SUM (project_hrs) * 100 / decode(SUM (expected_project_hrs),0,1,SUM (expected_project_hrs)), 1), '999,999,999,999.9'))
                  perc_utilization
        FROM   (    SELECT   CONNECT_BY_ROOT resource_name AS resource_name,
                             CONNECT_BY_ROOT resource_id AS resource_id,
                             CONNECT_BY_ROOT resource_manager_id AS resource_manager_id,
                             employee_type_code,
                             FTE,
                             project_hrs,
                             misc_hrs,
                             total_hrs,
                             avg_tot_hrs,
                             expected_project_hrs
                      FROM   (    SELECT   r.username resource_name,
                                           resource_id,
                                           resource_manager_id,
                                           employee_type_code,
                                           fte,
                                           project_hrs,
                                           misc_hrs,
                                           total_hrs,
                                           avg_tot_hrs,
                                           expected_project_hrs
                                    FROM   TIME_UTILILIZ_ORG_SUM_L3M_MV r
                              START WITH   resource_id = 129523
                             CONNECT BY   PRIOR r.resource_id = r.resource_manager_id)               
                CONNECT BY   PRIOR resource_id = resource_manager_id)
    GROUP BY   resource_id, resource_name, resource_manager_id)
              start with resource_id = 129523 connect by prior resource_id = resource_manager_id; -- works if we remove this
    If we remove the outermost CONNECT BY, it runs; so it is not an XMLElement problem either. Any ideas?
    Edited by: 783830 on Jul 22, 2010 6:58 AM

    I'm not sure if this will help you, but:
    with my_tab as (select 1 resource_id, 0 resource_manager_id, 1 project_hrs from dual union all
                    select 2 resource_id, 1 resource_manager_id, 1 project_hrs from dual union all
                    select 3 resource_id, 1 resource_manager_id, 1 project_hrs from dual union all
                    select 4 resource_id, 2 resource_manager_id, 1 project_hrs from dual union all
                    select 5 resource_id, 2 resource_manager_id, 1 project_hrs from dual union all
                    select 6 resource_id, 0 resource_manager_id, 2 project_hrs from dual union all
                    select 7 resource_id, 6 resource_manager_id, 2 project_hrs from dual union all
                    select 8 resource_id, 7 resource_manager_id, 2 project_hrs from dual),
    --- end of mimicking some data
        results as (select resource_id,
                           project_hrs,
                           prior resource_id prev_resource_id,
                           level lvl,
                           sum(project_hrs) over (partition by connect_by_root (resource_id)) tot_project_hrs
                    from   my_tab
                    connect by prior resource_id = resource_manager_id),
       results2 as (select resource_id,
                           connect_by_root resource_id top_resource_id,
                           project_hrs,
                           prior resource_id prev_resource_id,
                           level lvl
                    from   my_tab
                    connect by prior resource_id = resource_manager_id
                    start with resource_manager_id = 0)
    select r1.resource_id,
           r1.project_hrs,
           r1.tot_project_hrs,
           r2.top_resource_id,
           r2.prev_resource_id,
           r2.lvl
    from   results r1,
           results2 r2
    where  r1.resource_id = r2.resource_id
    and    r1.lvl = 1
    order by resource_id;
    RESOURCE_ID PROJECT_HRS TOT_PROJECT_HRS TOP_RESOURCE_ID PREV_RESOURCE_ID        LVL
              1           1               5               1                           1
              2           1               3               1                1          2
              3           1               1               1                1          2
              4           1               1               1                2          3
              5           1               1               1                2          3
              6           2               6               6                           1
              7           2               4               6                6          2
              8           2               2               6                7          3

  • I'm a PC user very interested in the Mac

    I was thinking about buying a lower-end version and trying to upgrade it. Is this worth it?
    Also, is the processor speed very noticeable? For example, is there a big difference between roughly 1.33 GHz and 2.0 GHz?
    Thanks
    Toshiba Laptop   Windows XP  

    John, I have an iBook (1 GHz) and a 2 GHz MacBook, so I'm probably qualified to answer your question.
    You will notice a real speed improvement between the two machines, the MacBook being much faster. Whether there is any benefit in buying the MacBook depends on what you use it for. For word processing there will be little practical difference. I can't type any faster on the MacBook than I can on the iBook. Only you can work out if the extra speed is worth the extra dollars.
    If the iBook has 256 MB of RAM I would advise upgrading it (easily done) to at least 768 MB. The only other relatively easy upgrade is installation of an AirPort card (if it doesn't already have one). Obviously you would only do this if you had a need to connect wirelessly. If the iBook is second-hand, check how much charge the battery holds, or be prepared to buy a replacement battery.

  • Very Interesting problem

    Dear All,
    I scheduled propagation between two servers through a database link.
    The link is okay, since I am able to query data from the other server.
    The incoming queue is okay; I can propagate from a third machine. But instead of propagating from the first server to the second,
    messages are going to the exception queue.
    There are no errors and there is nothing in the Oracle trace files, but messages go to the exception queue.
    Does anybody have any idea why?
    Thank you very much for any assistance
    Regards
    Artem

    Forms stores a lot of compiled code in the fmx that is never used, and slows down the Forms Builder when it opens the fmb file. If you compile your fmb and save, the fmb balloons in size due to the compiled code.
    If you do a replace all, changing all ; to ; (which forces all program units to become uncompiled), and save WITHOUT compiling, your fmb will shrink.
    PS. Next time you post a question, please make your subject title a little more descriptive. My favorite is "Please help". :-(

  • Very interesting and quizzical issue in JNI coding

    Hi All,
    I've been facing a really frustrating, interesting and mind-wrenching issue in my native code. Here is some background:
    1. I was given a third-party ActiveX OCX file that needed to be called from Java.
    2. I wrote a C++ JNI wrapper to call the methods of the ActiveX object.
    3. I first initialize the object, set the relevant parameters (sent from Java) and then call the method I need.
    4. For every record, I need to perform step 3. Now, I am running batch loads. The code works fine for 9500 records, but once the record count reaches 9885, I get a pop-up error from the ActiveX OCX control. I've not coded that pop-up; it comes out of the blue.
    I initially thought that it could be a memory issue, but even after doubling my VM memory allocation, I get the error. It's not a data issue, because if I run the code just for the single record that causes the issue, it works fine.
    Below is the implementation of the C++ code. Let me know if I'm doing something wrong or if someone has seen something like this happen before. What is baffling is that it works smoothly for fewer records. When the pop-up appears, no exception is caught on the native side either!
    Does Java set aside some memory specific to native libs that is exhausted after 9885 records?
    I'm running on Windows, so is it possible to trace the internal malloc and release within the DLL?
    Any thoughts and ideas will help. I've been struggling for about three days now. Also, do you know of a good C++ forum where I could post this question?
    #include "stdafx.h"
    #include "jni.h"
    #import "thirdparty.ocx" raw_native_types
    JNIEXPORT jstring JNICALL Java_org_nik_integration_test_Exporter_saveAsImage
      (JNIEnv * env, jobject obj, jint height, jint width, jstring fileName, jstring encryptionKey, jshort encryptionAlgorithm, jstring encryptedImage, jshort imageType)
    {
         const char* fName = NULL;
         const char* encryptionKeyc = NULL;
         const char* encImgc = NULL;
         ThirdPartyLib::_DTPPtr tpptr = 0;
         HRESULT hres = 0;
         try
         {
              hres = ::CoCreateInstance(__uuidof(ThirdPartyLib::TP), NULL, CLSCTX_ALL, __uuidof(ThirdPartyLib::_DTP), (void**)&tpptr);
              // Retrieve values sent from Java
              fName = env->GetStringUTFChars(fileName, 0);
              encryptionKeyc = env->GetStringUTFChars(encryptionKey, 0);
              encImgc = env->GetStringUTFChars(encryptedImage, 0);
              tpptr->Key = _com_util::ConvertStringToBSTR(encryptionKeyc);
              tpptr->Algorithm = encryptionAlgorithm;
              tpptr->Img = _com_util::ConvertStringToBSTR(encImgc);
              tpptr->SaveUnencrypted(width, height, _com_util::ConvertStringToBSTR(fName), 1);
         }
         catch (_com_error &e)
         {
              _bstr_t bstrSource(e.Source());
              _bstr_t bstrDescription(e.Description());
              printf("Exception thrown for classes generated by #import\n");
              printf("\tCode = %08lx\n", e.Error());
              printf("\tCode meaning = %s\n", e.ErrorMessage());
              printf("\tSource = %s\n", (LPCTSTR) bstrSource);
              printf("\tDescription = %s\n", (LPCTSTR) bstrDescription);
              // The Errors collection may not always be populated.
              if (FAILED(hres))
                   printf("*** HRESULT ***\n");
         }
         // Cleanup. Note: a C++ try/catch cannot be combined with __finally in
         // MSVC, so the release code runs after the try/catch instead.
         if (tpptr != NULL)
         {
              tpptr->Release();
              tpptr = NULL;
         }
         // Release memory - Java specific. (The original released "ink"/"inkc",
         // which were never defined; encryptedImage/encImgc are meant here.)
         if (encryptionKeyc != NULL) env->ReleaseStringUTFChars(encryptionKey, encryptionKeyc);
         if (encImgc != NULL) env->ReleaseStringUTFChars(encryptedImage, encImgc);
         if (fName != NULL) env->ReleaseStringUTFChars(fileName, fName);
         return NULL;
    }

    Well you have now demonstrated conclusively that it has nothing to do with JNI.
    It is either a bug with that component and/or you are using it incorrectly. Solutions to either of those would come from the source of that component, which would be somewhere besides here.
    If it was a bug then you might find a way around it by doing one of the following:
    1. Try variations of use (options, parameters, whatever).
    2. Determine if you can solve your problem by processing only 5000 entries at a time (where 5000 is chosen just to be significantly lower than the limit you have already found). If that works, then you can solve the problem by using a restartable executable that wraps the component.
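
    For what it's worth, here is a rough sketch of option 2 in Java (all names here are hypothetical, and loadRecords() is a placeholder for however the records are actually loaded):

        import java.util.ArrayList;
        import java.util.List;

        public class BatchRunner {
            static final int BATCH_SIZE = 5000; // deliberately well below the observed ~9885 limit

            public static void main(String[] args) throws Exception {
                List<String> records = loadRecords();
                for (int i = 0; i < records.size(); i += BATCH_SIZE) {
                    int end = Math.min(i + BATCH_SIZE, records.size());
                    // Run each batch in a fresh JVM so whatever the OCX leaks
                    // is reclaimed when the worker process exits.
                    Process p = new ProcessBuilder("java", "ExportWorker",
                            String.valueOf(i), String.valueOf(end)).inheritIO().start();
                    if (p.waitFor() != 0) {
                        throw new RuntimeException("Batch starting at record " + i + " failed");
                    }
                }
            }

            static List<String> loadRecords() { return new ArrayList<>(); } // placeholder
        }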

  • WM Structure... Very Interesting

    Great Gurus
    We have implemented SAP (FICO, MM, SD, PP, PM ).
    We have one main store (a big hall with 3 big racks <rows, lines>, and bins). (1 rack = 7 rows, 13 columns)
    Rack 1 > Auto, Rack 2 > Production, Rack 3 > Chemicals
    Only in Rack 2 (Production) have we maintained 5 SAP storage locations
    (101 MN Store Empty Bottles, 201 MN Store Beverage, 301 MN Store CO2, 401 MN Store Water Treatment, 501 MN Store Power House), each belonging to a different SAP plant master (1000, 2000, 3000, 4000, 5000).
    The same is the case for Rack 3 > Chemicals (same locations, same plants).
    What should the Warehouse Management structure be? (Warehouse number, storage type, storage section, storage bin.)
    Consider that these are all fixed-bin storage: no pallet storage, no bulk storage, no high racks,
    and no fast-moving / slow-moving distinction.
    What is your opinion, Gurus?

    Hi Adnan,
    Just think how it would be if we had the following WM structure for your scenario...
    It's just an opinion:
    5 warehouse numbers (one for each combination of plant & storage location).
    3 storage types (Rack 1, Rack 2 & Rack 3).
    5 storage sections under each storage type (Empty Bottles, Beverages, CO2, Water Treatment & Power House).
    1 storage bin.
    Regards

  • Very Interesting Bug?

    What I've noticed (while using XFCE, but I don't think that makes a difference) is:
    * When I turn on my computer and start Arch Linux normally and open up documents in openoffice and a transparent terminal, this is what I notice:
    ** Scrolling in openoffice is a bit laggy when I scroll fast. Not a problem, but just odd.
    ** Terminal lags when I move it and it refreshes its transparent background.
    However, when I switch the gtk theme, using switch2 and the xfce settings, the above issues disappear. I just switch to some random theme and switch back to my original one. When I scroll in openoffice quickly, there is no lag whatsoever. When I move the transparent terminal, refreshing of the background is instantaneous.
    Any idea why this could be happening?

    Diaz wrote: what is your graphics board and what drivers do you use for it?
    Graphics: Integrated:
    01:05.0 VGA compatible controller: ATI Technologies Inc RS690 [Radeon X1200 Series]
    Drivers: catalyst (proprietary drivers).

  • Very interesting and informative post-LR/ACR Camera Profiles

    This post by Eric Chan from Adobe is very informative and reveals the reality of processing raw files, not only from Adobe's perspective but for all software that processes raw files from digital cameras. The thread concerns Adobe's processing of raw files from a Panasonic camera model in comparison to the camera's JPEG rendition.
    "Sorry for joining this thread late.
    Unfortunately this is a limitation of our current color profile process. This limitation actually applies to all of our camera models that we support, not just Panasonic. What is happening is that the color transform we've built is optimized mainly for daylight and incandescent light conditions, but when applied to scenes with bright light sources (especially neon lights, and especially blue/purple lights), the transform will tend to oversaturate and clip those colors.
    My team is investigating how to build better profiles going forward, but in the meantime, my main suggestion is to try reducing the Red/Green/Blue Saturation sliders in the Camera Calibration panel (not the HSL tab, and not in the Basic panel). This will help to reduce the oversaturation and clipping, and will give you a better starting point for further edits (Exposure, Contrast, etc.). As a shortcut, you can store your Red/Green/Blue Saturation slider adjustments as a preset that you can then apply quickly to other images you have that show the same issue."
    Link to the actual thread.
    http://forums.adobe.com/thread/1254354?start=40&tstart=0

    My Nikon D80 and D90 don't look the same, and I have run comparisons between the Canon 7D and the Nikon D90. Taken together, they are all different from each other.
    The biggest difference between the D80 and the D90 seems to lie with the much larger dynamic range of the D90. Compared to the D80 at first glance, the D90 seems washed out at the lower values. This is easily overcome in ACR, but even with that, the subjectivity of the reproduction sometimes gives a nod to one over the other.
    The closest film comparison is Fuji Astia vs Provia. The D90 at the default Nikon Camera Standard resembles Astia, while the D80 is a cross between Provia and Velvia. All this is controllable. One slider I use to enrich the D90 presentation is the black slider.
    The Canon has other undefined differences which I have simply noted by viewing. I haven't engaged in any tweaking of that camera's images.
    So I'll use both the D80 and the D90 according to what I am wanting to happen. Of course, there are times where the differences simply inform the operator of what may be doable, and then one is tweaked to look much like the other.
    I checked out sprengel's links to the calibrator software. They have stopped at CS3, it seems. How does it perform with CS5? I may want to at least run a calibration of both cameras and look again.
    And, of course, Adobe Standard and Nikon Standard do not agree. At all. So, when is a standard not a standard?
    When there is more than one.
    Looking back at your post, I should specify that the profiles I used when making the comparisons have been the camera standards, not Adobe Standard.
    Message was edited by: Hudechrome

  • Oracle Time Out - This is very interesting

    We have run across a very curious situation. We have a series of statements (1 update, 1 insert, 1 call to a package) wrapped in a begin/end block. About 1 percent of these grouped statements time out every day. Further research showed that only statements that were exactly 2005 bytes long would time out. Even stranger, if we retried the statement (in code, immediately after the timeout occurred) it would execute right away. The same statement pasted into SQL*Plus (2005 bytes) completed instantly. Any thoughts on this? I have seen a number of postings discussing timeouts and mysterious random events like this, but I have yet to see one explained.

    First some questions:
    * Is the database on the same machine as the JDBC client? If not, does the same problem occur on all clients?
    * Does the problem occur only for your specific statements, or for any set of statements that are 2005 bytes? Can you reproduce it with statements that don't make any modifications to the database?
    * What JDBC drivers are you using? Have you tried the latest versions (ojdbc14.jar for Oracle)? Are you using THIN or OCI? Can you try the other mechanism?
    I presume you're talking about Oracle here. One of your best bets would be to use Oracle's trace facilities. If you're using OCI driver then you can do client side tracing (I think). You can also do server side tracing. You might identify what kind of conversation is occurring during the timeouts.
    While I'm pretty sure this isn't going to be your problem, figured I'd share my story with you.
    A number of years ago we had a client-server database app that suffered a similar problem: a certain statement always failed with an Oracle network error message (although I'm not sure it was a timeout). We also discovered it was any statement of an exact byte size (the amount escapes me).
    It turned out we had a network switch that dropped packets of an exact byte size. There was a bug in its OS. The manufacturer's name also escapes me. Anyway, we uncovered the problem by using the "ping" command to ping with exactly that packet size. The ping always failed. Note that we had to adjust the byte size passed to the ping command to account for the additional bytes added to the packet by ping.
    Tim

  • Since my last software update 15 Feb 13, the battery on my iPhone 4S drains unusually fast.

    As above, I have had trouble since the last update of the phone software. The battery drains incredibly fast. The phone has also started turning itself off even though it has over 30% charge in it.
    is there any way to stop this or is it a problem with the update?
    any suggestions welcome.
    thanks

    A simple search of the forums would have revealed that after EVERY iOS update there are reports of battery issues.
    That same search would have revealed that basic troubleshooting as described in the User's Guide resolves the vast majority of battery issues.

  • How to reduce physical reads - very interesting issue

    Hi,
    Every week I generate a Statspack report which gives me the top 20 buffer gets queries and the top 20 physical reads queries. When I start reducing physical reads by pinning the heavily accessed table in the buffer cache, the same query shows up in the top 20 buffer gets list the next week. If I want to reduce buffer gets, I have to remove the heavily accessed table from the buffer cache, which causes higher physical reads. How can I solve this problem? I am trying to tune the query as much as possible.
    Your help is appreciated..
    Thanks and Regards
    Anand

    I think that you are labouring under a misapprehension. Buffer gets and physical reads are basically the same thing. In order to satisfy your query, Oracle needs to read X number of blocks to find all of the data. Admittedly, it is generally faster to read those blocks from memory (buffer gets) than from disk (physical reads), but the blocks have to be read either way (this is called logical I/O). The only way to reduce I/O is to re-write the query to be more efficient. This may or may not be possible.
    As Kamal said, if the query is fast enough, then you are done. If it is too slow, then you need to look at ways of reducing I/O. It is unlikely that any database parameters will have a significant impact on the I/O. You need to look at the statistics on the table and the actual SQL statement.
    John

  • Airport (very) slow, ethernet fast, only my Mac affected

    Dear community,
    I have been experiencing a weird problem for some days now, and it is giving me a headache. The situation is as follows:
    - Airport is connected with my router properly
    - I experience very slow connection speeds
    - Other users within the same network do not have any problem
    - When connected to the same router with ethernet: normal speed
    What makes the issue totally absurd is that I don't have any slowdowns in other networks using AirPort. Any suggestions?
    My config:
    [IMG]http://img804.imageshack.us/img804/8265/screencapture5.png[/IMG]
    [IMG]http://img713.imageshack.us/img713/6113/screencaptureh.png[/IMG]
    [IMG]http://img23.imageshack.us/img23/3849/screencapture1cd.png[/IMG]
    [IMG]http://img29.imageshack.us/img29/3098/screencapture2wx.png[/IMG]
    [IMG]http://img194.imageshack.us/img194/1570/screencapture3vq.png[/IMG]
    [IMG]http://img689.imageshack.us/img689/3961/screencapture4w.png[/IMG]
    [IMG]http://img809.imageshack.us/img809/1599/screencapture6.png[/IMG]
    [IMG]http://img153.imageshack.us/img153/778/screencapture7i.png[/IMG]
    Speed test with Ethernet:
    [IMG]http://img204.imageshack.us/img204/5348/screencapturel.png[/IMG]
    Speed test with airport:
    [IMG]http://img94.imageshack.us/img94/5631/screencapture2d.png[/IMG]
    What the heck is going on?

    I have the impression that the speed has increased slightly, but that does not make much sense to me. So I checked speedtest.org... still very slow.
    Any other suggestions?
    http://img695.imageshack.us/img695/9056/screencapture143204.png

Maybe you are looking for

  • Mail to all the employees in the internal table.

    Hi Experts, I have an internal table where the employee IDs are saved. I need to send a mail to all the employees in the internal table. How will I be able to send a mail from the function module SAP_WAPI_START_WORKFLOW? Where will I pass the interna

  • Set calls to automatically go back to a ready state when not answered

    What I would like to do is: when a call is not answered and it is dropped into a not-ready state so it can be presented to the next agent, have the agent that was put in a Not Ready state come right back to a ready state. That way the person does n

  • Adobe forms-caption missing

    Hi, while creating the layout of the form I put in some import fields. All the fields are character fields of various lengths. I also have a caption in the form. Here is my problem: while testing the form, only the values passed are shown and the caption

  • Photoshop to Revel

    If I moved my photos from Photoshop, how do I find them in Revel?

  • HT4528 deleting unwanted apps

    I have an iPhone 4. How do I delete messages I no longer want?