How to improve the speed of creating multiple strings from a char[]?

Hi,
I have a char[] and I want to create multiple Strings from its contents at various offsets and of various lengths, without having to reallocate memory (my char[] is several tens of megabytes large). The following constructor (from java/lang/String.java) would be perfect, except that it is package-private, so only classes inside java.lang can benefit from the speed improvement it offers:
    // Package private constructor which shares value array for speed.
    String(int offset, int count, char value[]) {
     this.value = value;
     this.offset = offset;
     this.count = count;
    }
My first thought was to extend the String class. But java.lang.String is final, so no good there. Plus it was a really bad idea to start with.
My second thought was to make a java.lang.FastString which would be package-private, could access the String constructor, create a new String and then return it (thought I was real clever here). But no: apparently you cannot add your own class to the java.lang package; the class loader rejects it as a security violation.
I am just wondering if there is an easy way of forcing the compiler to obey me, or of accessing a package-private constructor from outside the package. Either that, or some sort of security override at runtime, somehow.
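One hedged sketch of such a "security override" (not from the original thread) is reflection: grab the package-private String(int offset, int count, char[] value) constructor quoted above and call setAccessible(true). This only works on a JDK that still declares that exact constructor, and a SecurityManager can refuse the setAccessible call, so treat it as a fragile, JDK-specific hack:

    import java.lang.reflect.Constructor;

    // Fragile, JDK-specific sketch: invoke the package-private
    // String(int offset, int count, char[] value) constructor via reflection.
    // Works only on JDKs that still declare that constructor, and a
    // SecurityManager may veto the setAccessible(true) call.
    public class SharedStringFactory {
        private final Constructor<String> ctor;

        public SharedStringFactory() throws NoSuchMethodException {
            ctor = String.class.getDeclaredConstructor(int.class, int.class, char[].class);
            ctor.setAccessible(true);   // bypass the package-private access check
        }

        public String share(char[] value, int offset, int count) throws Exception {
            return ctor.newInstance(offset, count, value);  // shares 'value', no copy
        }

        public static void main(String[] args) throws Exception {
            char[] big = "hello world".toCharArray();
            System.out.println(new SharedStringFactory().share(big, 6, 5)); // prints "world"
        }
    }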

My laptop can create and garbage collect 10,000,000 Strings per second from char[] "hello world". That creates about 200 MB of strings per second (char = 2B). Test program below.
A char[] "tens of megabytes large" shouldn't take too large a fraction of a second to convert to a bunch of Strings. Except, say, if the computer is memory-starved and swapping to disk. What kind of times do you get? Is it at all possible that there is something else slowing things down?
using (literally) millions of charAt()'s would be suicide (code-wise) for me and my program.
"java -server" gives me 600,000,000 charAt()'s per second (actually more, but I put in some addition to prevent HotSpot from optimizing everything away). Test program below. A million calls would be 1.7 milliseconds. Using char[n] instead of charAt(n) is faster by a factor of less than two. Are you sure millions of charAt()'s are a huge problem?
public class t1 {
    public static void main(String args[]) {
        char hello[] = "hello world".toCharArray();
        for (int n = 0; n < 10; n++) {
            long start = System.currentTimeMillis();
            for (int m = 0; m < 1000 * 1000; m++) {
                String s1 = new String(hello);
                String s2 = new String(hello);
                String s3 = new String(hello);
                String s4 = new String(hello);
                String s5 = new String(hello);
            }
            long end = System.currentTimeMillis();
            System.out.println("time " + (end - start) + " ms");
        }
    }
}
public class t2 {
    static int global;
    public static void main(String args[]) {
        String hello = "hello world";
        for (int n = 0; n < 10; n++) {
            long start = System.currentTimeMillis();
            for (int m = 0; m < 10 * 1000 * 1000; m++) {
                global +=
                    hello.charAt(0) + hello.charAt(1) + hello.charAt(2) +
                    hello.charAt(3) + hello.charAt(4) + hello.charAt(5) +
                    hello.charAt(6) + hello.charAt(7) + hello.charAt(8) +
                    hello.charAt(9);
                global +=
                    hello.charAt(0) + hello.charAt(1) + hello.charAt(2) +
                    hello.charAt(3) + hello.charAt(4) + hello.charAt(5) +
                    hello.charAt(6) + hello.charAt(7) + hello.charAt(8) +
                    hello.charAt(9);
            }
            long end = System.currentTimeMillis();
            System.out.println("time " + (end - start) + " ms");
        }
    }
}
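Another hedged sketch (again not from the thread): on the JDKs that were current when this was written (before Java 7u6), String.substring(int, int) shared the parent String's backing char[] through the same offset/count fields, so paying for one full copy of the big char[] and then taking substrings yields lightweight views without copying the characters again. If a CharSequence is enough, java.nio.CharBuffer.wrap(value, offset, count) avoids even that first copy. The offsets below are purely illustrative:

    // Sketch: one up-front copy of the big char[], then cheap substring views.
    // On pre-Java-7u6 JDKs substring() shared the parent's char[]; on later JDKs
    // it copies, so measure before relying on this behaviour.
    public class SubstringViews {
        public static void main(String[] args) {
            char[] data = "hello world, hello forum".toCharArray(); // stand-in for the tens-of-MB array
            String whole = new String(data);        // the single full copy

            String s1 = whole.substring(0, 5);      // "hello"
            String s2 = whole.substring(6, 11);     // "world"
            String s3 = whole.substring(13, 18);    // "hello"

            System.out.println(s1 + " / " + s2 + " / " + s3);
        }
    }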

Similar Messages

  • How to improve the speed for backing up files to time capsule via in hose wifi?

    How to improve the speed for backing up files to time capsule via in hose wifi?

    via in hose wifi?
    Use a bigger hose??
    House??
    Need to spell this out a bit more..
    But speed via 2.4GHz to the TC is restricted by Apple to a max 130Mbps link speed, and the actual transfer is much slower. The latest TC and laptops can do 450Mbps on 5GHz, but you need to be up close, so place the TC near the device using it, and force a 5GHz connection by giving the 5GHz network a different name.

  • How to improve the query performance or tune query from Explain Plan

    Hi
    The following is the explain plan for my SQL query (generated by Toad v9.7). How can I tune the query?
    SELECT STATEMENT ALL_ROWS Cost: 4,160 Bytes: 25,296 Cardinality: 204
         8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1                                    
              5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1                               
                   2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1                          
                        1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1                          
                        3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1                     
              7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1                               
                   6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1                          
         10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1                                    
              12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                               
                   11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                          
              14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1                               
                   13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1                          
         21 FILTER                                    
              16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49                               
              20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1                               
                   18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                          
                        17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                     
                   19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1                          
         23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1                                    
              22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1                               
         45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204                                    
              42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204                               
                   38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204                          
                        34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925                     
                             30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699                
                                  26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18          
                                       25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18     
                                            24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
                                  29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32           
                                       28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32      
                                            27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
                             33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35                
                                  32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35           
                                       31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35      
                        37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38                     
                             36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2               
                                  35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2          
                   41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41                          
                        40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2                    
                             39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2               
              44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1                               
                   43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1

    damorgan wrote:
    Tuning is NOT about reducing the cost of i/o.
    i/o is only one of many contributors to cost and only one of many contributors to waits.
    Any time you would like to explore this further run this code:
    SELECT 1 FROM dual
    WHERE regexp_like(' ','^*[ ]*a');
    but not on a production box because you are going to experience an extreme tuning event with zero i/o.
    And when I say "extreme" I mean "EXTREME!"
    You've been warned.
    I think you just need a faster server.
    SQL> set autotrace traceonly statistics
    SQL> set timing on
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    no rows selected
    Elapsed: 00:00:00.00
    Statistics
              1  recursive calls
              0  db block gets
              0  consistent gets
              0  physical reads
              0  redo size
            243  bytes sent via SQL*Net to client
            349  bytes received via SQL*Net from client
              1  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              0  rows processed
    Repeated from an Oracle 10.2.0.x instance:
    SQL> SELECT DISTINCT SID FROM V$MYSTAT;
           SID
           310
    SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    Session altered.
    SQL> select 1 from dual
      2  where
      3  regexp_like   (' ','^*[ ]*a');
    The session is hung. Wait a little while and connect to the database using a different session:
    COLUMN STAT_NAME FORMAT A35 TRU
    SET PAGESIZE 200
    SELECT
      STAT_NAME,
      VALUE
    FROM
      V$SESS_TIME_MODEL
    WHERE
      SID=310;
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
    STAT_NAME                                VALUE
    DB time                                   9247
    DB CPU                                    9247
    background elapsed time                      0
    background cpu time                          0
    sequence load elapsed time                   0
    parse time elapsed                        6374
    hard parse elapsed time                   5997
    sql execute elapsed time                  2939
    connection management call elapsed        1660
    failed parse elapsed time                    0
    failed parse (out of shared memory)          0
    hard parse (sharing criteria) elaps          0
    hard parse (bind mismatch) elapsed           0
    PL/SQL execution elapsed time               95
    inbound PL/SQL rpc elapsed time              0
    PL/SQL compilation elapsed time              0
    Java execution elapsed time                  0
    repeated bind elapsed time                  48
    RMAN cpu time (backup/restore)               0
    The session is not reporting additional CPU usage or parse time.
    Let's check one of the session's statistics:
    SELECT
      SS.VALUE
    FROM
      V$SESSTAT SS,
      V$STATNAME SN
    WHERE
      SN.NAME='consistent gets'
      AND SN.STATISTIC#=SS.STATISTIC#
      AND SS.SID=310;
         VALUE
           163
    Not many consistent gets after 20+ minutes.
    Let's take a look at the plan:
    SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
    SQL_ID        CHILD_NUMBER
    04mpgrzhsv72w            0
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
    select 1 from dual where regexp_like   (' ','^*[ ]*a')
    NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
          Please verify value of SQL_ID and CHILD_NUMBER;
          It could also be that the plan is no longer in cursor cache (check v$sql_plan)
    No plan...
    Let's take a look at the 10053 trace file:
    Registered qb: SEL$1 0x19157f38 (PARSER)
      signature (): qb_name=SEL$1 nbfros=1 flg=0
        fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    CVM: Considering view merge in query block SEL$1 (#0)
    CBQT: Validity checks failed for 7uqx4guu04x3g.
    Subquery Unnest
    SU: Considering subquery unnesting in query block SEL$1 (#0)
    Set-Join Conversion (SJC)
    SJC: Considering set-join conversion in SEL$1 (#0).
    Predicate Move-Around (PM)
    PM: Considering predicate move-around in SEL$1 (#0).
    PM:   Checking validity of predicate move-around in SEL$1 (#0).
    PM:     PM bypassed: Outer query contains no views.
    FPD: Considering simple filter push in SEL$1 (#0)
    FPD:   Current where clause predicates in SEL$1 (#0) :
              REGEXP_LIKE (' ','^*[ ]*a')
    kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
    predicates with check contraints:  REGEXP_LIKE (' ','^*[ ]*a')
    after transitive predicate generation:  REGEXP_LIKE (' ','^*[ ]*a')
    finally:  REGEXP_LIKE (' ','^*[ ]*a')
    apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
    kkoqbc-start
                : call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
    kkoqbc-subheap (create addr=000000001915C238)
    Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
    I am not sure that this is a good example - the query either executes very fast, or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
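    As a side note, here is a hedged Java analogue (not Oracle's regexp engine, and not from the original thread) of the phenomenon described above: a pattern whose evaluation is pure CPU with zero I/O, and whose run time explodes with input length because of backtracking.

    import java.util.regex.Pattern;

    // Sketch of a CPU-only "tuning event": "(a+)+b" against a run of 'a's with no
    // trailing 'b' forces exponential backtracking in java.util.regex -- zero I/O, all CPU.
    public class RegexBurn {
        public static void main(String[] args) {
            Pattern p = Pattern.compile("(a+)+b");
            for (int n = 10; n <= 26; n += 4) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++) sb.append('a');   // never matches: no 'b'
                long start = System.currentTimeMillis();
                boolean matched = p.matcher(sb).matches();
                long ms = System.currentTimeMillis() - start;
                System.out.println("n=" + n + "  matched=" + matched + "  time=" + ms + " ms");
            }
        }
    }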

  • How to Execute the Business objects created in ABAP from webDynpro applicat

    What are the steps, or where are the help documents, for accessing Business Objects created in ABAP or R/3 systems from a Web Dynpro project?

    Hello Vishal,
    I couldn't find any useful documents for your purpose.
    However, I had a similar requirement and implemented it using GCP APIs. But before I send you the code, I would like to know your exact requirement. What are you trying to achieve? Do you just want to execute the BO and get the result, or does your requirement involve something more?
    Regards,
    Sudeep.

  • How to improve the picture on a Samsung TV from a mac pro?

    I currently have a Samsung HDTV [ 1920x1080 ] connected via a VGA cable to my 2007/8 Mac Pro through a NVIDIA GeForce 8800 GT card.
    Can I improve the sharpness of the Samsung by connecting via the HDMI/DVI sockets on the back?
    Having said that, the NVIDIA doesn't have an HDMI/DVI output [ long thin USB-type connector ], so presumably I would need a converter.
    Maybe a new video card?

    Can I improve the sharpness of the Samsung by connecting via the HDMI/DVI sockets on the back?
    VGA is purely analog, so the digital output of your computer has to be converted to analog and then back to digital when it gets to the TV. DVI can carry both analog and digital; most likely it will use a digital connection and should be much sharper. Without the model number of your TV it's hard to say.
    Having said that, the NVIDIA doesn't have an HDMI/DVI output (long thin USB-type connector), so presumably I would need a converter.
    If your TV only has an HDMI connection, then yes, you need a DVI-to-HDMI cable: http://store.apple.com/us/product/TR842LL/A?mco=MTY3ODQ5OTY.

  • I have a new ipad with retina display and the orientation change from portrait to landscape is extremely slow? How can I improve the speed?

    Any ideas on how to improve the speed ?

    Hi rodtheprod,
    I apologize, I'm a bit unclear on the issue you are describing. If a movie from the iTunes Store supports multiple languages, they can usually be changed as outlined in the following article:
    iTunes: Language settings for video
    http://support.apple.com/kb/HT5562
    If this is not possible and no changes have been made to the country settings for the iTunes Store, you may want to try redownloading the movie (if it is still available):
    Download past purchases
    http://support.apple.com/kb/HT2519
    Regards,
    - Brenden

  • How to improve spreadsheet speed when single-threaded VBA is the bottleneck.

    My brother works with massive Excel spreadsheets and needs to speed them up. They are gigabytes in size and often have a million rows and many sheets within the workbook. He's already refined the sheets to take advantage of Excel's multi-thread recalculation and seen significant improvements, but he's hit a stumbling block. He uses extensive VBA code to aid clarity, but the VB engine is single-threaded, and these relatively simple functions can be called millions of times. Some functions are trivial (e.g. conversion functions) and easily unwound (at the expense of clarity), some could be unwound but that would make the spreadsheets much more complex, and others could not be unwound.
    He's aware of http://www.analystcave.com/excel-vba-multithreading-tool/ and similar tools but they don't help as the granularity is insufficiently fine. 
    So what can he do? A search shows requests for multi-threaded VBA going back over a decade.
    qts

    Hi,
    >> The VB engine is single-threaded, and these relatively simple functions can be called millions of times.
    The Office object model uses single-threaded apartments (STA); if the performance bottleneck is Excel object model operations, multithreading will not improve performance significantly.
    >> How to improve spreadsheet speed when single-threaded VBA is the bottleneck.
    Performance optimization should be driven by the business scenario. Since I'm not familiar with your business, I can only give you some general suggestions from the technical perspective. According to your description, the spreadsheet has reached gigabytes in size and the data volume is about 1 million rows. If so, I would suggest storing the data in SQL Server and then using analysis tools such as Power Pivot:
    Create a memory-efficient Data Model using Excel 2013 and the Power Pivot add-in
    As ryguy72 suggested, you can also leverage other third-party data processing tools according to your business requirement.
    Regards,
    Jeffrey

  • How to improve the response speed

    Dear Consultants,
    In a BI 7.0 EP environment, displaying 40,000 detailed records takes 3 minutes, which is too slow.
    How to improve the response speed ?
    Please give me some proposals and methods to solve this question.
    Thanks a lot & Best Regards
    Ricky

    Hello,
    3 minutes is not so bad for 40,000 rows of data. First, analyze where the time is spent, OLAP or the database; you can use RSRT for the analysis.
    Then,
    1. Create aggregates.
    2. Use caching and precalculation.
    3. Check 'Use Selection of Structure Elements' in the query properties in RSRT.
    4. Remove unnecessary calculations from the query.
    5. Remove unnecessary rows; try to make them free characteristics.
    6. If there is unused data in the InfoCube, you can archive it.
    regards,

  • HT5114 How can I create multiple accounts without the necessity to create multiple email addresses?  I now have 2 i-phones (mine and my wife's), 1 i-pad (mine), 1 i-pad mini (10 year old's), and 1 I-pod touch (11 year old's).  I would like to maintain pur

    How can I create multiple accounts without the necessity to create multiple email addresses?  I now have 2 i-phones (mine and my wife's), 1 i-pad (mine), 1 i-pad mini (10 year old's), and 1 I-pod touch (11 year old's).  I would like to maintain purchase control, but would like to be able to have a separate account ID for each person.  Is that possible?

    Each email address can only be on one iTunes account, and all purchases are tied to the account that buys them. If you want to have a separate iTunes account on each device, then you will need a separate email account for each iTunes account.

  • How can I force TestStand to create multiple instance of an activex server instead of share the same reference?

    Hi, All:
         I am trying to migrate an application coded in VC++ to TestStand + CVI. The application opens multiple serial COM ports, sends commands to test targets, and gets responses from those targets. The VC++ application creates multiple threads, and every thread creates an ActiveX server instance which is responsible for opening a serial port, sending test commands, and getting responses. It works fine because every thread has its own ActiveX instance, so data and commands are handled on the right serial port.
         Yet when I work in TestStand, I get problems. I chose the parallel model as the process model. I create an ActiveX object reference with the ActiveX/COM Adapter and store the reference in a sequence local variable. If there is only one test socket, it is OK. But if there are multiple test sockets, all communication between the application and the test targets goes directly to the last test target. I tried popping up a message within an execution to show the value of the ActiveX object reference, and all of them are identical. This is not the behavior I want.
        So, is it possible to force TestStand to create independent instances of an ActiveX server? How can I make it work?
    Thanks

    Thanks for your comment, Dan.
          Yet I do have problems, as none of us knows how to program an ActiveX server so that it can be forced to be single-instance or multi-instance.
    As I mentioned earlier, there is an existing application written in VC++ that can create multiple instances of this ActiveX server; all I have to do is create multiple threads and initialize an instance of the server in each thread to control multiple test targets.
         My colleague said he created the server with VC++ ATL and configured it to be dual-interface and STA. On the VC++ application (client) side, I use a smart pointer in each thread to create an instance of the server:
     IMySerialServerPtr pIMySerialServer;   // #import-generated smart pointer (name assumed)
     HRESULT hr = pIMySerialServer.CreateInstance("UUTCmd.MySerialServer");
     if (FAILED(hr))
         AfxMessageBox("Fail to create instance.");
    And then I got multiple instances of the ActiveX server. Every thread has its own COM port and can send/receive commands independently. I have no idea how to make this work in TestStand. Could you show me some reference documents or sample code?
    Thank you
    Cipher

  • How can i improve the speed of my mac?

    My Mac has a 667 MHz processor; how can I improve the speed?

    Sorry if that sounded not very helpful, but basically it comes down to that. Not much can improve the performance of a machine of that vintage enough to stand up to a current model, unless you spend almost as much on upgrades (some of which to consider are listed below).
    Some thought should be given to the budget (money, time, and effort) one is willing to spend. The G4 Digital Audio is possibly a good candidate for upgrading for someone who is interested in do-it-yourself computer stuff and bargain hunting for parts. It can even be a fun hobby. But even maxing out all possible upgrades will still be lacking in performance compared to the current Mac models (I don't have any specific benchmarks, just my gut feel), and you may spend a good portion of the cost of a new machine on all those parts for the old machine.
    Is this the 667MHz machine (G4 Digital Audio) in question?
    http://www.lowendmac.com/ppc/digital-audio-power-mac-g4.html
    http://eshop.macsales.com/Descriptions/specs/Framework.cfm?page=g4da.html&title= Power%20Macintosh%20G4%20Digital%20Audio
    or is it some other model?
    Here are some CPU upgrades, for example:
    http://eshop.macsales.com/MyOWC/Upgrades.cfm?sort=pop&model=162&type=Processor&T I=2420&shoupgrds=Show+Upgrades
    you can even get a small rebate for your original CPU
    http://eshop.macsales.com/Service/rebate-program/
    Any of those CPUs should double your processing speed, but you may also need more RAM...or the CPU will just be waiting for data to be read from the virtual memory on the hard drive.
    http://eshop.macsales.com/item/Other%20World%20Computing/133SD512328/
    you could max it out with a whopping 1.5 GB, with 3 of these.
    and you may need a better video card...
    http://eshop.macsales.com/MyOWC/Upgrades.cfm?sort=pop&model=162&type=Video&TI=29 34&shoupgrds=Show+Upgrades
    To add larger and faster hard drives than the original Ultra ATA/66 can handle you would need a PCI ATA133 or PCI SATA card...
    http://eshop.macsales.com/item/ACARD/AEC6280MOB/
    http://eshop.macsales.com/item/Sonnet%20Technology/TSATA/
    And if you need to install OS X from a DVD, and want to burn your own DVD, you may need a new optical drive, if one hasn't been installed already, to replace the CD-RW:
    http://eshop.macsales.com/MyOWC/Upgrades.cfm?Model=162&Type=InternalOpticalDrives&sort=pop
    Also, check out the Power Macintosh G4 forums if you have any questions about that particular model, (if I guessed correctly)
    http://discussions.apple.com/category.jspa?categoryID=113
    Even while I am trying to build a case to convince someone not to waste their money on this project, I am thinking to myself, where can I pick up one of these G4 towers to take on this project myself. Maybe a Quicksilver, no a FW400 MDD dual-boot for sure, I still have some OS 9 software. I did about all I can with my G3 Desktop, and spent more than my MacBook cost.

  • HT1338 how can I improve the speed of my Macbook?  It's become incredibly slow...

    how can I improve the speed of my Macbook?  It's become incredibly slow...

    Hi,
    Troubleshooting Steps are... Restart...  Reset...  Restore from Backup...  Restore as New...
    Try a Reset of the Phone... You will Not Lose Any Data...
    Turn the Phone Off...
    Press and Hold the Sleep/Wake Button and the Home Button at the Same Time...
    The Apple logo will Appear and then Disappear...
    Usually takes about 10 - 15 Seconds...
    Turn the Phone On...
    If that does not help... See Here:
    http://support.apple.com/kb/HT1414
    iPhone User Guide
    http://manuals.info.apple.com/en_US/iphone_user_guide.pdf
    iPhone Syncing
    http://www.apple.com/support/iphone/syncing/

  • Why is my macbook pro running so slow and how do I improve the speed?

    why is my macbook pro running so slow and how do I improve the speed?

    Write off the MS stuff as indicators for now--starting with Office 2008, they developed very long launch times. The time to launch Office 2008 on my old 1.25GHz G4 and on my 2.2GHz MBP is virtually the same. Even document opens/saves are slow compared to Office 2004 and earlier.
    1) You have adequate hard drive space so that eliminates one of the usual suspects.
    2) Check for runaway background processes. Quit all running user applications so the computer is at normal idle. Find Activity Monitor in Applications > Utilities and launch it. If you've not run it before, change the "Show" pulldown at the top of the AM window from its default of "User Processes" to "All Processes." There is a column for "%CPU." Click that column header to sort by CPU usage. Watch the processes "bubble" for about a minute to get a better picture of the usage.
    If any process is using more than about 25 percent of the CPU cycles while the computer is idling, post the names of the processes and we can see if they can be eliminated. My MBP shows nothing using more than about 5 percent on any given test run as long as I close all my user apps.
    3) If you have never done any sort of maintenance procedures, that can slow you down. Simply restarting the computer a few times a week clears a lot of temporary files. You also may wish to review this article on the Mac OS X periodic maintenance scripts, which, if run, also do a lot of cleanup:
    http://thexlab.com/faqs/maintscripts.html
    There are other useful articles on file maintenance associated with that one. Here's the index:
    http://thexlab.com/faqs/faqs.html
    Check out articles with "Maintain" and "Tuning" in the titles.
    4) Use Disk Utility to check the SMART Status of your hard drive. A failing drive can make things slow before it makes things stop.

  • How to improve the performance of the abap program

    hi all,
    I have created an ABAP program, and it is taking a long time because the number of records is large. Can anyone let me know how to improve the performance of my ABAP program,
    using the SE30 and ST05 transactions?
    Can anyone help me out step by step?
    regds
    haritha

    Hi Haritha,
    ->Run Any program using SE30 (performance analysis)
    Note: Click on the Tips & Tricks button from SE30 to get performance improving tips.
    Using this you can improve the performance by analyzing your code part by part.
    ->To turn runtime analysis on within ABAP code, insert the following statement:
    SET RUN TIME ANALYZER ON.
    ->To turn runtime analysis off within ABAP code, insert the following statement:
    SET RUN TIME ANALYZER OFF.
    ->Always check that the driver internal table is not empty when using FOR ALL ENTRIES
    ->Avoid for all entries in JOINS
    ->Try to avoid joins and use FOR ALL ENTRIES.
    ->Try to restrict the joins to 1 level only ie only for tables
    ->Avoid using Select *.
    ->Avoid having multiple Selects from the same table in the same object.
    ->Try to minimize the number of variables to save memory.
    ->The sequence of fields in 'where clause' must be as per primary/secondary index ( if any)
    ->Avoid creation of index as far as possible
    ->Avoid operators like <>, > , < & like % in where clause conditions
    ->Avoid select/select single statements in loops.
    ->Try to use 'binary search' in READ internal table. -->Ensure table is sorted before using BINARY SEARCH.
    ->Avoid using aggregate functions (SUM, MAX etc) in selects ( GROUP BY , HAVING,)
    ->Avoid using ORDER BY in selects
    ->Avoid Nested Selects
    ->Avoid Nested Loops of Internal Tables
    ->Try to use FIELD SYMBOLS.
    ->Try to avoid INTO CORRESPONDING FIELDS OF
    ->Avoid using SELECT DISTINCT; use DELETE ADJACENT DUPLICATES instead
    Check the following Links
    Re: performance tuning
    Re: Performance tuning of program
    http://www.sapgenie.com/abap/performance.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    check the below link
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    See the following link if it's any help:
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Check also http://service.sap.com/performance
    and
    books like
    http://www.sap-press.com/product.cfm?account=&product=H951
    http://www.sap-press.com/product.cfm?account=&product=H973
    http://www.sap-img.com/abap/more-than-100-abap-interview-faqs.htm
    http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
    Performance tuning for Data Selection Statement
    http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
    Debugger
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
    http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
    http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
    Run Time Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
    SQL trace
    http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
    CATT - Computer Aided Testing Too
    http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
    Test Workbench
    http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
    Coverage Analyser
    http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
    Runtime Monitor
    http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
    Memory Inspector
    http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
    ECATT - Extended Computer Aided testing tool.
    http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
    You can go to transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.
    edited by,
    Naveenan

  • HI All, How to improve the performance in given query?

    HI All,
    How to improve the performance in given query?
    Query is..
    PARAMETERS : p_vbeln type lips-vbeln.
    DATA : par_charg TYPE LIPS-CHARG,
    par_werks TYPE LIPS-WERKS,
    PAR_MBLNR TYPE MSEG-MBLNR .
    SELECT SINGLE charg
    werks
    INTO (par_charg, par_werks)
    FROM lips
    WHERE vbeln = p_vbeln.
    IF par_charg IS NOT INITIAL.
    SELECT single max( mblnr )
    INTO par_mblnr
    FROM mseg
    WHERE bwart EQ '101'
    AND werks EQ par_werks (index on werks only)
    AND charg EQ par_charg.
    ENDIF.
    Regards
    Steve

    Hi steve,
    Can't you use the material in your query (and not only the batch)?
    I am assuming your system has an index MSEG~M by MANDT + MATNR + WERKS (+ other fields). Depending on your system (how many different materials you have), this will probably speed up the query considerably.
    Anyway, in our system we ended up creating an index on CHARG, but leave that as a last option, only if selecting by MATNR and WERKS is not good enough for your scenario.
    Hope this helps,
    Rui Dantas
