Efficient subset generation

Trying to find an efficient method of taking a list of values, e.g. [1,2,3,4,5], and generating a list of all possible subsets. Anyone got any ideas on approaches? I have seen a couple of things that use Gray codes to help efficiency, but not really sure what these are, tbh!
Cheers
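
Since the question mentions Gray codes, here is a minimal sketch of Gray-code subset enumeration (the GraySubsets class is illustrative, not from this thread). Mapping the loop counter i to i ^ (i >>> 1) visits all 2^n bitmasks in an order where consecutive masks differ in exactly one bit, so each subset is obtained from the previous one by adding or removing a single element; that incremental step is where the efficiency gain comes from.

import java.util.*;

public class GraySubsets {
  public static void main(String[] args) {
    int[] set = {1, 2, 3, 4, 5};
    int n = set.length;                  // bitmask approach assumes n < 31
    for (int i = 0; i < (1 << n); i++) {
      int gray = i ^ (i >>> 1);          // i-th binary-reflected Gray code
      List<Integer> subset = new ArrayList<>();
      for (int bit = 0; bit < n; bit++) {
        if ((gray & (1 << bit)) != 0) subset.add(set[bit]);
      }
      System.out.println(subset);
    }
  }
}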

Thanks for your help dwq. You're quite right about that. It was fun listening to the computer trying to cope, though! I was wondering if there are any techniques to solve this problem more efficiently than our current approaches?

Hi Alex,
Here's a different approach to solve the problem more efficiently.
This doesn't use any in-memory Lists.
I tried it on sets > 10 and it gives the results fast enough.
For sets of size 24, it is fast if the System.out.println that prints the output to the console is commented out.
Let me know what you think.
Interesting problem :-)
Ajay
public class Subsets {

  public static void main(String[] args) {
    int[] set = new int[16];
    for (int i = 0; i < set.length; i++) {
      set[i] = i;
    }
    for (int i = 1; i <= set.length; i++) {
      System.out.println("Subsets of size " + i);
      SubsetIterator iterator = new SubsetIterator(set, i);
      int[] subset = iterator.nextSubset();
      while (subset != null) {
        for (int j = 0; j < subset.length; j++) {
          System.out.print(subset[j] + " ");
        }
        System.out.println();
        subset = iterator.nextSubset();
      }
      System.out.println();
    }
  }

  // Iterates over all subsets of a fixed size in lexicographic index order,
  // producing one subset at a time so no list of subsets is held in memory.
  static class SubsetIterator {
    private int[] set;
    private int setSize;
    private int subsetSize;
    private int[] subsetIndices;
    private boolean firstTime = true;

    public SubsetIterator(int[] set, int subsetSize) {
      this.set = set;
      this.setSize = set.length;
      this.subsetSize = subsetSize;
      subsetIndices = new int[subsetSize];
      for (int i = 0; i < subsetSize; i++) {
        subsetIndices[i] = i;
      }
    }

    /**
     * Returns the next subset (or null if there are no more subsets).
     */
    public int[] nextSubset() {
      if (!firstTime) incrementSubsetIndices();
      firstTime = false;
      int[] subset = new int[subsetSize];
      for (int i = 0; i < subsetSize; i++) {
        if (subsetIndices[i] >= setSize) return null;
        subset[i] = set[subsetIndices[i]];
      }
      return subset;
    }

    // Advances subsetIndices to the next combination: find the rightmost
    // index that can still grow, bump it, and reset the indices after it
    // to the smallest values that keep the combination strictly increasing.
    private void incrementSubsetIndices() {
      for (int i = subsetSize - 1; i >= 0; i--) {
        subsetIndices[i]++;
        int upperBound = setSize + i - subsetSize;
        if (subsetIndices[i] > upperBound) continue;
        for (int j = i + 1; j < subsetSize; j++) {
          subsetIndices[j] = subsetIndices[j - 1] + 1;
        }
        break;
      }
    }

    /** @TODO */
    public boolean hasNextSubset() {
      throw new UnsupportedOperationException("not yet implemented");
    }
  }
}

Similar Messages

  • How can I speed up PDF generation?

    I am working on 2 manuals in Frame 10, unstructured. I have installed all the latest FM patches. The smaller manual is 270 pages with 200 graphics inserted by reference. It takes 2.5 hours to "Save as PDF." The larger manual is 500 pages, 250 graphics and takes 3 hours to "Save as PDF".
    I have converted graphics to smaller .jpg files. I have freed up as much space on my PC as possible - it has 4GB memory and 300 GB hard drive space available, dual processor with i3 chip. I delete Temp files and reboot before creating PDF.
    If I turn off graphics (Esc+v+v), the PDF is created in just a few minutes.
    What else can I do to produce a complete PDF with graphics in a reasonable amount of time?

    I am a lone writer working on a single PC. No network drives. I open all files in the book before creating PDF.
    Many of the graphics are product photos - how to assemble, etc. I was provided with .psd and .jpg versions. Originally I inserted the .psd ones, but then a consultant advised us to use the .jpg instead. PDF generation is extremely long either way. Also had numerous huge .tif files that I saved as much smaller .jpg files.
    Manuals will be localized and printed. What type of graphics are advised in this case - for good print quality and efficient PDF generation?  
    doolie
    Are the Frame files and/or the imported objects coming from network servers? If so, you might get a speed-up by saving the manual to the local machine prior to printing, and printing to PostScript, where the .ps file is on the local machine. Then Distill separately (if you have the full Acrobat product). Distilling separately at least frees up Frame for that part of the rendering.
    "I have converted graphics to smaller .jpg files." From what? And I wouldn't have bet that using JPEG would help. The files may be smaller, but they still need to be filtered for export to Ps or PDF, and if they are raster, they may be subject to further processing during PDF generation based on the subsampling specified. You might try sending the objects as EPS (which requires minimal processing into Ps), and you might try making sure that raster images are already at the desired dpi and require no further downsampling in the rendering flow.

  • What are Apple's rules on refunding price difference ?

    Just bought a MacBook Air last Friday and didn't know that the price would drop less than a week later.  Will I be able to get the difference back?

    Since the processors in the new models are more energy efficient, it's definitely a worthwhile swap:
    Power-efficient fourth generation Intel Core i5 and Core i7 processors work in conjunction with OS X® Mavericks to give the 13-inch MacBook Air up to 12 hours of battery life and the 11-inch MacBook Air up to 9 hours of battery life. iTunes® movie playback times increase to 12 hours on the 13-inch notebook and 9 hours on the 11-inch notebook, adding up to two hours of playback time to the updated MacBook Air.

  • Video intensive work on Mac Pro? Advice Needed

    Hi everyone,
    I've been originally saving up for a while now for an iMac 24" as it offers the best specification price wise. But now I leaning on the Mac Pro and therefore need some advice.
    I know that there are a lot of posts asking for advice - need to buy a Mac, etc. - so therefore I would really appreciate it if I am afforded some help.
    I am mostly going to do a LOT of video encoding (via HandBrake) and I am looking for the best configuration available to achieve the lowest waiting time.

    I think you should go back and look at the other threads with advice, your budget. There is "ideal" and "affordable."
    Will it pay for itself in 24 months so you can afford the next revision at that time?
    Start with an 8-Core 3GHz.
    Add a Port Multiplier controller and two drive cases, 5 drives each, nearly 400MB/sec RAID.
    Throw in 8GB RAM.
    Add in costs for software upgrades, two sets of backups, four internal 500GB drives.
    Seagate has 750GB, as does WD. Hitachi has their $450 1000GB monster.
    Give yourself time to build, test, and optimize your setup, and expect to change it over time as you learn more.
    Can't afford that? Then trim the 8-core. But Apple has shown how efficient next generation applications already are for video when it comes to the 3 GHz 8-core Mac Pro.

  • What are the opcodes that put vars on the local frame array? Remove them?

    Does anyone know of a reference that lists all the opcodes that put a local variable in the local frame array (like ASTORE, BIPUSH, etc) and all the ones that remove them (like POP, ALOAD, etc)?

    BIPUSH doesn't affect the frame; it pushes a value onto the operand stack. The distinction may be subtle, but it's important, since the JVM is a stack-based architecture (in other words, if you don't make the distinction, then your list contains [almost] all of the JVM opcodes).

    You're right. So then, what exactly are POP, DUP, etc. doing? Are they taking from the frame?

    First thing to remember is that the frame and operand stack are logically different: the frame contains method parameters and local variables, while the operand stack is used for intermediate results. They may be implemented using a single processor stack (in x86 terms, the BP register would point to the start of the frame, while pushes and pops would modify the SP), but they don't have to be. I think the JVM spec even talks about storing frames in the heap.
    Here's an example:
    public static int foo(int a, int b) {
        int c = a + b;
        int d = (c + c) * 5;
        return d;
    }
    Running javap -c to show the bytecodes, we get this:
    public static int foo(int, int);
      Code:
       0:   iload_0
       1:   iload_1
       2:   iadd
       3:   istore_2
       4:   iload_2
       5:   iload_2
       6:   iadd
       7:   iconst_5
       8:   imul
       9:   istore_3
       10:  iload_3
       11:  ireturn

    The two method parameters, a and b, are stored in the frame in slots 0 and 1 (if this were an instance method, slot 0 would contain "this"). They get pushed onto the stack using the iload opcode (the normal version of this opcode takes a second byte that indexes into the frame; to minimize code size, the JVM has alternate versions of most frame-access opcodes that are hardcoded for specific low-numbered slots).
    After the loads, the stack will contain the values held in b and a; the iadd instruction adds these together, leaving the result on the stack. The istore_2 removes the topmost value from the operand stack, and stores it in the frame (in the slot assigned to variable "c").
    To your question about dup and so forth, I was hoping to demonstrate that with the repeated reference to variable "c"; instead, the compiler translated that into the two iload operations (opcodes 4 and 5). I suppose it makes sense, but it shows just how little optimization the compiler attempts to do -- instead, it relies on Hotspot being smart enough to optimize hardware accesses.
    The iconst_5 opcode is another example of something that affects the operand stack but not the frame: it pushes a hardcoded constant value onto the stack.
    It's also another case where the JVM defines several opcodes that do the same thing: the iconst_n for values -1 to 5, bipush for values that are outside this range but still smaller than a byte, sipush for values that will fit into a short, and ldc for everything else. The goal is efficient code generation: 1-3 bytes for most of the integer values that you're likely to load. The ldc opcode is interesting in its own right: it does a lookup into a per-class table to find the value to push. Not very efficient in terms of CPU cycles and memory access (although it can be inlined by Hotspot), but again efficient from a bytecode perspective: if you're loading a large constant, there's at least a chance that you'll be loading it multiple times.
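    To see this choice in action, one might compile a toy class like the following (ConstDemo is hypothetical, not from the thread) and run javap -c on it; the comments note the opcode javac typically emits for each constant size.
    public class ConstDemo {
        static int tiny()   { return 5; }      // iconst_5: value in -1..5
        static int small()  { return 100; }    // bipush: fits in a signed byte
        static int medium() { return 1000; }   // sipush: fits in a signed short
        static int large()  { return 100000; } // ldc: loaded from the constant pool
    }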
    Chapter 6 of the VM spec lists all the opcodes, and I seem to remember groupings by purpose: http://java.sun.com/docs/books/jvms/second_edition/html/Instructions.doc.html

    Unfortunately, it doesn't quite do that. They're mostly all in the same place on the summary page, but not quite all. (For example, the array ones were in a separate place.) And the link you posted lists them only alphabetically.

    Yeah, but jverd posted the link to section 3.11, which groups the opcodes. You probably want 3.11.2: http://java.sun.com/docs/books/jvms/second_edition/html/Overview.doc.html#6348
    Which leads to the more interesting question: why?

  • XML Generation using an SQL query in an efficient way - Help needed urgently

    Hi
    I am facing the following issue while generating XML using an SQL query. I get the below given table using a query.
     ROW   CODE     ID   MARK
     ==============================
       1      4   2331    809
       2      4   1772    802
       3      4   2331    845
       4      5   2331    804
       5      5   2331    800
       6      5   2210    801
    I need to generate the below given xml using a query
    <data>
    <CODE>4</CODE>
    <IDS>
    <ID>2331</ID>
    <ID>1772</ID>
    </IDS>
    <MARKS>
    <MARK>809</MARK>
    <MARK>802</MARK>
    <MARK>845</MARK>
    </MARKS>
    </data>
    <data>
    <CODE>5</CODE>
    <IDS>
    <ID>2331</ID>
    <ID>2210</ID>
    </IDS>
    <MARKS>
    <MARK>804</MARK>
    <MARK>800</MARK>
    <MARK>801</MARK>
    </MARKS>
    </data>
    Can anyone help me with some idea to generate the above given CLOB message?

    not sure if this is the right way to do it but
    /* Formatted on 10/12/2011 12:52:28 PM (QP5 v5.149.1003.31008) */
    WITH data AS (SELECT 4 code, 2331 id, 809 mark FROM DUAL
                  UNION
                  SELECT 4, 1772, 802 FROM DUAL
                  UNION
                  SELECT 4, 2331, 845 FROM DUAL
                  UNION
                  SELECT 5, 2331, 804 FROM DUAL
                  UNION
                  SELECT 5, 2331, 800 FROM DUAL
                  UNION
                  SELECT 5, 2210, 801 FROM DUAL)
    SELECT TO_CLOB (
                 '<DATA>'
              || listagg (xml, '</DATA><DATA>') WITHIN GROUP (ORDER BY xml)
              || '</DATA>')
              xml
      FROM (  SELECT    '<CODE>'
                     || code
                     || '</CODE><IDS><ID>'
                     || LISTAGG (id, '</ID><ID>') WITHIN GROUP (ORDER BY id)
                     || '</ID></IDS><MARKS><MARK>'
                     || LISTAGG (mark, '</MARK><MARK>') WITHIN GROUP (ORDER BY id)
                     || '</MARK></MARKS>'
                        xml
                FROM data
            GROUP BY code)

  • Talkin' Bout My Generations: A Brief History of Intel-based Portable Macs

    During my first four years here at Discussions, I came across a fairly common problem while trying to help folks using Windows on a Mac: very few people I responded to could tell me what kind of system they were using. Many were users of portable Macs, so to try and help them out identifying the machines they used, I thought of making a guide to portable identification.  But as I was writing this article two years ago, I got thinking about a more detailed history of the MacBook family from 2006 to 2010. I’ve taken many of the news snippets I’ve read from Macworld magazine and other sources to provide the historical content in this guide and combined them with my personal opinions on each model. Specifications, where used, have been verified by Brock Kyle’s EveryMac.com and by Apple support documents as well as keynote speeches from Apple execs.  The opinions provided are those of the author and are independent of Apple, Inc., so in other words, if you feel differently about these machines…
    DON’T SHOOT THE MESSENGER!
    And now, the guide.  Enjoy!
    First generation (1G):
    These are the only 32-bit Intel Mac portables in the field, sporting Intel Core Duo (“Yonah”) processors from 1.83-2.16 GHz (Early '06, including Glossy)
    MacBook
    This long-awaited upgrade of the iBook has a port setup comparable to the Mid-'05 iBook--2 USB 2.0, 1 FW400, audio out, mini video.  Also uses an inset keyboard, which drew some groans from the community-at-large when it first launched.  Internally, uses an Intel GMA950 graphics system that borrows up to 64 MB as video RAM and adds 16 MB overhead.
    Case type: Solid white or black polycarbonate shell
    Chipset: Intel 945GM
    Standard RAM: 512 MB (432 MB usable)
    Maximum RAM: 2.00 GB PC2-5300 DDR2 SDRAM (1968 MB usable)
    Pros: Solid performance vs. iBook, good basic machine for the Web, hard drive is user-serviceable.
    Cons: Poor graphics make this unit a scratch for mid-level business work, games or creative apps; limited RAM, no 64-bit support
    MacBook Pro
    This was Apple's Intel debut, along with the iMac (Core Duo).  Apple flashed a 1.67 GHz prototype at Macworld Expo ‘06 that was scratched in production for a 1.83 GHz model.  Supply chain economics resulted in an optical drive downgrade to a standard single-layer drive from the double-layer drives in the late '05 PowerBooks.  It's also the only model in the MacBook Pro continuum not to bear a FireWire 800 port.  Although functionally similar to the MacBook that followed it, this line has discrete graphics by way of AMD's RADEON X1600--up to 256 MB.  Slightly revised versions, rolled in by mid-year, included a glossy display and improved video RAM.
    Case type: Anodized aluminum composite with plastic edging.
    Chipset: Intel 945GM
    Standard RAM: 1 GB
    Maximum RAM: 2.00 GB PC2-5300 DDR2 SDRAM
    Pros: Good step up from PB '05, can run pro apps and games with ease
    Cons: limited RAM, no 64-bit support, no DVD±DL support, lack of FW800 a bother for some
    Second generation (2G):
    The 2G portables (“Late 2006” in Apple-speak) were a mild speed bump of the 1G lines, replacing the 32-bit Core with the 64-bit Core2 (“Merom”).  Processor speeds ranged from 2.0 GHz-2.33 GHz. Apple fixed many 1G shortcomings here, but retained the 945 family chipsets until well into 2007.  As a result of the 945 family’s addressing limitations, usable RAM is limited to 3 GB, even when 4 GB can be installed. (See http://www.everymac.com/systems/apple/macbook_pro/faq/macbook-pro-core-2-duo-3-gb-memory-limitation-details.html)  Further, Apple has chosen to limit Windows support on these units to Vista; anything else is “use at own risk”.
    On the plus side, these 2G portables are the absolute earliest qualifiers for Mac OS X Lion, albeit with a significantly limited user experience—that is, many features of note simply are not possible given the nature of the 2G internals.
    MacBook
    No visible markers set these units apart from the 1G models, and all internals are the same save for the Core2 CPU.  These units were slightly revised in 2007 to enable draft 802.11n support; those models shipped in October 2006 and onward could download an update to enable 802.11n. The only way to confirm a 2G MacBook is via software; the Model ID is either “2,1” or “2,2”.
    Case type: Solid white or black polycarbonate shell
    Chipset: Intel 945GM
    Standard RAM: 1 GB (944 MB usable)
    Maximum RAM: 3.00 GB PC2-5300 DDR2 SDRAM (2992 MB usable)
    Pros: Core2 offers 64-bit support and modest speed boost, max RAM up
    Cons: Still comes up short for high-demand applications.
    MacBook Pro
    Functionally similar to its predecessor while retaining the AMD X1600 graphics, the 2G Pro had three notable differences.  This line marks the permanent return of the FireWire 800 port—this one’s on the right side. Also back for an encore is the double-layer SuperDrive; Apple’s suppliers finally had the size of optical drive that Apple needed.  Like the MacBook, it also gets a lift from the new Core2 CPUs with twice as much L2 cache as their predecessors and their trendier plastic-clad siblings.
    Case type: Anodized aluminum composite with plastic edging.
    Chipset: Intel 945GM
    Standard RAM: 1 GB
    Maximum RAM: 3.00 GB PC2-5300 DDR2 SDRAM
    Pros: FW800 is back, as is DVD±DL; max RAM up, graphics still strong
    Cons: Speed improvement only nominal, Windows Vista support still lacking in spots (X1000-series chips are not DX10 qualified)
    Third generation (3G):
    The “Mid/Late 2007” portables were somewhat of a redesign from the inside, though they remained similar to 2G models when viewed from without.  Common to both lines is the Intel 965 chipset family, best known by its Intel codename, “Santa Rosa”; with it, the system bus got ramped to 800 MT/s while the memory bus remained at 667 MT/s.  Here, the Core2 gets another modest speed bump, with standard frequencies ranging from 2.1 GHz-2.4 GHz.  At this time, the RAM ceiling was lifted, allowing 4 GB to be used in all models and making these Macs capable 64-bit machines.  Windows x64 variants will run on this class, but it requires Boot Camp 2.1 or higher and some finesse with installing individual software packages, since Apple’s installer places a soft block on these units.
    Also of note: 3G and 4G MacBook Pros were particularly susceptible to a defect in the NVIDIA graphics chip, which, left unchecked, would cause these units not to display video, or to show scrambled video.  Apple has a current repair program to fix this issue if you should run across it, but time is running out.  Unless you are aware that the defect has been repaired, these models are best avoided.
    MacBook
    By the time the 3G models surfaced, the 2G models were dealing with heavy criticism for not being refreshed in sync with the Pro models.  Apple had three convincing reasons for such a delay. First came the iPhone EDGE, for which development was a top priority.  The delay actually bought some time for Apple to reveal the other two reasons; Intel was providing the GMA X3100 as a companion to the GM965, which in itself was a modest improvement over the GMA 950 used in the first two iterations; and Apple had been working on its latest flagship OS, “Leopard”, released just days before the new MacBook surfaced on All Saints’ Day (11/1).  One might say that waiting does indeed pay off; judging from Macworld’s bench scores of the 3G MacBooks, 2007 was a good year to upgrade the old iBook to something better.
    Case type: Solid white or black polycarbonate shell
    Chipset: Intel GM965
    Standard RAM: 1 GB (880 MB usable)
    Maximum RAM: 4.00 GB PC2-5300 DDR2 SDRAM (3952 MB usable)
    Pros: Better graphics, potentially faster WLAN support, improved speed, conservative energy usage
    Cons: Poor graphics in Windows, game support on both platforms limited to casual titles (many FPS/RTS/MMO games not supported)
    MacBook Pro
    The 3G Pro underwent a massive interior overhaul in June 2007, sporting NVIDIA GeForce 8600M GT graphics and—for the first time in an Apple portable—an option to build a Core2 Extreme into the unit at 2.6 GHz.  These were the first portables to carry 802.11n as a standard option, as well as the first Apple portables to use an LED-backlit display.  The 3G Pro also meets or exceeds all Windows Vista operating requirements, and was one of the best performing computers to run Vista, according to PC World.
    Unfortunately for longtime notebook users, the 3G lines of the MacBook Pro also mark some “lasts”.  The line of 3G Pros was the last line of portables to have officially shipped with Tiger, the last portables to include an Apple Remote as standard equipment, and, perhaps more notably, the last to bear a traditional numeric keypad.
    Case type: Anodized aluminum composite with plastic edging.
    Chipset: Intel GM965
    Standard RAM: 2 GB
    Maximum RAM: 4.00 GB PC2-5300 DDR2 SDRAM
    Pros: Significantly improved graphics, greater energy efficiency over 2G units due to chipset and display upgrades, fastest unit of its time for current OSes, solid all-around performance, potentially faster WLAN support.
    Cons: Not quite “future-proof”
    Fourth generation (4G)
    The “Early 2008” portables were met with fervent anticipation, as Apple hinted about “something in the air” at what would be CEO Steve Jobs’ final Macworld Expo address. Notebooks were all the rage, as was the upcoming iPhone software upgrade that gave rise to application development and the App Store.  Exciting news indeed, it was.  Yet, as was the norm in Jobsian monologues, he had “one more thing” to show off. Inter-office memos?  Nope, but it did arrive in the classic manila envelope used for such.  It was the first-generation MacBook Air, part of a 4G lineup that saw revamped Core2 CPUs ranging from 1.6 GHz all the way up to 2.6 GHz depending on model and build options.
    The new CPUs were based on Intel’s latest “Penryn” cores, some of which received a drop in L2 cache versus the “Merom” cores used in 2G and 3G units.  However, the drop in cache did little to impact performance; the new CPUs were actually faster by a slight margin at the same speeds as prior Core2’s, per Macworld’s bench scores.  As there were few changes in case design apart from removing the keypad from the MacBook Pro, only software can separate a 4G unit from a 3G unit.
    The 4G units, and all units following, officially support x64-native Windows via Boot Camp 2.1 as included on their Install Discs, or on discs with future versions of OS X and Boot Camp.
    MacBook
    The 4G MacBook saw the processor upgrade and little else, but the bump was likely enough to convince any but the hard-core 12” PowerBook enthusiasts to cross over to Intel. Because it’s still based on the Santa Rosa (GM965) platform, the 20-plus percentage point improvements touted by tech-savvy bloggers and enthusiast sites are never realized. Rather, some sources have documented a rough improvement of between three percent and ten percent over the 3G units.
    Sadly for some, this model is the last MacBook to bear any size and speed of FireWire port.
    Case type: Solid white or black polycarbonate shell (as of late 2008, white only)
    Chipset: Intel GM965
    Standard RAM: 2 GB (1904 MB usable)
    Maximum RAM: 4.00 GB PC2-5300 DDR2 SDRAM (3952 MB usable)
    Pros: Still a solid machine for light work, cheap, fast for its price
    Cons: It’s the only cheap way to make your FireWire gear work
    MacBook Air
    The new kid on the block this go-around, the MacBook Air is Apple’s first sub-notebook since the PowerBook Duo of the early 1990’s. Classified as a “thin and light”, the Air is a very striking definition of that term.  At three pounds weight and 0.16” to 0.76” thickness, and with logic circuitry the length of a standard No. 2 pencil, Apple could crow about making “the world’s thinnest notebook” and still pack more punch into a space of 14 inches at a time when other sub-note vendors were still trying to shrink their wares.  These vendors, according to Jobs, started shrinking items that shouldn’t be shrunk. Where most sub-notes had 11” or 12” screens, for example, the Air packed in a 13-incher; and when a keyboard was needed for the Air, Apple went with a full-size board identical to the then one-and-a-half-year-old MacBook design, complete with inset keys.  From the MacBook Pro, the Air gained an aluminum finish as well as a backlit keyboard.  On its own, the Air introduced solid-state storage (colloquially “flash drives”) as hard drives for the Mac.  However, this option added $1,000 to the Air’s asking price and dropped its already limited storage capacity from 80 GB to 64 GB.  To add insult to injury in some minds, the Air also dropped common expansion options and an internal optical drive to acquire its legendary dimensions.  Left after shrinkage: a single USB port, an audio jack, and a “micro-DVI” video port. Despite these sacrifices, the 1G MacBook Air still outclasses other sub-notes where it counts, because its chipset is the same GM965 used in the 3G and 4G MacBook offerings, in addition to having the fastest low-voltage CPU’s of the day in custom quarter-sized packages. Its performance in comparison to full-featured notebooks is lower by way of processor speed, and yet normal for a portable of its class.
    Case type: Anodized aluminum
    Chipset: Intel GM965
    Standard RAM: 2 GB onboard (1904 MB usable)
    Pros: Size and weight offer maximum portability, big screen and keyboard offer comfort for travelers, multi-gesture trackpad has a large surface for easy usability, and price is on par for class.
    Cons: Limited expansion options, limited storage, and service-removable battery; costly add-ons required for use in environments where WLAN isn’t an option; not well suited to Windows variants beyond XP.
    MacBook Pro
    Not much new here from the 3G lines, save for the absent keypad.  Base specs were upped by small increments, and dedicated VRAM doubled for all models.  Nonetheless, the 4G Pro can make a capable, if not solid, gaming unit (as if the 3G unit wasn’t competent in its own right).  Like the 3G unit, it is also well suited to Vista and its 64-bit variant, and it can easily run Windows 7 in its many forms as well.
    Case type: Anodized aluminum composite with plastic edging.
    Chipset: Intel GM965
    Standard RAM: 2 GB
    Maximum RAM: 4.00 GB PC2-5300 DDR2 SDRAM
    Pros: Robust graphics, flexible options,and multi-gesture trackpad
    Cons: What’s not to like?  If you live or die crunching numbers, it’s tougher, but doable.
    Fifth generation (5G)
    As is done in every odd generation, Apple reworked the entire line of notebooks from within for the “Late 2008/Early 2009” cycle.  In addition, Apple was hard at work on a totally new and totally trend-setting casing process for its portables.  The result: an extreme makeover not seen in Apple’s portable lines since the 68K-to-PowerPC transitions of the early 1990’s.  To rework the interior of the MacBook family, Apple went to NVIDIA—not Intel—for a high-performance logic solution to be used in notebooks.  NVIDIA was working on a desktop chipset at the time; but if Steve Jobs’ statement at Apple’s October ‘08 notebook event is to be believed, Apple designers asked NVIDIA to make it mobile, and the company delivered an MCP logic set dubbed “GeForce 9400M” unto Apple.  All lines thus benefited from markedly faster graphics and the adoption of ultra-fast DDR3 memory.  Here, the 5G MacBook and 2G MacBook Air became passable all-around units, with the 5G MacBook Pro sporting dynamically switchable graphics engines.
    For the exterior makeover, Apple Senior Designer Jon Ive revealed that Apple’s latest process created a “unibody” enclosure that was lighter and required fewer parts to produce, for it was milled entirely from one sheet of aluminum.  To complete the makeover, Apple drew on its experience with the Aluminum line of iMac desktops and fused all-glass displays into the new assemblies.
    For some models, the fifth generation held well into 2010, and so received only incremental upgrades to the CPU, GPU, and system RAM.
    All models from this generation, save for the white MacBook, include a button-less, customizable multi-gesture trackpad.
    MacBook and MacBook Pro (15”)
    Because the two lines had converged in this iteration, only subtle visual differences kept them apart. Both lines dropped the FireWire 400 port and exchanged their respective video outputs for a common Mini DisplayPort, based on an emerging standard.  The loss of certain status quo features on both lines (FW400 on the MacBook, traditional keyboard on the Pro) drew some whining in certain circles, but such things happen when Apple does this sort of retooling.
    With the 5G notebooks, Apple further blurred the line that once separated MacBook from MacBook Pro, allowing the former a backlit keyboard in its fullest build.  Apple hoped that this would swing “fence people” toward the MacBook instead of a low-cost Windows PC, since these are folks that would be forced to spend $2,000 on a MacBook Pro because they want to play games in either Mac OS or Windows, casually or otherwise.
    Case type: Anodized aluminum unibody
    Chipset: NVIDIA GeForce 9400M MCP (with GeForce 9600M GT GPU in Pro models)
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM: 8.00 GB PC3-8500 DDR3 SDRAM (7936 MB usable)
    Pros: Fast graphics, lighter, more durable, energy efficient, hard drive is user-serviceable, wealth of options available
    Cons: Changes in port makeup require conversion adapters; may frustrate some
    MacBook Pro (17”)
    At MacWorld Expo ’09, Apple Senior Vice-President Phil Schiller spent more than 90 minutes touting the company’s latest software offerings.  In typical Apple style, however, Schiller couldn’t let Apple make what would be its final curtain call without a fantastic final act. The 5G notebook lineup would be rounded out with a stunning revision to one of Apple’s crown jewels: the 17-inch MacBook Pro.  Though it’s fundamentally similar to its smaller siblings and received the same makeover from its 4G incarnation that the others received, its battery puts it in a class of its own; Apple claimed not only that the battery will last an unheard-of 8 hours, but also that it would continue to function at nearly 100% potential after 300 charge cycles and drop to 80% potential after 1000 cycles, thereby lasting three times longer than most conventional notebook batteries, including its own.  The reason for this is the battery’s adaptive charging circuitry, which requests that charge be directed only to the cells that require it, instead of the system charging the battery uniformly across all cells.  Real world testing of Apple’s claims yielded figures closer to 5 hours.  Still, the fact that the battery is fixed in place seemed irrelevant.  Fixed batteries have been a source of worry for many gadget lovers since the original iPod debuted in 2001.
    Nonetheless, Apple’s flagship retained many of the same advantages and disadvantages of its 5G fellows, and yet it remained a solid machine for those fortunate enough to afford its nearly $3,000 base sticker price.  Build-to-order models nearly eclipsed the 3 GHz mark—but as Don Adams would have said, missed it by that much.
    Case type: Anodized aluminum unibody
    Chipset: NVIDIA GeForce 9400M MCP with GeForce 9600M GT GPU
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM: 8.00 GB PC3-8500 DDR3 SDRAM (7936 MB usable)
    Pros: Powerful, lighter, more durable,energy efficient, hard drive is user-serviceable, wealth of options available
    Cons: Changes in port makeup require conversion adapters; may frustrate some; expensive entry price; fixed battery
    MacBook Air (Second Generation and Third Generation)
    How do you improve on the world’s most eye-catching notebook?  Apparently, you improve upon it from within, as CEO Jobs outlined during the October event introducing the 5G notebook architecture.  Like its full-sized siblings, the 2G Air ships with an NVIDIA 9400M MCP and 2 GB of fast DDR3 RAM onboard, even as the ultra-low-voltage Core2 CPU at its heart has seen only minuscule improvements in overall clock speed.  Hard drive options have seen more modest gains, with the standard drive adding 50% more space than its predecessor and the SSD option doubling to 128 GB.  With these adjustments, the Air becomes more palatable to travelers willing to accept certain tradeoffs in exchange for size and weight.  For Windows users under Boot Camp, the Air also becomes a more capable, if still underpowered, Vista unit, albeit one that won’t gain much from an x64-based variant thereof.
    Case type: Anodized aluminum unibody
    Chipset: NVIDIA GeForce 9400M MCP
    Standard RAM: 2 GB onboard (1792 MB usable)
    Pros: Size and weight offer maximum portability, big screen and keyboard offer comfort for travelers, multi-gesture trackpad has a large surface for easy usability, price is on par for class, better storage options than previous model.
    Cons: No change in onboard RAM to offset new hardware overhead, add-ons still required where WLAN isn’t available, adapter required for new Mini DisplayPort with most displays
    MacBook (’09 White)
    A surprise refresh in early 2009 brought an entry-level MacBook under $1,000 with most of the 5G features above.  To keep it that affordable, Apple ended up blending a third-gen polycarbonate MacBook exterior with a modified 5G logic assembly.  Users of this model got the same fast graphics engine as the one in the mainstream aluminum MacBooks, all the while keeping the single and now scarce FW400 port; but they also gave up niceties such as the multitouch trackpad and the slightly quicker DDR3 RAM.  Nonetheless, this 5G model was most likely aimed at those looking to start with a Mac and get a full-fledged computer.
    Case type: Polycarbonate unibody shell
    Chipset: NVIDIA GeForce 9400M MCP
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM:  4 GB (3840 MB usable)
    Pros: Solid construction, cheaper than prior models, few if any changes from previous model
    Cons: Limited trackpad motion support, RAM capped at 4 GB, looks less classy
    Sixth generation (6G)
    Perhaps the only generation not to offer a significant step up from the previous one, the sixth generation opened with a minor redesign of the white MacBook, which at long last had caught up with the earliest 5G models and therefore offered a better value than its previous model.  MacBook Airs also see but a minor speed bump.  True improvement is not achieved until the arrival of the first mobile processors to use the emerging “Nehalem” microarchitecture and the return of multithreading support.  The processor’s redesign also affords the ability to shut down inactive processor cores whilst boosting the clock speed of those that remain active. Unfortunately, MacBook Pros are the only models to receive this welcome upgrade, even if it only comes in a dual-core package to start with.  All other models run on the last known releases of the “Penryn” core—a harbinger of things to come, maybe?
    MacBook
    From mid-2009 onward, MacBooks continued to shadow their upper-crust siblings, but in the process, they ultimately catch up—to 2008’s lineup.  It’s from here that these models take a multitouch glass-backed trackpad, a fixed battery, and the Mini DisplayPort monitor connection.  A remolded unibody design gives this model a curved front.  FireWire finally drops, as does the IR receiver; Apple found that many consumers buying the MacBook just didn’t care for either add-on.  Still, subtle bumps in CPU speed and battery life may have been enough to justify an upgrade from previous generation models.
    Case type: Polycarbonate unibody shell
    Chipset: NVIDIA GeForce 9400M MCP
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM:  4 GB (3840 MB usable)
    Pros: Long battery life, sleeker and slimmer design, slightly lighter
    Cons: Almost no change from 5G setup; ports dropped
    MacBook Pro (15” and17”)
    As mentioned above, the 6G Pro offered little in the way of improvements over the 5G lineup—or so it might seem at first glance.  Externally, they appear very much like the 5G models, except that Apple has added an SD card slot to the port array—a big upgrade for camera buffs who usually resorted to carrying cheap and oft-clunky card readers dangling from a USB port.
    Internally, these two flagship units make several changes to accommodate the Intel “Nehalem” architecture mentioned above.  No longer could a third-party chipset be used—the direct result of a protracted battle between Intel and NVIDIA over the terms of the deal that allowed the Core2 to run on a non-Intel logic set.  In its place, Intel supplied the “Arrandale” Core i-series multipurpose processors along with the then-new 5 Series logic sets.  Arrandale brought with it a completely new bus known as QuickPath Interconnect, which in theory was much improved over the traditional front-side bus. Also making their debut were Turbo Boost, which shut down one core and turned up the other based on demand, and the Intel HD Graphics core, a welcome boost over previous Intel offerings that for their part lacked muscle; this new engine could render 720p HD where 2007’s X3100 had to feign it.  Last but certainly not least, Hyper-Threading Technology, absent since the last of the Pentium 4 600 series CPU’s were cast in 2006, returns to little fanfare but grants users twice the effective cores during heavy workload.
    Flash storage, introduced on MacBook Airs, makes its way into the mainstream lines with this generation and all that will follow it, though the drives’ expense and potential loss of storage space were not always justifiable, even though flash storage delivers on the promise of improved read/write access speeds.
    Despite these huge gains, users anticipating quad-core chips on Macs when high-end Windows notebooks already had such were at the very least disappointed.
    For the discrete graphics engine, Apple again turned to NVIDIA for its 300-series chips, these being significantly more powerful than the 9-series previously used. Video RAM remained unchanged.
    Case type: Anodized aluminum unibody
    Chipset: Intel 5 Series/HD Graphics with NVIDIA GT 330M
    Standard RAM: 4 GB (3840 MB usable in low-energy modes)
    Maximum RAM: 8.00 GB PC3-8500 DDR3 SDRAM (7936 MB usable in low-energy modes)
    Pros: Big lift from i-Series CPU’s, SD cards now usable without extra hardware, more starting RAM, SSD options for better performance
    Cons: Low-energy modes use a graphics engine that is a drag on gaming for some (per user reports), still dual-core.
    Seventh generation (7G)
    There may be some discussion as to whether a seventh generation of Mac portables exists, or whether this line should be part of the sixth generation instead.  Apple’s internal naming schemes for the mainstream models did indeed point to a seventh generation, so on that basis, here’s a definition: seventh-gen models were, like the sixth-gen models, a mild refresh. This time, though, the refresh targeted only those models not receiving the Arrandale i-Series upgrade.  All models received the final upgrade of the Penryn Core2’s, as well as replacing NVIDIA’s 9400M MCP with a more robust version, the 320M.
    With Windows XP in decline from 2009’s release of Windows 7, this became the last iteration of Mac portables to run the nearly decade-old platform.  Vista, too, would meet its end here, though Microsoft still considers it in mainstream support until mid-2012.  Perhaps Apple wished to streamline their Windows support to a single version—or perhaps it realized what so many others outside of itself knew from experience: Vista was a disaster, and it was best left to rot with its distant ancestor, Windows Me, in the depths of history’s sewers.
    MacBook
    The trusty steed of many a cheapskate since its 2006 intro received what would be its last upgrade ever in mid 2010.  The Penryn processor gets a slight bump from 2.1 GHz to 2.4 GHz, and NVIDIA 320M graphics round out the package.  Otherwise, there’s not much new, for its reign as King of Value would quickly come to a close.
    Case type: Polycarbonate unibody shell
    Chipset: NVIDIA GeForce 320M MCP
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM:  4 GB (3840 MB usable)
    Pros: Modest gains for CPU and GPU—but that’s it
    Cons: Still cheap looking with a plastic shell—and you paid WHAT?
    MacBook Pro (13”)
    Now firmly rebranded as a Pro model, Apple’s 13” aluminum notebook was poised to gain clout with “prosumers” and other types that loved the aluminum look but did not want to pay extra for the new CPU’s of the 15” and 17” models.  Still, these units made big gains from the new NVIDIA MCP and Penryn chips up to 2.66 GHz. All in all, this seemed a very well-balanced unit for one a full generation behind its peers, and one that was well worth its $1,200 entry fee.
    Case type: Anodized aluminum unibody
    Chipset: NVIDIA GeForce 320M MCP
    Standard RAM: 4 GB (3840 MB usable)
    Maximum RAM: 8.00 GB PC3-8500 DDR3 SDRAM (7936 MB usable)
    Pros: Full featured for the size, hits a “sweet spot” for the price
    Cons: Aging architecture now at limit, no i-Series chips to be found
    MacBook Air (Fourth Generation)
    The head-turning Air gets a late 2010 all-around makeover while expanding the family of portables to include Apple’s smallest notebook since the 12” PowerBook made a splash in 2003. Even at the new 11.6” size, the Air gets a slightly thicker body than its previous two models.  The extra thickness isn’t enough to keep it from being the thinnest, but it is enough to add a much-requested second USB port and to eliminate the clumsy door covering the initial USB port and the video port, in addition to exposing the MagSafe connector, making the once-awkward connection more accessible.  This also gives it a more rectangular profile in line with Apple’s other models.
    The upgraded 13” model doubles onboard flash storage and adds the SD card slot from the MacBook Pros.
    Both models now feature factory upgrades to storage and RAM—up to 256 GB and 4 GB respectively—as well as new options from the ultra-low-voltage Penryn Core2’s.  Both models also benefitted from NVIDIA’s 320M MCP. Starting at 1.4 GHz with 64 GB of storage and 2 GB RAM for $999, the MacBook Air slowly began to earn its place as the value leader, costing just as much as the venerable white MacBook.  Even so, with so many options for this model, there was something to fit every budget.
    These models are the first to carry a specific OS requirement when running Boot Camp, despite running Snow Leopard as previous models can.  Windows 7 is a must, though one would be hard-pressed trying to squeeze it into a minimally configured 11” unit.
    Case type: Anodized aluminum unibody
    Chipset: NVIDIA GeForce 320M MCP
    Standard RAM: 2 GB (1792 MB usable)
    Maximum RAM:  4 GB (3840 MB usable)
    Pros: Still thin and light, wealth of options available, extra USB port, ports much more accessible
    Cons: Options fixed at time of order, Boot Camp requirements too specific for some users
    What About Sandy Bridge?
    As of February 2011, Apple was one of the first manufacturers to introduce Intel’s Sandy Bridge platform to the world, ushering in the eighth and current generation of portable Macs.  With this generation, quad-core, eight-thread i-Series CPU’s are a staple of the 15” and 17” high end, while dual-core, quad-thread models still populate the lower end.  Nonetheless, all models now benefit from the same new technology, with none fully ahead of or behind the others.
    All models also feature a breakthrough in peripheral connectivity that combines the bandwidths of both PCI Express and DisplayPort into a bus markedly faster than any bus presently in use.  Christened “Thunderbolt”, the new interface offers enormous potential with its theoretical 10 gigabit-per-second bandwidth.  However, devices using Thunderbolt are only beginning to emerge on the market, thus it is still too early to offer any concrete opinion regarding this technology.
    As these models are currently on sale (and have recently been updated) at the Apple Store and Apple Authorized Resellers worldwide, to proffer any opinion of current models defeats the purpose of this, an historical document of Mac portable evolution.
    Conclusion and Final Thoughts
    To have witnessed and tracked the evolution of Apple’s notebook lines from 2006 to the present is no small feat.  One could say that doing so is in fact opening a window on the history of Apple itself, for it is in Apple’s notebooks that we have seen the greatest innovations both from the company and in computing itself.  From their inception in 2006, Apple’s Intel notebooks have evolved into some of the best and most reliable notebooks on the market today. To be able to run Windows as well as the Mac OS only solidifies that position.
    Yet, with each stage of their evolution, the MacBook, MacBook Pro and MacBook Air, while they have made significant forward progress, have had to sacrifice features that some users find essential.  Still, while the complaints roll in with each generation of notebooks, time must march on. Apple is a computer company after all, and must continually update its wares if it is to remain in its current position near the top of the industry at large.
    The stark realities of Apple’s business, however, should never be used as an excuse to buy the latest and greatest hardware even if yours seems less capable than someone else’s. Holding onto older Apple hardware may actually put you at an advantage, since you may still be able to work with hardware that newer models don’t support.  This is one of many reasons Macs tend to stick around longer than most Windows PCs.
    I certainly hope you have enjoyed this look back at Apple’s Intel notebook lines.  As a proud member of the Mac community for almost eight years and a volunteer whose role connects him to computing past, I find this knowledge of the past fascinating; and yet it is vital to maintain such a background, as it can give us as users an idea of where the industry will be in the months and years to come.


  • Report Generation - Word Template

    Hi,
    I am trying to write a program that captures up to 873 different scope shots.
    At the end, the program will generate a Word Report with all scope shots and measured parameters.
    I have used the Word Template in the past and created a field for each parameter.  However, this approach will not be efficient with this complex test.  The report format of each test is shown below, with the red box indicating the field where data will be plugged in by LabVIEW.
    Creating a Word template for the entire test suite can create unnecessary post-processing to delete the parts of the test that the user didn't run.  If the user ran only 3 tests, then he would have to delete the other 870 templates, which would not be cost efficient.  Also, this could create an unnecessary programming mess that is not very flexible.
    My questions -- is there any way I can copy this template below as I run the test?  Rather than creating a Word template for 873 different tests, of which I may only run part, can I copy the template based on the number of tests that the user would like to run?
    Any pointers will be appreciated.
    Thanks,
    Chetna
    Intersil Inc.
     919-405-3696
     Example of the scope data figure:
    [Word template excerpt: a scope-shot figure placeholder with a broken cross-reference caption, four numbered fill-in boxes, and a test-conditions line: Vin=□V, Vout=□V: □A, Temp=□C, Mode=□, Fs=□, Serial #=□]

    Hi Vivek,
    I have the LabVIEW 8.5 Development version and the Report Generation Toolkit.  I will be writing a VI to fill the data in and generate a Word report with up to 873 different tests.  The template that I attached is the sample template for only one test; I will be doing 873 tests.  The sample test template consists of a scope shot and some measured parameters.
    My situation - the VI that I will be writing will fill in the sample template that I attached up to 873 times.  For me to generate a Word report for all tests, I would have to create a report template with 873 sample templates.  Where there is a red box, I would have to create a field for each box so that LabVIEW will know where to put the data.
    Well, that is a lot of work in the first place.  Secondly, keeping up with 873 different permutation fields can create a lot of unnecessary programming mess.  Lastly, if the user decides to run only part of the test, then there would be a lot of empty test templates.  If the user decides to run only 10 tests, the final report will have 860 empty boxes.  That is not efficient.
    My goal is to create the Word report using just one sample template rather than 873.  I would like to copy the sample template based on the number of tests the user defines.
    My question is - is there any way to copy the sample test template that I attached using the Report Generation Toolkit?
    Regards,
    Chetna Tailor

  • Generation of previews is still a very significant problem

    An old problem surfaced again yesterday, after several years of stability.
    I decided to upgrade my previews in preparation for a move to a new iPad.  My reading says that, with the retina screen, the new iPad is capable of handling larger image files and showing much more detail than previous machines.
    I have had much of my Aperture Library on my iPhone for well over a year and I liked this functionality.
    So I set my preference for Preview quality higher and selected all my photos (around 22000) and started an "Update Previews".  Three or four times, the process crashed and I restarted it.  Then there was a crash that could not be handled.  Each time I rebooted Aperture, it crashed before I could do anything about it.
    So I did all three recovery procedures.  Holding down Command-Option on reboot, I repaired permissions, repaired the database and then rebuilt the database.  This let me reboot successfully, but once the Update Preview recommenced automatically, Aperture crashed again.
    My reading says that there was a corrupted file somewhere.  But there's no way to find it without trying Update Preview on one or two or a few image files at a time—a process which could take days or weeks with a big library.
    So I wiped my disk clean and copied over one of my back-up libraries and gave up on the Update Preview.  Presumably, my back-up library has the corrupt file in it and I wasn't about to lose all Aperture functionality again.  Fortunately, I maintain a number of Vaults and other Library back-ups (created outside Aperture), so I can recover when my default Library gets into a state like yesterday.
    The bottom line is this—there appears to be no way to easily root out a single corrupted file.  When you try to generate a Preview of a corrupted image file, Aperture will crash, and sometimes this is fatal and you have to simply replace the library you are using.  So there's a crying need for an effective method of finding and correcting a single corrupted file, AND/OR the software should report a corrupted file, but carry on with the generation of previews for the thousands of good files.
    I've decided to avoid this problem in the future by changing my work method.  I can do without the previews.  I've turned off the Maintain Previews for this Project, for all my projects.  I will select a few of my best images and will create high-end previews of them for my iPad and that will satisfy my needs.

    My reading says that there was a corrupted file somewhere.  But there's no way to find it without trying Update Preview on one or two or a few image files at a time—a process which could take days or weeks with a big library.
    Yes, that procedure would be terribly slow on a large library. But you could search more efficiently by a "divide and conquer" method:
    Back up your library.
    Divide the set of images in your library into two albums of equal size, album A1 and album A2.
    Try to build the Previews for all images in Album A1 at once.
    If that succeeds, you know that A1 does not contain the corrupted image - you'll have to search A2 next.
    But if it crashes again, the corrupted image is in A1 and not A2.
    Depending on the first result, now build two albums B1 and B2 from the images in whichever of A1 or A2 contains the corrupted image.
    Then divide again, and again, and again, until the album is so small that you can spot the broken image.
    Using this subdivision strategy you could scan 100000 images in only about 17 steps, since each test halves the set still to be searched; a sketch of the idea in code follows.
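    The same bisection written as code, for intuition only: a minimal Java sketch in which a hypothetical batchOk predicate stands in for "preview generation over this batch completed without crashing" (Aperture exposes no such API; the albums play that role by hand).
    import java.util.List;
    import java.util.function.Predicate;

    public class CorruptImageBisector {
      // Returns the index of the corrupted image, assuming exactly one bad
      // image makes batchOk fail for any batch containing it.
      static int findCorrupt(List<String> images, Predicate<List<String>> batchOk) {
        int lo = 0, hi = images.size();   // the culprit lies in [lo, hi)
        while (hi - lo > 1) {
          int mid = (lo + hi) / 2;
          if (batchOk.test(images.subList(lo, mid))) {
            lo = mid;                     // first half is clean: search the rest
          } else {
            hi = mid;                     // first half crashed: search within it
          }
        }
        return lo;
      }
    }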
    So there's a crying need for an effective method of finding and correcting a single corrupted file, AND/OR the software should report a corrupted file, but carry on with the generation of previews for the thousands of good files.
    Agreed!  Have you sent feedback to Apple? http://www.apple.com/feedback/aperture.html
    Regards
    Léonie

  • How to select the data efficiently from the table

    Hi everyone,
    I need some help in selecting data from the FAGLFLEXA table. I have to select many amounts from different groups of G/L accounts
    (the groups are predefined here, and each contains a set of G/L account numbers).
    If I run a separate SELECT for each group, it will be a performance issue. In order to avoid that, what should I do? Can anyone suggest a method or a sample query so that I can perform the task efficiently?

    Hi,
    1. Select and keep the data in an internal table.
    2. Avoid SELECT inside LOOP ... ENDLOOP.
    3. Try to use FOR ALL ENTRIES.
    Check the details below.
    Hi Praveen,
    Performance Notes
    1. Keep the Result Set Small
    You should aim to keep the result set small. This reduces both the amount of memory used in the database system and the network load when transferring data to the application server. To reduce the size of your result sets, use the WHERE and HAVING clauses.
    Using the WHERE Clause
    Whenever you access a database table, you should use a WHERE clause in the corresponding Open SQL statement. Even if a program containing a SELECT statement with no WHERE clause performs well in tests, it may slow down rapidly in your production system, where the data volume increases daily. You should only dispense with the WHERE clause in exceptional cases where you really need the entire contents of the database table every time the statement is executed.
    When you use the WHERE clause, the database system optimizes the access and only transfers the required data. You should never transfer unwanted data to the application server and then filter it using ABAP statements.
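    The same rule sketched in JDBC terms rather than ABAP (an illustration of the principle, not SAP code), using an in-memory H2 database assumed to be on the classpath and a made-up results(code, id, mark) table: the WHERE clause lets the database filter, so only matching rows cross the network.
    import java.sql.*;

    public class PushdownDemo {
      public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
          st.execute("CREATE TABLE results(code INT, id INT, mark INT)");
          st.execute("INSERT INTO results VALUES (4, 2331, 809), (5, 2210, 801)");
          // Good: the database applies the filter; only code = 4 rows travel.
          try (PreparedStatement ps =
                   con.prepareStatement("SELECT id, mark FROM results WHERE code = ?")) {
            ps.setInt(1, 4);
            try (ResultSet rs = ps.executeQuery()) {
              while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getInt("mark"));
              }
            }
          }
          // Anti-pattern: "SELECT * FROM results" followed by an if (code == 4)
          // test in the loop would transfer every row to the application first.
        }
      }
    }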
    Using the HAVING Clause
    After selecting the required lines in the WHERE clause, the system then processes the GROUP BY clause, if one exists, and summarizes the database lines selected. The HAVING clause allows you to restrict the grouped lines, and in particular, the aggregate expressions, by applying further conditions.
    Effect
    If you use the WHERE and HAVING clauses correctly:
    • There are no more physical I/Os in the database than necessary
    • No unwanted data is stored in the database cache (it could otherwise displace data that is actually required)
    • The CPU usage of the database host is minimized
    • The network load is reduced, since only the data that is required by the application is transferred to the application server.
    Minimize the Amount of Data Transferred
    Data is transferred between the database system and the application server in blocks. Each block is up to 32 KB in size (the precise size depends on your network communication hardware). Administration information is transported in the blocks as well as the data.
    To minimize the network load, you should transfer as few blocks as possible. Open SQL allows you to do this as follows:
    Restrict the Number of Lines
    If you only want to read a certain number of lines in a SELECT statement, use the UP TO <n> ROWS addition in the FROM clause. This tells the database system only to transfer <n> lines back to the application server. This is more efficient than transferring more lines than necessary back to the application server and then discarding them in your ABAP program.
    If you expect your WHERE clause to return a large number of duplicate entries, you can use the DISTINCT addition in the SELECT clause.
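    For instance (selection values are made up):
      DATA lt_accounts TYPE STANDARD TABLE OF faglflexa-racct.

      " The database stops after 100 hits; no surplus lines are
      " transferred and then discarded in ABAP.
      SELECT racct FROM faglflexa UP TO 100 ROWS
        INTO TABLE lt_accounts
        WHERE ryear = '2011'.

      " DISTINCT: duplicates are removed in the database
      " before the transfer.
      SELECT DISTINCT racct FROM faglflexa
        INTO TABLE lt_accounts
        WHERE ryear = '2011'.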
    Restrict the Number of Columns
    You should only read the columns from a database table that you actually need in the program. To do this, list the columns in the SELECT clause. Note here that the INTO CORRESPONDING FIELDS addition in the INTO clause is only efficient with large volumes of data, otherwise the runtime required to compare the names is too great. For small amounts of data, use a list of variables in the INTO clause.
    Do not use * to select all columns unless you really need them. However, if you list individual columns, you may have to adjust the program if the structure of the database table is changed in the ABAP Dictionary. If you specify the database table dynamically, you must always read all of its columns.
    Use Aggregate Functions
    If you only want to use data for calculations, it is often more efficient to use the aggregate functions of the SELECT clause than to read the individual entries from the database and perform the calculations in the ABAP program.
    Aggregate functions allow you to find out the number of values and find the sum, average, minimum, and maximum values.
    Following an aggregate expression, only its result is transferred from the database.
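    For example, to count the line items and total the amounts entirely in the database (names and values illustrative):
      DATA: lv_count TYPE i,
            lv_sum   TYPE faglflexa-hsl.

      " Only the two aggregate results cross the network,
      " not the individual line items.
      SELECT COUNT( * ) SUM( hsl )
        FROM faglflexa
        INTO (lv_count, lv_sum)
        WHERE ryear = '2011'
          AND racct = '0000400000'.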
    Data Transfer when Changing Table Lines
    When you use the UPDATE statement to change lines in the table, you should use the WHERE clause to specify the relevant lines, and then SET statements to change only the required columns.
    When you use a work area to overwrite table lines, too much data is often transferred. Furthermore, this method requires an extra SELECT statement to fill the work area.
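    A minimal sketch, using a hypothetical customer table ZTAB with invented fields:
      " Only the listed columns of the selected lines are changed;
      " nothing has to be read into a work area first.
      UPDATE ztab
        SET status     = 'X'
            changed_on = sy-datum
        WHERE docnr = '0100000001'.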
    Minimize the Number of Data Transfers
    In every Open SQL statement, data is transferred between the application server and the database system. Furthermore, the database system has to construct or reopen the appropriate administration data for each database access. You can therefore minimize the load on the network and the database system by minimizing the number of times you access the database.
    Multiple Operations Instead of Single Operations
    When you change data using INSERT, UPDATE, and DELETE, use internal tables instead of single entries. If you read data using SELECT, it is worth using multiple operations if you want to process the data more than once; otherwise, a simple SELECT loop is more efficient.
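    For example, with the same hypothetical ZTAB:
      DATA lt_new TYPE STANDARD TABLE OF ztab.

      " ... fill lt_new ...

      " One array operation (one data transfer) instead of
      " n single-line INSERT statements.
      INSERT ztab FROM TABLE lt_new.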
    Avoid Repeated Access
    As a rule you should read a given set of data once only in your program, and using a single access. Avoid accessing the same data more than once (for example, SELECT before an UPDATE).
    Avoid Nested SELECT Loops
    A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. You should therefore only use nested SELECT loops if the selection in the outer loop contains very few lines.
    However, using combinations of data from different database tables is more the rule than the exception in the relational data model. You can use the following techniques to avoid nested SELECT statements:
    ABAP Dictionary Views
    You can define joins between database tables statically and systemwide as views in the ABAP Dictionary. ABAP Dictionary views can be used by all ABAP programs. One of their advantages is that fields that are common to both tables (join fields) are only transferred once from the database to the application server.
    Views in the ABAP Dictionary are implemented as inner joins. If the inner table contains no lines that correspond to lines in the outer table, no data is transferred. This is not always the desired result. For example, when you read data from a text table, you want to include lines in the selection even if the corresponding text does not exist in the required language. If you want to include all of the data from the outer table, you can program a left outer join in ABAP.
    The links between the tables in the view are created and optimized by the database system. Like database tables, you can buffer views on the application server. The same buffering rules apply to views as to tables. In other words, it is most appropriate for views that you use mostly to read data. This reduces the network load and the amount of physical I/O in the database.
    Joins in the FROM Clause
    You can read data from more than one database table in a single SELECT statement by using inner or left outer joins in the FROM clause.
    The disadvantage of using joins is that redundant data is read from the hierarchically-superior table if there is a 1:N relationship between the outer and inner tables. This can considerably increase the amount of data transferred from the database to the application server. Therefore, when you program a join, you should ensure that the SELECT clause contains a list of only the columns that you really need. Furthermore, joins bypass the table buffer and read directly from the database. For this reason, you should use an ABAP Dictionary view instead of a join if you only want to read the data.
    The runtime of a join statement is heavily dependent on the database optimizer, especially when it contains more than two database tables. However, joins are nearly always quicker than using nested SELECT statements.
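    A join sketch that reads line items together with the account short text from SKAT in one statement (the chart of accounts 'INT' and the field names are assumptions):
      TYPES: BEGIN OF ty_line,
               racct TYPE faglflexa-racct,
               hsl   TYPE faglflexa-hsl,
               txt20 TYPE skat-txt20,
             END OF ty_line.
      DATA lt_lines TYPE STANDARD TABLE OF ty_line.

      " One SELECT instead of a SELECT loop over FAGLFLEXA with a
      " SELECT SINGLE on SKAT inside it.
      SELECT f~racct f~hsl t~txt20
        FROM faglflexa AS f
        INNER JOIN skat AS t
          ON t~saknr = f~racct
        INTO TABLE lt_lines
        WHERE f~ryear = '2011'
          AND t~spras = sy-langu
          AND t~ktopl = 'INT'.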
    Subqueries in the WHERE and HAVING Clauses
    Another way of accessing more than one database table in the same Open SQL statement is to use subqueries in the WHERE or HAVING clause. The data from a subquery is not transferred to the application server. Instead, it is used to evaluate conditions in the database system. This is a simple and effective way of programming complex database operations.
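    For example, a sketch selecting only accounts for which at least one line item exists (SKA1/FAGLFLEXA field names assumed):
      DATA lt_saknr TYPE STANDARD TABLE OF ska1-saknr.

      " The subquery is evaluated entirely in the database; its
      " result set is never transferred to the application server.
      SELECT saknr FROM ska1
        INTO TABLE lt_saknr
        WHERE ktopl = 'INT'
          AND EXISTS ( SELECT * FROM faglflexa
                         WHERE racct = ska1~saknr ).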
    Using Internal Tables
    It is also possible to avoid nested SELECT loops by placing the selection from the outer loop in an internal table and then running the inner selection once only using the FOR ALL ENTRIES addition. This technique stems from the time before joins were allowed in the FROM clause. On the other hand, it does prevent redundant data from being transferred from the database.
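    Applied to the original question, a sketch: collect the account numbers of all predefined groups into one internal table and read FAGLFLEXA in a single access (structure and field names invented):
      TYPES: BEGIN OF ty_acct,
               saknr TYPE ska1-saknr,
             END OF ty_acct.
      DATA: lt_accts   TYPE STANDARD TABLE OF ty_acct,
            lt_amounts TYPE STANDARD TABLE OF faglflexa.

      " ... fill lt_accts with the accounts of all groups ...

      " Guard: with an empty driver table, FOR ALL ENTRIES would
      " select ALL lines of FAGLFLEXA.
      IF NOT lt_accts[] IS INITIAL.
        SELECT * FROM faglflexa
          INTO TABLE lt_amounts
          FOR ALL ENTRIES IN lt_accts
          WHERE racct = lt_accts-saknr
            AND ryear = '2011'.
      ENDIF.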
    Using a Cursor to Read Data
    A further method is to decouple the INTO clause from the SELECT statement by opening a cursor using OPEN CURSOR and reading data line by line using FETCH NEXT CURSOR. You must open a new cursor for each nested loop. In this case, you must ensure yourself that the correct lines are read from the database tables in the correct order. This usually requires a foreign key relationship between the database tables, and that they are sorted by the foreign key.
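    A minimal cursor sketch, reading in packages of 1,000 lines (table and selection are illustrative):
      DATA: lc_cursor TYPE cursor,
            lt_chunk  TYPE STANDARD TABLE OF faglflexa.

      OPEN CURSOR lc_cursor FOR
        SELECT * FROM faglflexa WHERE ryear = '2011'.
      DO.
        " INTO TABLE replaces the contents of lt_chunk with the
        " next package of up to 1,000 lines.
        FETCH NEXT CURSOR lc_cursor
          INTO TABLE lt_chunk PACKAGE SIZE 1000.
        IF sy-subrc <> 0.
          EXIT.
        ENDIF.
        " process the current package here
      ENDDO.
      CLOSE CURSOR lc_cursor.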
    Minimize the Search Overhead
    You minimize the size of the result set by using the WHERE and HAVING clauses. To increase the efficiency of these clauses, you should formulate them to fit with the database table indexes.
    Database Indexes
    Indexes speed up data selection from the database. They consist of selected fields of a table, of which a copy is then made in sorted order. If you specify the index fields correctly in a condition in the WHERE or HAVING clause, the system only searches part of the index (index range scan).
    The primary index is always created automatically in the R/3 System. It consists of the primary key fields of the database table. This means that for each combination of fields in the index, there is a maximum of one line in the table. This kind of index is also known as UNIQUE.
    If you cannot use the primary index to determine the result set because, for example, none of the primary index fields occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
    You specify the fields of secondary indexes using the ABAP Dictionary. You can also determine whether the index is unique or not. However, you should not create secondary indexes to cover all possible combinations of fields.
    Only create one if you select data by fields that are not contained in another index, and the performance is very poor. Furthermore, you should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. As a rule, secondary indexes should not contain more than four fields, and you should not have more than five indexes for a single database table.
    If a table has more than five indexes, you run the risk of the optimizer choosing the wrong one for a particular operation. For this reason, you should avoid indexes with overlapping contents.
    Secondary indexes should contain columns that you use frequently in a selection, and that are as highly selective as possible. The fewer table entries that can be selected by a certain column, the higher that column’s selectivity. Place the most selective fields at the beginning of the index. Your secondary index should be so selective that each index entry corresponds to at most five percent of the table entries. If this is not the case, it is not worth creating the index. You should also avoid creating indexes for fields that are not always filled, where their value is initial for most entries in the table.
    If all of the columns in the SELECT clause are contained in the index, the system does not have to search the actual table data after reading from the index. If you have a SELECT clause with very few columns, you can improve performance dramatically by including these columns in a secondary index.
    Formulating Conditions for Indexes
    You should bear in mind the following when formulating conditions for the WHERE and HAVING clauses so that the system can use a database index and does not have to use a full table scan.
    Check for Equality and Link Using AND
    The database index search is particularly efficient if you check all index fields for equality (= or EQ) and link the expressions using AND.
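    For example, assuming a (hypothetical) index on RYEAR and RACCT, and lt_amounts declared as in the earlier sketches:
      " Both index fields are checked for equality and linked with
      " AND, so the database can use an index range scan.
      SELECT racct hsl FROM faglflexa
        INTO TABLE lt_amounts
        WHERE ryear = '2011'
          AND racct = '0000400000'.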
    Use Positive Conditions
    The database index search only supports conditions that describe the result in positive terms, for example, EQ or LIKE. Negative expressions such as NE or NOT LIKE cannot be supported by an index search.
    If possible, avoid using the NOT operator in the WHERE clause, because it is not supported by database indexes; invert the logical expression instead.
    Using OR
    The optimizer usually stops working when an OR expression occurs in the condition. This means that the columns checked using OR are not included in the index search. The exceptions are OR expressions at the outermost level of the condition. You should try to reformulate conditions that apply OR expressions to index-relevant columns, for example, into an IN condition.
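    For example (account numbers invented, lt_amounts as above):
      " Instead of:  racct = '...' OR racct = '...' OR racct = '...'
      SELECT racct hsl FROM faglflexa
        INTO TABLE lt_amounts
        WHERE ryear = '2011'
          AND racct IN ('0000400000', '0000400010', '0000400020').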
    Using Part of the Index
    If you construct an index from several columns, the system can still use it even if you only specify a few of the columns in a condition. However, in this case, the sequence of the columns in the index is important. A column can only be used in the index search if all of the columns before it in the index definition have also been specified in the condition.
    Checking for Null Values
    The IS NULL condition can cause problems with indexes. Some database systems do not store null values in the index structure. Consequently, this field cannot be used in the index.
    Avoid Complex Conditions
    Avoid complex conditions, since the statements have to be broken down into their individual components by the database system.
    Reduce the Database Load
    Unlike application servers and presentation servers, there is only one database server in your system. You should therefore aim to reduce the database load as much as possible. You can use the following methods:
    Buffer Tables on the Application Server
    You can considerably reduce the time required to access data by buffering it in the application server table buffer. Reading a single entry from table T001 can take between 8 and 600 milliseconds, while reading it from the table buffer takes 0.2 - 1 milliseconds.
    Whether a table can be buffered or not depends on its technical attributes in the ABAP Dictionary. There are three buffering types:
    • Resident buffering (100%) The first time the table is accessed, its entire contents are loaded in the table buffer.
    • Generic buffering In this case, you need to specify a generic key (some of the key fields) in the technical settings of the table in the ABAP Dictionary. The table contents are then divided into generic areas. When you access data with one of the generic keys, the whole generic area is loaded into the table buffer. Client-specific tables are often buffered generically by client.
    • Partial buffering (single entry) Only single entries are read from the database and stored in the table buffer.
    When you read from buffered tables, the following happens:
    1. An ABAP program requests data from a buffered table.
    2. The ABAP processor interprets the Open SQL statement. If the table is defined as a buffered table in the ABAP Dictionary, the ABAP processor checks in the local buffer on the application server to see if the table (or part of it) has already been buffered.
    3. If the table has not yet been buffered, the request is passed on to the database. If the data exists in the buffer, it is sent to the program.
    4. The database server passes the data to the application server, which places it in the table buffer.
    5. The data is passed to the program.
    When you change a buffered table, the following happens:
    1. The database table is changed and the buffer on the application server is updated. The database interface logs the update statement in the table DDLOG. If the system has more than one application server, the buffer on the other servers is not updated at once.
    2. All application servers periodically read the contents of table DDLOG, and delete the corresponding contents from their buffers where necessary. The granularity depends on the buffering type. The table buffers in a distributed system are generally synchronized every 60 seconds (parameter: rdisp/bufreftime).
    3. Within this period, users on non-synchronized application servers will read old data. The data is not recognized as obsolete until the next buffer synchronization. The next time it is accessed, it is re-read from the database.
    You should buffer the following types of tables:
    • Tables that are read very frequently
    • Tables that are changed very infrequently
    • Relatively small tables (few lines, few columns, or short columns)
    • Tables where delayed update is acceptable.
    Once you have buffered a table, take care not to use any Open SQL statements that bypass the buffer.
    The SELECT statement bypasses the buffer when you use any of the following:
    • The BYPASSING BUFFER addition in the FROM clause
    • The DISTINCT addition in the SELECT clause
    • Aggregate expressions in the SELECT clause
    • Joins in the FROM clause
    • The IS NULL condition in the WHERE clause
    • Subqueries in the WHERE clause
    • The ORDER BY clause
    • The GROUP BY clause
    • The FOR UPDATE addition
    Furthermore, all Native SQL statements bypass the buffer.
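    For example, this read deliberately bypasses the buffer (ZTAB is the hypothetical table from the sketches above):
      DATA lt_entries TYPE STANDARD TABLE OF ztab.

      " Forces a database read even though ztab is buffered,
      " e.g. to see changes not yet synchronized into the buffer.
      SELECT * FROM ztab BYPASSING BUFFER
        INTO TABLE lt_entries
        WHERE status = 'X'.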
    Avoid Reading Data Repeatedly
    If you avoid reading the same data repeatedly, you both reduce the number of database accesses and reduce the load on the database. Furthermore, a "dirty read" may occur with database systems other than Oracle. This means that the second time you read data from a database table, it may be different from the data read the first time. To ensure that the data in your program is consistent, you should read it once only and then store it in an internal table.
    Sort Data in Your ABAP Programs
    The ORDER BY clause in the SELECT statement is not necessarily optimized by the database system or executed with the correct index. This can result in increased runtime costs. You should only use ORDER BY if the database sort uses the same index with which the table is read. To find out which index the system uses, use SQL Trace in the ABAP Workbench Performance Trace. If the indexes are not the same, it is more efficient to read the data into an internal table or extract and sort it in the ABAP program using the SORT statement.
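    A sketch of the alternative (declarations as in the earlier sketches):
      " Read without ORDER BY ...
      SELECT racct hsl FROM faglflexa
        INTO TABLE lt_amounts
        WHERE ryear = '2011'.
      " ... and sort on the application server instead.
      SORT lt_amounts BY racct.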
    Use Logical Databases
    SAP supplies logical databases for all applications. A logical database is an ABAP program that decouples Open SQL statements from application programs. They are optimized for the best possible database performance. However, it is important that you use the right logical database. The hierarchy of the data you want to read must reflect the structure of the logical database; otherwise, it can have a negative effect on performance. For example, if you want to read data from a table right at the bottom of the hierarchy of the logical database, it has to read at least the key fields of all tables above it in the hierarchy. In this case, it is more efficient to use a SELECT statement.
    Work Processes
    Work processes execute the individual dialog steps in R/3 applications. The next two sections describe firstly the structure of a work process, and secondly the different types of work process in the R/3 System.
    Structure of a Work Process
    Work processes execute the dialog steps of application programs. They are components of an application server. The following diagram shows the components of a work process:
    Each work process contains two software processors and a database interface.
    Screen Processor
    In R/3 application programming, there is a difference between user interaction and processing logic. From a programming point of view, user interaction is controlled by screens. As well as the actual input mask, a screen also consists of flow logic. The screen flow logic controls a large part of the user interaction. The R/3 Basis system contains a special language for programming screen flow logic. The screen processor executes the screen flow logic. Via the dispatcher, it takes over the responsibility for communication between the work process and the SAPgui, calls modules in the flow logic, and ensures that the field contents are transferred from the screen to the flow logic.
    ABAP Processor
    The actual processing logic of an application program is written in ABAP - SAP’s own programming language. The ABAP processor executes the processing logic of the application program, and communicates with the database interface. The screen processor tells the ABAP processor which module of the screen flow logic should be processed next. The following screen illustrates the interaction between the screen and the ABAP processors when an application program is running.
    Database Interface
    The database interface provides the following services:
    • Establishing and terminating connections between the work process and the database.
    • Access to database tables
    • Access to R/3 Repository objects (ABAP programs, screens and so on)
    • Access to catalog information (ABAP Dictionary)
    • Controlling transactions (commit and rollback handling)
    • Table buffer administration on the application server.
    The following diagram shows the individual components of the database interface:
    The diagram shows that there are two different ways of accessing databases: Open SQL and Native SQL.
    Open SQL statements are a subset of Standard SQL that is fully integrated in ABAP. They allow you to access data irrespective of the database system that the R/3 installation is using. Open SQL consists of the Data Manipulation Language (DML) part of Standard SQL; in other words, it allows you to read (SELECT) and change (INSERT, UPDATE, DELETE) data. The tasks of the Data Definition Language (DDL) and Data Control Language (DCL) parts of Standard SQL are performed in the R/3 System by the ABAP Dictionary and the authorization system. These provide a unified range of functions, irrespective of database, and also contain functions beyond those offered by the various database systems.
    Open SQL also goes beyond Standard SQL to provide statements that, in conjunction with other ABAP constructions, can simplify or speed up database access. It also allows you to buffer certain tables on the application server, saving excessive database access. In this case, the database interface is responsible for comparing the buffer with the database. Buffers are partly stored in the working memory of the current work process, and partly in the shared memory for all work processes on an application server. Where an R/3 System is distributed across more than one application server, the data in the various buffers is synchronized at set intervals by the buffer management. When buffering the database, you must remember that data in the buffer is not always up to date. For this reason, you should only use the buffer for data which does not often change.
    Native SQL is only loosely integrated into ABAP, and allows access to all of the functions contained in the programming interface of the respective database system. Unlike Open SQL statements, Native SQL statements are not checked and converted, but instead are sent directly to the database system. Programs that use Native SQL are specific to the database system for which they were written. R/3 applications contain as little Native SQL as possible. In fact, it is only used in a few Basis components (for example, to create or change table definitions in the ABAP Dictionary).
    The database-dependent layer in the diagram serves to hide the differences between database systems from the rest of the database interface. You choose the appropriate layer when you install the Basis system. Thanks to the standardization of SQL, the differences in the syntax of statements are very slight. However, the semantics and behavior of the statements have not been fully standardized, and the differences in these areas can be greater. When you use Native SQL, the function of the database-dependent layer is minimal.
    Types of Work Process
    Although all work processes contain the components described above, they can still be divided into different types. The type of a work process determines the kind of task for which it is responsible in the application server. It does not specify a particular set of technical attributes. The individual tasks are distributed to the work processes by the dispatcher.
    Before you start your R/3 System, you determine how many work processes it will have, and what their types will be. The dispatcher starts the work processes and only assigns them tasks that correspond to their type. This means that you can distribute work process types to optimize the use of the resources on your application servers.
    The following diagram shows again the structure of an application server, but this time, includes the various possible work process types:
    The various work processes are described briefly below. Other parts of this documentation describe the individual components of the application server and the R/3 System in more detail.
    Dialog Work Process
    Dialog work processes deal with requests from an active user to execute dialog steps.
    Update Work Process
    Update work processes execute database update requests. Update requests are part of an SAP LUW that bundles the database operations resulting from the dialog in a database LUW for processing in the background.
    Background Work Process
    Background work processes process programs that can be executed without user interaction (background jobs).
    Enqueue Work Process
    The enqueue work process administers a lock table in the shared memory area. The lock table contains the logical database locks for the R/3 System and is an important part of the SAP LUW concept. In an R/3 System, you may only have one lock table. You may therefore also only have one application server with enqueue work processes.
    Spool Work Process
    The spool work process passes sequential datasets to a printer or to optical archiving. Each application server may contain several spool work processes.
    The services offered by an application server are determined by the types of its work processes. One application server may, of course, have more than one function. For example, it may be both a dialog server and the enqueue server, if it has several dialog work processes and an enqueue work process.
    You can use the system administration functions to switch a work process between dialog and background modes while the system is still running. This allows you, for example, to switch an R/3 System between day and night operation, where you have more dialog than background work processes during the day, and the other way around during the night.
    ABAP Application Server
    R/3 programs run on application servers. They are an important component of the R/3 System. The following sections describe application servers in more detail.
    Structure of an ABAP Application Server
    The application layer of an R/3 System is made up of the application servers and the message server. Application programs in an R/3 System are run on application servers. The application servers communicate with the presentation components, the database, and also with each other, using the message server.
    The following diagram shows the structure of an application server:
    The individual components are:
    Work Processes
    An application server contains work processes, which are components that can each execute one dialog step of an application. Each work process is linked to a memory area containing the context of the application being run. The context contains the current data for the application program, which needs to be available in each dialog step. Further information about the different types of work process is contained later on in this documentation.
    Dispatcher
    Each application server contains a dispatcher. The dispatcher is the link between the work processes and the users logged onto the application server. Its task is to receive requests for dialog steps from the SAP GUI and direct them to a free work process. In the same way, it directs screen output resulting from the dialog step back to the appropriate user.
    Gateway
    Each application server contains a gateway. This is the interface for the R/3 communication protocols (RFC, CPI/C). It can communicate with other application servers in the same R/3 System, with other R/3 Systems, with R/2 Systems, or with non-SAP systems.
    The application server structure as described here aids the performance and scalability of the entire R/3 System. The fixed number of work processes and dispatching of dialog steps leads to optimal memory use, since it means that certain components and the memory areas of a work process are application-independent and reusable. The fact that the individual work processes work independently makes them suitable for a multi-processor architecture. The methods used in the dispatcher to distribute tasks to work processes are discussed more closely in the section Dispatching Dialog Steps.
    Shared Memory
    All of the work processes on an application server use a common main memory area called shared memory to save contexts or to buffer constant data locally.
    The resources that all work processes use (such as programs and table contents) are contained in shared memory. Memory management in the R/3 System ensures that the work processes always address the correct context, that is the data relevant to the current state of the program that is running. A mapping process projects the required context for a dialog step from shared memory into the address of the relevant work process. This reduces the actual copying to a minimum.
    Local buffering of data in the shared memory of the application server reduces the number of database reads required. This reduces access times for application programs considerably. For optimal use of the buffer, you can concentrate individual applications (financial accounting, logistics, human resources) into separate application server groups.
    Database Connection
    When you start up an R/3 System, each application server registers its work processes with the database layer, and receives a single dedicated channel for each. While the system is running, each work process is a user (client) of the database system (server). You cannot change the work process registration while the system is running. Neither can you reassign a database channel from one work process to another. For this reason, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. This has important consequences for the programming model explained below.
    Dispatching Dialog Steps
    The number of users logged onto an application server is often many times greater than the number of available work processes, and it is not restricted by the R/3 system architecture. Furthermore, each user can run several applications at once. The dispatcher has the important task of distributing all dialog steps among the work processes on the application server.
    The following diagram is an example of how this might happen:
    1. The dispatcher receives the request to execute a dialog step from user 1 and directs it to work process 1, which happens to be free. The work process addresses the context of the application program (in shared memory) and executes the dialog step. It then becomes free again.
    2. The dispatcher receives the request to execute a dialog step from user 2 and directs it to work process 1, which is now free again. The work process executes the dialog step as in step 1.
    3. While work process 1 is still working, the dispatcher receives a further request from user 1 and directs it to work process 2, which is free.
    4. After work processes 1 and 2 have finished processing their dialog steps, the dispatcher receives another request from user 1 and directs it to work process 1, which is free again.
    5. While work process 1 is still working, the dispatcher receives a further request from user 2 and directs it to work process 2, which is free.
    From this example, we can see that:
    • A dialog step from a program is assigned to a single work process for execution.
    • The individual dialog steps of a program can be executed on different work processes, and the program context must be addressed for each new work process.
    • A work process can execute dialog steps of different programs from different users.
    The example does not show that the dispatcher tries to distribute the requests to the work processes such that the same work process is used as often as possible for the successive dialog steps in an application. This is useful, since it saves the program context having to be addressed each time a dialog step is executed.
    Dispatching and the Programming Model
    The separation of application and presentation layer made it necessary to split up application programs into dialog steps. This, and the fact that dialog steps are dispatched to individual work processes, has had important consequences for the programming model.
    As mentioned above, a work process can only make database changes within a single database logical unit of work (LUW). A database LUW is an inseparable sequence of database operations. The contents of the database must be consistent at its beginning and end. The beginning and end of a database LUW are defined by a commit command to the database system (database commit). During a database LUW, that is, between two database commits, the database system itself ensures consistency within the database. In other words, it takes over tasks such as locking database entries while they are being edited, or restoring the old data (rollback) if a step terminates in an error.
    A typical SAP application program extends over several screens and the corresponding dialog steps. The user requests database changes on the individual screens that should lead to the database being consistent once the screens have all been processed. However, the individual dialog steps run on different work processes, and a single work process can process dialog steps from other applications. It is clear that two or more independent applications whose dialog steps happen to be processed on the same work process cannot be allowed to work with the same database LUW.
    Consequently, a work process must open a separate database LUW for each dialog step. The work process sends a commit command (database commit) to the database at the end of each dialog step in which it makes database changes. These commit commands are called implicit database commits, since they are not explicitly written into the application program.
    These implicit database commits mean that a database LUW can be kept open for a maximum of one dialog step. This leads to a considerable reduction in database load, serialization, and deadlocks, and enables a large number of users to use the same system.
    However, the question now arises of how this method (1 dialog step = 1 database LUW) can be reconciled with the demand to make commits and rollbacks dependent on the logical flow of the application program instead of the technical distribution of dialog steps. Database update requests that depend on one another form logical units in the program that extend over more than one dialog step. The database changes associated with these logical units must be executed together and must also be able to be undone together.
    The SAP programming model contains a series of bundling techniques that allow you to group database updates together in logical units. The section of an R/3 application program that bundles a set of logically-associated database operations is called an SAP LUW. Unlike a database LUW, a SAP LUW includes all of the dialog steps in a logical unit, including the database update.
    Happy Reading...
    shibu

  • IPod Touch 1G (1st Generation) Battery Issue

    Hello. I realize there has been a ton of conversation about battery life on the Touch - so much so that it makes it difficult to search through in the Forum. So here is my battery problem. Recently, I updated to the latest 3.1.3 firmware on my wife's iPod Touch. The other day I noticed that the iPod's battery only lasted about 4 or 5 hours and all we were doing was listening to music (through the headphone port). It had a full charge when we started and we just queued up a large playlist (NOT Pandora or anything streaming) and let it play. We did, however, have wifi on. Originally, I thought it was the battery going bad and started to look into replacement. First though, I ran some tests. Here they are in chronological order:
    1. - Fully charged iPod, all apps installed, and wifi OFF. I played videos from the iPod for 5.5 hours straight (with auto dimming turned on). Pretty impressive. Must not be a bad battery.
    2. - Fully charged iPod, deleted all non-Apple apps, and wifi ON. There were 2 email accounts and a Google calendar account set to fetch, but push was turned on in the main settings. After leaving sit (in sleep mode) for 2 hours, the battery was at 97%. Still pretty good.
    3. - Fully charged iPod, wifi ON, with the following apps installed - Remote, Pandora, Facebook, Mobile Air Mouse Pro, Last.fm, Paper Toss (free version), Evernote, Google (apps), and Accura Precision Battery Monitoring. These are probably my most essential apps and most (if not all) are from large companies with a wide user base (assumed more reliable). I left it sit over night (in sleep mode) for 10 hours and had 60% battery life left in the morning. Something is draining the battery here.
    So, it appears to have something to do with wifi and/or my apps. My understanding is that the iPod Touch is supposed to power down wifi while in sleep mode. Is this correct? Could it be a bug with the new firmware? I feel like this is a recent phenomenon with the iPod (don't remember ever having such poor battery life). I would just turn wifi off when not in use, but to me that seems like an unnecessary hassle. I mean, isn't wifi one of the main points of having an iPod Touch? Is anyone else having this issue with the latest firmware and similar apps installed? Any help is appreciated. Thanks.

    I got my 32 GB 1st gen iPod touch in 2007 and it has been used non-stop since then. In January 2010 I started to notice that the battery was not lasting as long as it should (about 19 hours); in 2008 the battery still lasted 29 hours of non-stop music (no wifi, of course).
    I got the battery replaced and it is behaving as it should again (29 hours of non-stop music playback).
    As the other poster commented, if you use wifi continuously the battery will die quicker. The same goes for games, since the hardware in the 1st gen is not as efficient as in the later generations.
    I would say first disable wifi and test just continuous music playback to see if you get the advertised 22 hours of battery life. If you do not get that, then it is time for a new battery...

  • Ipod touch 1st generation battery lifespan?

    How long is the average lifespan of the iPod touch 1st generation? I've had mine since around the time it first came out. I haven't noticed any loss in battery life at all, but I was just wondering how long it would take before I would actually have to replace the battery, and if I have to send it in to replace the battery, will anything happen to my songs/apps/videos/pictures?

    Apple states that the battery lasts 22 hours on a full charge for music and 5 hours for movies. Battery efficiency goes down to 80% after about 400 charge cycles, and your battery could fail at any time, but don't worry about it. It'll be a while until your battery has to be replaced. If you replace the battery, it won't affect any of the content on your iPod as long as it's all in iTunes.

  • My 4th gen ipod shuffle no longer selects a subset of my music libary to sync. All I get is not enough space message. Anybody know how to get around this without manually recreating 2 gb playlists?

    When syncing my old 1st generation shuffle, iTunes would select a subset of my music library. Now all I get is a not-enough-space message. I cannot find a way to accomplish this and do not want to keep manually creating 2GB playlists. Does anybody know how to make iTunes select songs from the music library?
    As usual, Apple Help is of no help.  Thanks

    I solved my problem by restoring the shuffle to factory settings on the Summary screen. Now it seems to work the way it's supposed to. Many thanks to Apple for zero help, as usual. Apple should direct fewer resources to developing the next snazzy iPad and more to cleaning up and enhancing iTunes. I'll make the same editorial comment about the iPhone 4 voicemail options.

  • Batch PDF Generation

    Hello,
    Currently in the process of migrating to Oracle 10g with HTML DB 1.6. Looking for possible solutions for batch generation of PDF invoices with line-item detail, graphs, and tabular data display. Batch sizes average around 10,000 invoices, so speed and efficiency are important.
    Any comments or suggestions regarding batch PDF generation with XML-XSLT-FOP, PL/PDF, or any other approach are greatly appreciated.
    Thanks,
    Glenn

    Hello,
    For PDF printing on this scale you would probably be much better off looking at Oracle Reports.
    The XML-XSLT-FOP solution we created is geared more to providing a PDF download to a user during an active HTML DB session.
    Carl

  • I'm looking at the dell Inspiron Desktop 4th Generation Intel Core i5 Processor for photoshop work versus the Dell XPS 8700 i7 IS IT WORTH SPENDING THE EXTRA $400?

    I'm looking at the Dell Inspiron desktop with a 4th Generation Intel® Core™ i5 processor for Photoshop work versus the Dell XPS 8700 i7. Is it worth spending the extra $400? My old desktop is an AMD about 5 years old, so there will be a huge change in speed compared to what I am used to.
    Here are the specs on both:
    Inspiron
    Processor
    4th Generation Intel® Core™ i5-4460 Processor (6M Cache, up to 3.40 GHz)
    Operating System
    Windows® 8.1 (64Bit) English
    Memory
    8GB Dual Channel DDR3 1600MHz (4GBx2)
    Hard Drive
    1TB 7200 rpm SATA 6Gb/s Hard Drive
    Video Card
    NVIDIA® GeForce® 705 1GB DDR3
    Ports
    Front
    (2) USB 2.0, MCR 8:1, Mic and Headphone Jacks
    Rear
    Four USB 2.0 connectors , Two USB 3.0 connectors, HDMI, VGA, RJ-45 (10/100/1000 Ethernet), 3-stack audio jacks supporting 5.1 surround sound
    Media Card Reader
    Integrated 8-in-1 Media Card Reader
    (supports Secure Digital (SD), Hi Speed SD (SDXC), Hi Capacity SD (SDHC), Memory Stick (MS), Memory Stick PRO (MS PRO), Multimedia Card (MMC), Multimedia Card Plus (MMC Plus), xD-Picture Card(XD))
    Memory Slots
    2 DIMM Slots
    Chassis
    Bluetooth
    BT 4.0 via 1705 WLAN card
    Chipset
    Intel® H81 PCH
    Power
    300 Watt Power Supply
    XPS 8700
    Processor
    4th Generation Intel® Core™ i7-4790 processor (8M Cache, up to 4.0 GHz)
    Operating System
    Windows 8.1 (64Bit) English
    Memory
    12GB Dual Channel DDR3 1600MHz (4GBx2 + 2GBx2)
    Hard Drive
    1TB 7200 RPM SATA Hard Drive 6.0 Gb/s
    Video Card
    NVIDIA GeForce GTX 745 4GB DDR3
    CPU Thermal
    86W
    Graphics Thermal
    225W/150W/75W
    Power
    460W, optional 80 PLUS Bronze, 85% efficient, supply available on ENERGY STAR configurations
    Ports
    Bays
    Support for 4 HDD bays: including (3) 3.5” HDDs
    –Capable of 1 SSD and 3 HDD configuration
    Media Card Reader
    19-in-1 Card Reader (CF Type I, CF Type II, Micro drive, mini SD, MMC, MMC mobile, MMC plus, MS, MS Pro, MS Pro Duo, MS Duo, MS Pro-HG, RS-MMC, SD, SDHC Class 2, SDHC Class 4, SDHC Class 6, SM, xD)
    Slots
    Memory Slots
    4 DIMM

    From my personal experience, I wouldn't go for an integrated card. This is one of the most important components for Photoshop, so invest in a decent graphics card (ATI or NVIDIA). It doesn't have to be a really expensive one - I have been using an ATI Radeon with 256MB of RAM on my Dell Studio for almost two years and it still rocks! (even when I work with 3D in Ps CS5 Extended).
    I would also invest in more RAM (this can be added easily - I bought an extra 4GB as my Studio came with 3GB).
    I wouldn't worry about the processor - I'm on an Intel Core 2 Duo and it works very, very well; it's very quick, which is very important as I'm delivering Photoshop training.
    I hope this helps.
