Too many "Zero Repli

A considerable number of persons in this forum don't get any reply at all, including myself. Maybe one of the Creative guys should scroll down the list from time to time, reopen the "zero replies" messages, and then consider once again whether these persons really don't deserve some kind of an answer. As far as I can see, too many of us (newbies, maybe) are left in the dark with our problems.

LarsKyd,
While we do monitor the forum, and try to reply where we can, we don't reply to every thread. The forums are intended for user-to-user interaction. If you haven't had a reply after a few days, you can "bump" your thread (by replying to it). If someone can help you out, then they will reply.
If you have an urgent technical problem that needs a reply from a Creative rep, then you should consider contacting Customer Support directly.
Cat

Similar Messages

  • I have been blocked from my own iTunes songs because my Apple ID is associated with too many devices

    I cannot access songs from my own iTunes Apple ID because it says my iPhone has been associated with too many devices.  I have to wait 90 days to access my own songs on my iPhone.  Here is what happened:
    1-Our family moved.  So, with the address change, the credit cards on all of the apple ID's would no longer work because the other users had our old address. (I have 4 users in the family).
    2-My family started calling me while I was out of town complaining that purchases were rejected.  This was because our address had changed and it no longer matched. 
    3-I tried to walk them through the process of changing the home address on the credit card, but they couldn't do it.  They assured me that they were doing it correctly but were still rejected.
    4-So, being the techie of the family, I simply logged into each of their iTunes accounts from my iPhone while I was out of town and made the address change.  While I was in their  accounts, I noticed they didn't have automatic downloads and iTunes match and stuff.  So, I changed it so it would work for them when they logged back in.
    5-I go back to log into my own account on my iPhone and I cannot access my own songs in the iTunes cloud for 90 days!!!!  This is quite disappointing.
    How can I get this corrected?   I tried to reset user settings (not an erase and reset).  I figure it's not my phone, but my iTunes account thinks I'm doing something sharing-wise.
    So, in summary, when I am on my iPhone and I go to my iTunes library and click on a song (that I have paid for) with the cloud and down arrow beside it, it's rejected.  It says that I have too many apple id's associated with this device and I must wait 90 days.  It counts down each day.
    I just want access to the music I paid FOR!!

    Thank you-I will contact support.
    No, I cannot wait 90 days.
    Let me restate what I have to endure, even though I own the music and have paid for iTunes Match:
    I cannot access my purchases-zero-at all
    I have to listen to never-ending McDonald's latte commercials and Progressive insurance ads. I have now banned both of those annoying businesses and will not make purchases there.
    In summary, I have given Apple my money, and have zero to show for it.
    Ps-you get to hear the McDonald's "love hate commercial" every 2 songs on iTunes Radio.  I have substituted the words where I hate McDonald's and love when the commercial is over.

  • Unable to create virtual folder. Too many folders

    Hi,
    We are getting this error while trying to create a new folder in UCM (even trying directly from UCM throws the same error):
    oracle.stellent.ridc.protocol.ServiceException: Unable to create virtual folder. Too many folders
    Has anyone experienced this? If yes, please let us know how you resolved it.
    Thanks!

    You don't "store" anything in contribution folders. They are just webdav access points. I recommend zero folders personally.
    If you want to upload files using webdav, then I suggest using one staging folder, where files can enter the system. When the document is workflowed and/or verified, it is moved out of the folder. It's more likely however that you'll need a few access points for various types of content as each folder enforces specific metadata.
    Now I imagine you are after a way to store all the HR bits and pieces of one person together as a unit. Some people use Content Folios. Personally, I'd just use a metadata field that contains the ContentID of the person's master record. Any new content for that person just gets the master ContentID stamped on it. Just search for that ContentID to get all their stuff.
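    As a rough sketch of that approach (assuming the RIDC client you are already using, since the error above comes from oracle.stellent.ridc): stamp each item with a custom metadata field holding the person's master ContentID, then issue a GET_SEARCH_RESULTS call on it. The field name xMasterContentID, the server address and the credentials below are illustrative only, not taken from the original post.

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientException;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.model.DataObject;
    import oracle.stellent.ridc.model.DataResultSet;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    public class PersonContentSearch {

        // Lists every content item stamped with the given master ContentID.
        // xMasterContentID is a hypothetical custom metadata field; adjust to your own.
        public static void listPersonContent(String masterContentId) throws IdcClientException {
            IdcClientManager manager = new IdcClientManager();
            IdcClient client = manager.createClient("idc://ucm-host:4444");   // illustrative address
            IdcContext userContext = new IdcContext("weblogic", "password");  // illustrative credentials

            DataBinder binder = client.createBinder();
            binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
            binder.putLocal("QueryText", "xMasterContentID <matches> `" + masterContentId + "`");
            binder.putLocal("ResultCount", "200");

            ServiceResponse response = client.sendRequest(userContext, binder);
            DataResultSet results = response.getResponseAsBinder().getResultSet("SearchResults");
            for (DataObject row : results.getRows()) {
                System.out.println(row.get("dDocName") + "  " + row.get("dDocTitle"));
            }
        }
    }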
    Regarding best practices, there are lots of quality blogs floating around. Stellent/Oracle don't talk about best practice because each system is used by their clients in a unique way... well, that's what they say.

  • Toshiba DT01ACA050 too many bad sectors on first 5 months

    Hi Good day,
    I bought a Toshiba internal drive 500GB (sealed) from my friend, but after weird behavior on my PC I found out that it has too many bad sectors, detected by HD Tune Pro and HDSentinel. He insisted that the drive is in good condition because it was
    sealed, so he won't cover it under a personal warranty, and further instructed that I must be the one to RMA it, but I don't know how, as I live in the Philippines and don't have experience RMA'ing a hard drive yet. Also it's weird because it shows a different product model (Hitachi) instead of the Toshiba DT model seen on my hard drive cover.
    Win7 32bit
    foxconn h55 
    core - i3
    tru rated power supply 500w
    other hdd wd 500gb
    *Additional info 
    Hard Disk Summary
    Hard Disk Number,0
    Interface,"S-ATA Gen3, 6 Gbps"
    Disk Controller,"Standard Dual Channel PCI IDE Controller (ATA) [VEN: 8086, DEV: 3B20]"
    Disk Location,"Channel 1, Target 0, Lun 0, Device: 0"
    Hard Disk Model ID,Hitachi HDS721050DLE630
    Firmware Revision,MS1OA650
    Hard Disk Serial Number,MSK423Y20Y68LC
    Total Size,476937 MB
    Power State,Active
    Logical Drive(s)
    Logical Drive,H: [MUSIC-MOVIES-BACKUP]
    Logical Drive,H: [MUSIC-MOVIES-BACKUP]
    ATA Information
    Hard Disk Cylinders,969021
    Hard Disk Heads,16
    Hard Disk Sectors,63
    ATA Revision,ATA8-ACS version 4
    Transport Version,SATA Rev 2.6
    Total Sectors,122096646
    Bytes Per Sector,4096 [Advanced Format]
    Buffer Size,23652 KB
    Multiple Sectors,16
    Error Correction Bytes,56
    Unformatted Capacity,476940 MB
    Maximum PIO Mode,4
    Maximum Multiword DMA Mode,2
    Maximum UDMA Mode,6 Gbps (6)
    Active UDMA Mode,6 Gbps (5)
    Minimum multiword DMA Transfer Time,120 ns
    Recommended Multiword DMA Transfer Time,120 ns
    Minimum PIO Transfer Time Without IORDY,120 ns
    Minimum PIO Transfer Time With IORDY,120 ns
    ATA Control Byte,Valid
    ATA Checksum Value,Valid
    Acoustic Management Configuration
    Acoustic Management,Not supported
    Acoustic Management,Disabled
    Current Acoustic Level,Default (00h)
    Recommended Acoustic Level,Default (00h)
    ATA Features
    Read Ahead Buffer,"Supported, Enabled"
    DMA,Supported
    Ultra DMA,Supported
    S.M.A.R.T.,Supported
    Power Management,Supported
    Write Cache,Supported
    Host Protected Area,Supported
    Advanced Power Management,"Supported, Disabled"
    Extended Power Management,"Supported, Enabled"
    Power Up In Standby,Supported
    48-bit LBA Addressing,Supported
    Device Configuration Overlay,Supported
    IORDY Support,Supported
    Read/Write DMA Queue,Not supported
    NOP Command,Supported
    Trusted Computing,Not supported
    64-bit World Wide ID,0050A3CCCD7F5346
    Streaming,Supported
    Media Card Pass Through,Not supported
    General Purpose Logging,Supported
    Error Logging,Supported
    CFA Feature Set,Not supported
    CFast Device,Not supported
    Long Physical Sectors (8),Supported
    Long Logical Sectors,Not supported
    Write-Read-Verify,Not supported
    NV Cache Feature,Not supported
    NV Cache Power Mode,Not supported
    NV Cache Size,Not supported
    Free-fall Control,Not supported
    Free-fall Control Sensitivity,Not supported
    Nominal Media Rotation Rate,7200 RPM
    SSD Features
    Data Set Management,Not supported
    TRIM Command,Not supported
    Deterministic Read After TRIM,Not supported
    S.M.A.R.T. Details
    Off-line Data Collection Status,Successfully Completed
    Self Test Execution Status,Successfully Completed
    Total Time To Complete Off-line Data Collection,4444 seconds
    Execute Off-line Immediate,Supported
    Abort/restart Off-line By Host,Not supported
    Off-line Read Scanning,Supported
    Short Self-test,Supported
    Extended Self-test,Supported
    Conveyance Self-test,Not supported
    Selective Self-Test,Supported
    Save Data Before/After Power Saving Mode,Supported
    Enable/Disable Attribute Autosave,Supported
    Error Logging Capability,Supported
    Short Self-test Estimated Time,1 minutes
    Extended Self-test Estimated Time,74 minutes
    Last Short Self-test Result,Never Started
    Last Short Self-test Date,Never Started
    Last Extended Self-test Result,Never Started
    Last Extended Self-test Date,Never Started
    Security Mode
    Security Mode,Supported
    Security Erase,Supported
    Security Erase Time,98 minutes
    Security Enhanced Erase Feature,Not supported
    Security Enhanced Erase Time,Not supported
    Security Enabled,No
    Security Locked,No
    Security Frozen,Yes
    Security Counter Expired,No
    Security Level,High
    Serial ATA Features
    S-ATA Compliance,Yes
    S-ATA I Signaling Speed (1.5 Gps),Supported
    S-ATA II Signaling Speed (3 Gps),Supported
    S-ATA Gen3 Signaling Speed (6 Gps),Supported
    Receipt Of Power Management Requests From Host,Supported
    PHY Event Counters,Supported
    Non-Zero Buffer Offsets In DMA Setup FIS,"Supported, Disabled"
    DMA Setup Auto-Activate Optimization,"Supported, Disabled"
    Device Initiating Interface Power Management,"Supported, Disabled"
    In-Order Data Delivery,"Supported, Disabled"
    Asynchronous Notification,Not supported
    Software Settings Preservation,"Supported, Enabled"
    Native Command Queuing (NCQ),Supported
    Queue Length,32
    Disk Information
    Disk Family,Deskstar 7K1000.D
    Form Factor,"3.5"" "
    Capacity,"500 GB (500 x 1,000,000,000 bytes)"
    Number Of Disks,1
    Number Of Heads,1
    Rotational Speed,7200 RPM
    Rotation Time,8.33 ms
    Average Rotational Latency,4.17 ms
    Disk Interface,Serial-ATA/600
    Buffer-Host Max. Rate,600 MB/seconds
    Buffer Size,32768 KB
    Drive Ready Time (typical),? seconds
    Average Seek Time,? ms
    Track To Track Seek Time,? ms
    Full Stroke Seek Time,? ms
    Width,101.6 mm (4.0 inch)
    Depth,147.0 mm (5.8 inch)
    Height,26.1 mm (1.0 inch)
    Weight,450 grams (1.0 pounds)
    Required power for spinup,"3,300 mA"
    Power required (seek),7.0 W
    Power required (idle),5.0 W
    Power required (standby),2.0 W
    Manufacturer,Hitachi Global Storage Technologies
    Manufacturer Website,http://www.hgst.com

    Hi! Since no one is replying: if you're getting bad sectors, it's time to save your data and replace your HD. It's only a matter of time before your HD fails.
    Dokie!!
    PS I'm feeling a little crazy tonight. Nice friend you have (not)
    I Love my Satellite L775D-S7222 Laptop. Some days you're the windshield, Some days you're the bug. The Computer world is crazy. If you have answers to computer problems, pass them forward.

  • Disk Utility verify found too many clusters allocated

    I just purchased a new LaCie FireWire drive. While copying files over to it I got some errors. I ran Disk Utility "Verify" on it and got the following:
    Verifying disk "LACIE".
    ** /dev/disk3s1
    ** Phase 1 - Read FAT
    ** Phase 2 - Check Cluster Chains
    ** Phase 3 - Checking Directories
    /Desktop DF has too many clusters allocated
    Drop superfluous clusters? no
    ** Phase 4 - Checking for Lost Files
    1284 files, 1097600 free (2393596 clusters)
    ** Phase 1 - Read FAT
    ** Phase 2 - Check Cluster Chains
    ** Phase 3 - Checking Directories
    /Desktop DF has too many clusters allocated
    Drop superfluous clusters? no
    ** Phase 4 - Checking for Lost Files
    1284 files, 1097600 free (2393596 clusters)
    Verify completed.
    Do I need to do a repair? Are these errors? Will repair delete the files I've already copied? I want to send this drive to my father in Venezuela. I need to make sure the drive is OK. The errors I got concern me. Should I re-initialize the disk? He is running OS 9 so I need to make sure he will be able to read it.
    Thanks,
    Alfredo

    Hello! The drive is formatted as FAT32. Reformat it (in Disk Utility) to Mac OS X Extended and check the box "Install OS 9 drivers". If you use the "zero" option it will lock out any bad blocks... it's a good idea to do this on a new drive. Tom

  • Portal creates too many database sessions. fix doesn't work!!

    Oracle Database 9i, Application Server 9iAS,
    Operating System SUSE Linux 7.2
    CPU - Athlon 1400
    Ram - 1GB
    There is a modification for the http server that aims to eliminate a problem on unix that causes the database to create too many sessions. The script can be found at
    http://portalstudio.oracle.com/servlet/page?_pageid=1787&_dad=ops&_schema=OPSTUDIO
    However, the script fails to work. The HTTP server is on port 80 with the redirect on port 7778. However, going to http://myhost/pls/ results in a server error - connection refused. Is it possible that there is an error in the script?
    Thanks in anticipation

    ok so my sound is ok now I can check that off of the list... I just looked around and found some external USB speakers that work just right. My built in computer speakers are just not the best quality.
    As for the microphone, I still haven't been able to find out what is causing it to not work. I notice that it isn't actually broken since if I make a really loud noise right next to the input it will register a little but only a little bit.
    $ arecord -L
    null
    Discard all samples (playback) or generate zero samples (capture)
    pulse
    PulseAudio Sound Server
    default
    Default ALSA Output (currently PulseAudio Sound Server)
    sysdefault:CARD=Intel
    HDA Intel, ALC269 Analog
    Default Audio Device
    front:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    Front speakers
    surround40:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    4.0 Surround output to Front and Rear speakers
    surround41:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=Intel,DEV=0
    HDA Intel, ALC269 Analog
    7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    $
    here is some additional information about my sound. Also, when I open alsamixer, I find that when I go to "select sound card"(F6) I see default and HDA Intel. If the default is pulseaudio, then is it possible that pulseaudio is causing the problem?

  • Too many activations

    HI,
    I'm getting the message "Too many activations" when trying to install Dreamweaver CS3.
    It's a volume license key so you can't deactivate it from the other computer.
    I've uninstalled from the other computer and it still says "Too many activations".
    You can't call Adobe anymore about this product. They just want you to post in the forums.
    So here I am......anyone from Adobe listening who can deactivate my previous installs?
    Thanks.

    Transcript
    info: Thank you for your patience.
    While you wait, you can try our community forums where experts are available 24 hours a day, 7 days a week.
    info: We are still assisting other customers, thank you for your patience. You can also try our community forums, available 24 hours a day, 7 days a week.
    info: You are now chatting with Prasad.
    Prasad: Hello! Welcome to Adobe Customer Service.
    Kenneth Binney: There is an unhappy customer in the DW Forum. https://forums.adobe.com/thread/1535916 can you please assist him.? I am an MVP in the DW Forum but I can't handle a licensing issue
    Prasad: Hi Kenneth.
    Prasad: Thank you for the information.
    Prasad: May I please have your First name?
    Kenneth Binney: Kenneth
    info: Your chat transcript will be sent to [email protected] at the end of your chat.
    Prasad: Kenneth, since it is a volume license, please call 1800 614 863, the Australian phone support number.
    Kenneth Binney: Here is what he wrote "Yes, I understand this is a user-to-user forum. That's why I asked if anybody from Adobe was listening. I've called Customer Service, (the Australian number). When you choose the relevant options, it tells you phone support is not available and to get in touch on the web."
    Prasad: May I place your chat on hold for 2-3  minutes while I check  that for you?
    Kenneth Binney: Yes
    Prasad: Thank you for staying online.
    Prasad: I see that this is a technical issue.
    Prasad: I will transfer your chat to our technical support team.
    Prasad: They will help you on this issue. Please allow me 2 minutes while I transfer the chat to our Technical team. Would that be okay?
    Kenneth Binney: Thanks
    Prasad: Thank you.
    info: Please wait while I transfer the chat to the appropriate group.
    info: You are now chatting with Raman. To ensure we stay connected throughout our interaction , please don't click on the 'x' in the chat window. Doing so will disconnect our chat session.
    Raman: Hello, Welcome to Adobe support.
    Kenneth Binney: Thanks
    Raman: Could you please provide me the serial number of the product to assist accordingly?
    Kenneth Binney: Hello Issue in Forum not mine
    Kenneth Binney: https://forums.adobe.com/thread/1535916
    Raman: Kenneth, I need to go ahead and escalate the issue to the team who replies on forums.
    Raman: Please allow me 2 minutes to check
    Kenneth Binney: Thanks
    Raman: Kenneth, I have checked and Adobe CS3 is an end-of-life product.
    Raman: Adobe doesn't support it anymore.
    Kenneth Binney: What can I tell him - he is reinstalling corrupted install
    Raman: It may be an issue with the installer he is using. It depends upon the license he has.
    Raman: If he has volume license, he needs to download the product from Volume licensing product.
    Raman: It may be an issue with the system as well.
    Kenneth Binney: I'll tell him
    Raman: Sure
    Kenneth Binney: If he can chat with case number might that be helpful?
    Kenneth Binney: Thanks

  • Got message trying to log in to email 'too many l...

    Hi BT,
    This morning when first logging into BT Mail I got a failure message to the effect that there had been too many login failures on my account and to try again in 10 minutes.
    Since this was my FIRST login attempt it could not have been me that caused the login failures.
    Questions for BT:-
    1. Is someone trying to hack my email account?
    2. How many times does it have to fail before causing this message?
    3. Are you able to detect (IP Address?) where these failures are coming from?
    4. If you are able to detect there are too many login fails how about displaying the number of login fails?
    Looking fwd to your reply.

    radclifm wrote:
    re:  Did you receive any replies?
    Yes, in the form of delivery failure messages. So I could see the real email address used (it was mine) and what the spam message was (it contained a link to a dodgy website). Not being an email spoofer I can only go by what I have read, and it would appear that when an email address is spoofed, the delivery failures get returned to the spoofed address, because no authorisation checks are done on the spoofed address. That is the point of spoofing a genuine address.
    re: ... email would return to you so there would be no gain for the spammer
    The messages sent out contained a web page link, so the spammers (and there have been several) would not be trying to get replies from the email, more trying to get the recipients to look at some dodgy web site. Fair enough.
    re: The volume of emails sent by spammers is I suspect more than the 49 permitted group email limit set by BTMail on your account so this would be rather limiting for any spammer.
    I've seen the failed deliveries come back in batches, which would fit the group email limit, and mean they were trying to circumvent the group mail limit.
    As an aside, it's worth setting up a non-existent email address in Contacts to see if anyone is doing this, then you'd see the failed delivery come back. I have already done that. It is the only email address I have in my contacts. I don't keep contacts or emails on the servers. I would rather be responsible for my own emails and contacts security.
    re: deleting the emails to cover their tracks - I've found that with BT Mail webmail if you delete from Sent box they don't appear in Trash, so you wouldn't see the spam mails there either. I have found that in BTMail if you do delete from sent box they do get put in the trash so your spammers are very tidy not only deleting it from the sent folder but also the Trash folder.
    I am not trying to convince you one way or the other; you have your view that your email account is/has been logged onto and emails sent from it, and I have my view that it was only your email account address that was spoofed. I think we will just have to respect each other's views and agree to differ.
    The bottom line is you do what you feel is best for your security and peace of mind when using your email account.

  • What have "Too many open Files" to do with FIFOs?

    Hi folks.
    I've just finished a middleware service for my company, that receives files via a TCP/IP connection and stores them into some cache-directory. An external program gets called, consumes the files from the cache directory and puts a result-file there, which itself gets sent back to the client over TCP/IP.
    After that's done, the cache file (and everything leftover) gets deleted.
    The middleware-server is multithreaded and creates a new thread for each request connection.
    These threads are supposed to die when the request is done.
    All works fine, cache files get deleted, threads die when they should, the files get consumed by the external program as expected and so on.
    BUT (there's always a butt;) to migrate from an older solution, the old data gets fed into the new system, creating about 5 to 8 requests a second.
    After a time of about 20-30 minutes, the service drops out with "IOException: Too many open files" on the very line where the external program gets called.
    I sweeped through my code, seeking to close even the most unlikely stream, that gets opened (even the outputstreams of the external process ;) but the problem stays.
    Things I thought about:
    - It's the external program: unlikely since the lsof-command (shows the "list of open files" on Linux) says that the open files belong to java processes. Having a closer look at the list, I see a large amount of "FIFO" entries that gets bigger, plus an (almost) constant amount of "normal" open file handles.
    So perhaps the handles get opened (and not closed) somewhere else and the external program is just the drop that makes the cask flood over.
    - Must be a file handle that's not closed: I find only the "FIFO" entries to grow. Yet I don't really know what that means. I just think it's something different than a "normal" file handle, but maybe I'm wrong.
    - Must be a socket connection that's not closed: at least the client that sends requests to the middleware service closes the connection properly, and I am, well, quite sure that my code does it as well, but who knows? How can I be sure?
    That was a long description, most of which will be skipped by you. To boil it down to some questions:
    1.) What do the "FIFO" entries of the lsof-command under Linux really mean ?
    2.) How can I make damn sure that every socket, stream, file handle, etc. is closed when the worker thread dies?
    Answers will be thanked a lot.
    Tom

    Thanks for the quick replies.
    @BIJ001:
    ls -l /proc/<PID>/fd: gives the same information as lsof does, namely a slowly but steadily growing number of pipes.
    fuser: doesn't output anything at all.
    "Do you make exec calls? Are you really sure stdout and stderr are consumed/closed?" Well, the external program is called by
    Process p = Runtime.getRuntime().exec(commandLine);
    and the stdout and stderr are consumed by two classes that subclass Thread (named showOutput) that do nothing but prepend the corresponding outputs with "OUT:" and "ERR" and put them into a log.
    Are they closed? I hope so: I call showOutput's halt method, which should eventually close the handles.
    @sjasja:
    "Sounds like a pipe." Thought so, too ;)
    "Do you have the waitFor() in there?" Mentioning the waitFor():
    my code looks more like:
    try {
         p = Runtime.getRuntime().exec(commandLine);
         outShow = new showOutput(p.getInputStream(), "OUT");
         outShow.start();
         errShow = new showOutput(p.getErrorStream(), "ERR");
         errShow.start();
         p.waitFor();
    } catch (InterruptedException e) {
         // Can't wait for process? Better go to sleep some.
         log.info("Can't wait for process! Going to sleep 10sec.");
         try { Thread.sleep(10000); } catch (InterruptedException ignoreMe) {}
    } finally {
         if (outShow != null) outShow.halt();
         if (errShow != null) errShow.halt();
    }

    /** Within the class showOutput: */
    /** This method gets called by showOutput's halt(): */
    public void notifyOfHalt() {
         log.debug("Registered a notification to halt");
         try {
              myReader.close(); // initialized to read from the given InputStream
         } catch (IOException ignoreMe) {}
    }

    Seems as if both of you are quite sure that the pipes are actually created by the exec command and not closed afterwards.
    Would you deem it unlikely that most of the handles are opened somewhere else and the exec command is just the final one that crashes the prog?
    That's what I thought.
    Thanks for your time
    Tom
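    As a follow-up sketch of the usual cure for leaking pipes with exec: the FIFO entries lsof shows are the stdin/stdout/stderr pipes created for each child process, and they only go away once all three streams are closed and the process handle is released. The class and method names below are made up for illustration; ProcessBuilder is used instead of Runtime.exec() so stderr can be merged into stdout and drained from a single thread.

    import java.io.Closeable;
    import java.io.IOException;
    import java.io.InputStream;

    public class ProcessCleanup {

        /**
         * Runs an external command and makes sure every process stream is closed
         * and the process handle is released, even on error.
         */
        public static int runExternal(String... command) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(command);
            pb.redirectErrorStream(true);          // merge stderr into stdout
            Process p = pb.start();
            try {
                drain(p.getInputStream());         // consume output so the child never blocks
                return p.waitFor();
            } finally {
                closeQuietly(p.getOutputStream()); // child's stdin
                closeQuietly(p.getInputStream());  // merged stdout/stderr
                closeQuietly(p.getErrorStream());  // empty here, but still a pipe handle
                p.destroy();                       // release the native process handle
            }
        }

        private static void drain(InputStream in) throws IOException {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // discard (or forward to a logger)
            }
        }

        private static void closeQuietly(Closeable c) {
            if (c == null) return;
            try {
                c.close();
            } catch (IOException ignored) {
            }
        }
    }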

  • Too many widget toolkits?

    Do we have too many widget toolkits (GTK, Qt, etc.)?
    In the context of the KISS principle (I like the pythonic interpretation - "There should be one[-- and preferably only one --]obvious way to do it"), it might be violating the DRY rule and the abstraction principle. On the other hand, it's always good to have choice. Again, in the context of the KISS principle, another toolkit should not be created unless it is necessary. Sometimes too many choices lead to the choice paradox and information overload; we have too many choices for our own good. The answer depends on what perspective you take...
    From the Linux enduser's point of view, they all output the same widgets for the most part (and the dev's viewpoint). The problem for the enduser is they are somewhat forced to choose one if they desire a consistent look. To make matters worse, if they choose one, they may miss out on a program because there is no corresponding GUI.
    From the developer's point of view is where the differences are more striking, for reasons I'll mention. Qt is developed in C++ and GTK uses C (+GObjects). Qt is an entire application framework. GTK seems to be heading that way - tighter integration with the DE (or bloat...). This is where FLTK and FOX Toolkit come in. However, FLTK and Tk lack advanced widgets. wxWidgets focuses on native widgets. The cross-platform availability varies. Then there's freeglut, GNUstep, and so on. Finally, somewhat related, Ubuntu's Unity.
    Using their similarities, is there some way we can form a common abstraction layer/interface/framework?
    To kind of pull out the abstraction, leaving only the differences needing to be implemented. I think of the way we embrace Linux and the X Windowing System (Wayland is stirring things up) or are we not at that stage yet (no "obvious" solution)? There is the freedesktop organization but it seems limited to protocols. If not cross-platform, at least a common framework on Linux? Not one to rule them all, but one to lead them all.
    The underlying issue seems to be that the developers' needs are outweighing the endusers' needs. It's no surprise as UNIX/Linux was developed by developers for developers. Fortunately, as the Linux userbase grows, the enduser is taken more into consideration, like with Ubuntu and VALVE's rejection of Windows 8. It's like two devs arguing in a room over whether xyz is better, when the enduser walks in and makes the answer obvious, before they can go fork it. This idea of balancing out the endusers' and developers' needs could easily apply to other Linux issues, such as the Linux Desktop, the ever growing number of distributions, package managers, WM/DEs, etc., especially the WM/DEs, which from the enduser's POV vary in terms of their integration with other software and services. If Linux was able to unite, it would dominate! Currently, FOSS/Linux's greatest strength is being used as its biggest weakness. If only it could turn fragmentation into modularity.
    I ask all of this because I may be potentially creating another toolkit, thereby altering the already crowded toolkit ecosystem, though it may be able to bring balance to the force...;)
    Last edited by KCE (2013-03-04 10:23:25)

    Awebb wrote:
    KCE wrote:Using their similarities, is there some way we can form a common abstraction layer/interface/framework?
    What is it to the end user, that makes working with different toolkits difficult? It is themes as well as component behavior. While I was never really troubled by the fact, that GTK and Qt do not respect each others settings (especially: "Text below/next to/instead of icons"), the most annoying thing about those toolkits is the lack of unified themes. I am aware of the fact, that the themes are not even a standardized thing inside one toolkit, as different themes use different engines.
    I would not mind if the following happened:
    1.1 One tool to configure them all. A script, that knows all about those toolkits, like position of config files and equivalent settings for the same phenotypes.
    1.2 Since we're all about -kits these days, I had a look at packagekit and was surprised about the pacman (or libalpm?) support. How about a toolkitkit?
    2. A common theme file type. Since most elements of such a theme package are descriptor files, it should not be too hard to interconnect the equivalent lines over the different toolkits. Qt, in fact, already does this, by being able to import the GTK settings. This could be another part of, err, toolkitkit.
    3. People should abandon toolkits, that are tied too closely to desktop environments. The number of times I read "change in GTK broke XYZ" over the past two years, for example, is alarming.
    I thought I mentioned it in the OP, but mainly getting a consistent look besides Qt/GTK, and the availability of programs.
    1. Packagekit, if only it supported the AUR, but I like the idea. That's another issue. It seems to me that the main differences among the package managers are 1) source vs binary 2) dependency handling 3) release cycle 4) configurability like make flags. And as a result, varying levels of repo size and program availability. I like pacman because you get the best of all features, although #2 for example, Slackware users would not agree with automatic dependency resolution.
    2. Getting the devs of each toolkit to support the others would take some convincing.
    3. I agree, although I see why they are tied to desktop environments - because endusers want that consistency. I think that's good but being implemented inefficiently, leading to bloat. Ofc, as you mention, the interdependency means when one thing breaks, everything breaks. What should be included, should be absolutely necessary, but needs are somewhat subjective. Like does the user need all those features? It's always a delicate balance. Linux's availability on so many platforms is a good example of this. Imagine if Xorg had been merged into the Kernel?
    Redundancy can be beneficial but in this case it's detrimental. All the toolkit devs are reimplementing the same thing over and over; you could say this about distros - reconfigurations of the same software - such small differences yet an entire distro must be created each time. If there were a more convenient way, wouldn't the person have done it? The purpose of toolkits was so that the developer did not have to manually draw the widgets and event handling etc. We solved that issue but now we have another issue..redundancy at the toolkit level.
    Trilby wrote:
    KCE wrote:I like the pythonic interpretation - "There should be one[-- and preferably only one --]obvious way to do it"
    It's even more simple to have zero.
    Toolkits serve a purpose when one wants a fancy gui.  But I am driven mad by all the programs that use a toolkit just to say they did when they have no clear need for them.  They serve only as a banner of support for one of the "brands": the K brand, or the G brand.  All the while Xlib would have done everything they needed just fine.
    The problem isn't reinventing the wheel, the problem is buying a shiny new sports car when all one really needs is a wheel.
    This reminds me of something I read, unfortunately I can't find it right now but it's like your ability to not use a GUI determines your hardcoreness lol. Geek culture...flex your intellectual muscle haha
    Well, if you were only going a short distance a wheel might suffice...lol
    drcouzelis wrote:
    Here are some of my thoughts:
    A couple years ago when I was ready to write my first GUI application, I was frustrated by all of the different and competing GUI toolkits. I decided I was going to use the native X GUI (I can't remember what it's called). Then I looked at the API, and decided to write a GUI application for the Haiku operating system instead. Also, the nice default Haiku API has pretty much been stable for, like, 15 years now.
    Using a high level toolkit, such as wxWidgets, that interacts with many different other toolkits is nice, but comes at the cost of only having the option to use the lowest common denominator. Any type of special widget is kind of hard to make.
    I recently decided to stop worrying and love the different toolkits. I don't care about having more packages installed or that each application looks a little different, as long as it's well written and does what I want. Also, I think people used to complain that applications on Linux look less consistent than they do on Windows or Mac OS X. I think anyone who thinks that needs to take a step back and look again. All computers look like a mess!
    KCE wrote:I ask all of this because I may be potentially creating another toolkit, thereby altering the already crowded toolkit ecosystem
    BURN HIM!
    I agree about having all the different toolkits but there must be a more efficient way of implementing them - providing more consistency but allowing variation. Maybe I'm wrong...I think awebb had a good point about a common theme file, that'd be a small step in the right direction, but I guess GUIs are too independent.
    Last edited by KCE (2013-03-04 23:43:51)

  • Unable to create report. Query produced too many results

    Hi All,
    Does someone know how to avoid the message "Unable to create report. Query produced too many results" in the Grid report type in PerformancePoint 2010? When the MDX query returns a large amount of data, this message appears. Is there a way to get the full result set in the grid anyway?
    I have set the data Source query time-out under Central Administration - Manager Service applications - PerformancePoint Service Application - PerformancePoint Service Application Settings at 3600 seconds.
    Here Event Viewer log error at the server:
    1. An exception occurred while running a report.  The following details may help you to diagnose the problem:
    Error Message: Unable to create report. Query produced too many results.
            Contact the administrator for more details.
    Dashboard Name:
    Dashboard Item name:
    Report Location: {3592a959-7c50-0d1d-9185-361d2bd5428b}
    Request Duration: 6,220.93 ms
    User: INTRANET\spsdshadmin
    Parameters:
    Exception Message: Unable to create report. Query produced too many results.
    Inner Exception Message:
    Stack Trace:    at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.ExtractReportViewData()
       at Microsoft.PerformancePoint.Analytics.ServerRendering.OLAPBase.OlapViewBaseControl.CreateRenderedView(StringBuilder sd)
       at Microsoft.PerformancePoint.Scorecards.ServerRendering.NavigableControl.RenderControl(HtmlTextWriter writer)
    PerformancePoint Services error code 20604.
    2. Unable to create report. Query produced too many results.
    Microsoft.PerformancePoint.Scorecards.BpmException: Unable to create report. Query produced too many results.
       at Microsoft.PerformancePoint.Scorecards.Server.Analytics.AnalyticQueryManager.ExecuteReport(AnalyticReportState reportState, DataSource dataSource)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportBase(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer, String formattingDimensionName)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.ExecuteAnalyticReportWithParameters(RepositoryLocation analyticReportViewLocation, BIDataContainer biDataContainer)
    PerformancePoint Services error code 20605.
    Thanks in advance for your help.

    Hello,
    I would like you to try the following to adjust your readerquotas.
    Change the values of the parameters listed below to a larger value. We recommend that you double the value and then run the query to check whether the issue is resolved. To do this, follow these steps:
    On the SharePoint 2010 server, open the Web.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\Web Services\PpsMonitoringServer\
    Locate and change the below values from 8192 to 16384.
    Open the Client.config file. The file is located in the following folder:
    \Program Files\Microsoft Office Servers\14.0\WebClients\PpsMonitoringServer\
    Locate and change the below values from 8192 to 16384.
    After you have made the changes, restart Internet Information Services (IIS) on the SharePoint 2010 server.
    <readerQuotas
        maxStringContentLength="2147483647"
        maxNameTableCharCount="2147483647"
        maxBytesPerRead="2147483647"
        maxArrayLength="2147483647"
        maxDepth="2147483647"
    />
    Thanks
    Heidi Tr - MSFT
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Too many columns to be shown in the Enterprise Manager 11g?

    Hello,
    we are having some problems with the Enterprise Manager 11g. When we want to VIEW DATA of a specific table, we get this exception. We think that our table has too many columns to be displayed. If we delete some of the columns, the data is shown in the enterprise manager. But this cannot be a solution for us. Can you help us with this point?
    2009-08-03 10:07:04,210 [EMUI_10_07_04_/console/database/schema/displayContents] ERROR svlt.PageHandler handleRequest.639 - java.lang.ArrayIndexOutOfBoundsException: -128
    java.lang.ArrayIndexOutOfBoundsException: -128
         at oracle.sysman.emo.adm.DBObjectsMCWInfo.getSqlTimestampIndexes(DBObjectsMCWInfo.java:194)
         at oracle.sysman.emo.adm.schema.TableViewDataBrowsingDataSource.executeQuery(TableViewDataBrowsingDataSource.java:167)
         at oracle.sysman.emo.adm.DatabaseObjectsDataSource.populate(DatabaseObjectsDataSource.java:201)
         at oracle.sysman.emo.adm.DatabaseObjectsDataSource.populate(DatabaseObjectsDataSource.java:151)
         at oracle.sysman.emo.adm.schema.DisplayContentsObject.populate(DisplayContentsObject.java:369)
         at oracle.sysman.db.adm.schm.DisplayContentsController.onDisplayAllRows(DisplayContentsController.java:303)
         at oracle.sysman.db.adm.schm.DisplayContentsController.onDisplayContents(DisplayContentsController.java:290)
         at oracle.sysman.db.adm.schm.DisplayContentsController.onEvent(DisplayContentsController.java:136)
         at oracle.sysman.db.adm.DBController.handleEvent(DBController.java:3431)
         at oracle.sysman.emSDK.svlt.PageHandler.handleRequest(PageHandler.java:577)
         at oracle.sysman.db.adm.RootController.handleRequest(RootController.java:205)
         at oracle.sysman.db.adm.DBControllerResolver.handleRequest(DBControllerResolver.java:121)
         at oracle.sysman.emSDK.svlt.EMServlet.myDoGet(EMServlet.java:781)
         at oracle.sysman.emSDK.svlt.EMServlet.doGet(EMServlet.java:337)
         at oracle.sysman.eml.app.Console.doGet(Console.java:318)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
         at oracle.sysman.eml.app.EMRepLoginFilter.doFilter(EMRepLoginFilter.java:109)
         at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:15)
         at oracle.sysman.db.adm.inst.HandleRepDownFilter.doFilter(HandleRepDownFilter.java:153)
         at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:17)
         at oracle.sysman.eml.app.BrowserVersionFilter.doFilter(BrowserVersionFilter.java:122)
         at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:17)
         at oracle.sysman.emSDK.svlt.EMRedirectFilter.doFilter(EMRedirectFilter.java:102)
         at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:17)
         at oracle.sysman.eml.app.ContextInitFilter.doFilter(ContextInitFilter.java:336)
         at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:627)
         at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:376)
         at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:870)
         at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:451)
         at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:218)
         at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:119)
         at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112)
         at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
         at java.lang.Thread.run(Thread.java:595)
    When we select the table via SQL, everything works fine.

    Hi,
    I'm Galit from the QE team of VIN.
    All the things that you've described are correct.
    It is actually an edge case where the only VM from which the manual App could be managed in its Map view was removed from the App.
    The Manual App management is as designed, and may be changed in the future.
    There are 2 ways to overcome this situation:
    1.You can, as you stated, create another Manual App with similar name and remain with the "Zombie App".
    2. To run a specific command that will remove the Zombie App from the DB.
    Please note that option no. 2 involves using an API that we do not publish.
    If you would like to use option no. 2 contact me in private and we will see about supplying the relevant commands to run in order to delete the "zombie" application.
    Thanks,
    Galit Gutman

  • Router connection problems - DISH satellite receivers or too many devices?

    (Sorry about that - I thought I had broken the message up.)
    I'm having significant issues with my Linksys router timing out, and need help. I'll try to be as detailed as I can.
    I have the following:
    - Linksys WRT54G v3 wireless router I purchased off eBay 3-4 years ago. Using 128-bit WEP security. Even though the model says WRT54G, my Linksys setup page says I have a WRT54GL. (Don't know if that's pertinent but thought I'd include the info.)
    - HP desktop running Vista Home Premium
    - Gateway Solo laptop (circa 3Q 2002) running WinXP SP3, and also have WPC54G - Wireless-G Notebook Adapter
    - Gateway M275 Notebook running Windows XP Tablet Edition w/SP3, internal wireless card
    - Linksys WRE54G Wireless Range Expander, v3
    My broadband internet connects to my cable modem, which then runs via ethernet to my router. I have my desktop connected to my router via (wired) ethernet cable (port 1), and the two laptops connect wirelessly. All three computers on my network run just fine and have had no connection issues until I had two (2) DISH satellite receivers installed this past Friday.
    Prior to installation, I knew I would need the two receivers hooked up to my network via ethernet cable. Since one receiver was in the same room as my router, it was easy to run an ethernet cable from my router to the receiver (port 2). I had a challenge with the upstairs receiver, because I didn't have a direct connection, and wasn't sure how to wire it. So, I used my range expander by plugging in the expander into a nearby outlet, then connecting the receiver to the expander via ethernet cable.
    I had some issues getting a good signal, and did some troubleshooting but made it work. Now I had five devices connected to my router: two with a wired ethernet cable and three wirelessly.
    I started having connection timeouts within about 3-4 hours of satellite [receiver] installation. All of a sudden I couldn't connect to the internet on ANY device; both laptops couldn't connect wirelessly, my desktop couldn't connect, and the receivers were telling me my connection was bad. I checked the modem but there were no issues. Still, I unplugged the power cable from the modem for a minute, then reconnected - still no internet. I called my cable company to have them troubleshoot the modem, but they pinged it several times & got positive results - still no internet, so I ruled the modem out.
    I tried using the Windows connection troubleshooter to repair the problem, and got a DNS error message (which I don't know how to fix). I decided to unplug my router for 10-15 seconds, then plug back in - that got my internet connection going again. That lasted a couple hours & then failed. I unplugged the router again (is that a soft reset or a power cycle?), then reconnected & was able to connect to the internet. This happened a few more times over the weekend, and finally I decided the expander might be the issue (both DISH and Linksys tech support was not very helpful).
    I found a way to wire my second receiver via ethernet cable (port 3), so now I had three wired devices, and two wireless devices. I thought this would fix the problem; it didn't, but at least I learned how to wire CAT5 cable. So I got that going for me... which is nice.
    I plugged the ethernet cable directly from my modem to my desktop to test the timeout, but had no issues - the modem just wasn't the problem.
    I was getting some IP address conflicts on my Norton Internet Security, so I uninstalled that from my desktop, disconnected the power from the modem, disconnected the power from the router, shut down all devices, reset the IP addresses on the receivers, deleted the wireless connection from the laptops, shut down the desktop, and just left the whole mess alone for half a day. Then I reinstalled the Norton Internet Security, connected my wired devices, plugged the modem in, plugged the router in, reset my security, connected wirelessly with my laptops.
    Within an hour my connection timed out.
    Trying to chat with tech support wasn't feasible, as my connection kept going out. A guy at work said I should ping my IP address, and let it repeat until my connection goes out. So I unplugged the router and plugged it back in to get an internet connection, then opened a cmd prompt and typed
    ping 192.168.1.1 -t
    I left it alone for a few hours, and when I came back, my internet connection was down, but I was still getting active pings - no problems there.
    At this point I thought I had done everything except replacing my router (which I'm still tempted to do), but I called my broadband provider to see if there was anything they could do. One of the techs said I had too many devices connected to the internet, but I thought these routers were supposed to handle dozens of devices?
    I finally called Linksys Tech Support and had a conversation for 90 minutes. We went through all the steps of unplugging the modem, router & all connected devices, resetting the router, etc, etc. The only thing different he did was had me change my security from WPA to 64-bit WEP, and added passwords for DNS 1 and DNS 2 (same password for each).
    That was at 1:30am last night, and when I woke up to check my connection this morning, it was still connected. I have to check it again when I get home, but I'm wondering if I should just be prepared to get another router (and if so, any recommendations), or if there's something I'm still not doing that could resolve my issue - if I still have connection losses.
    Also, I'm concerned about the security thing. If changing from WEP 128 or WPA to WEP 64 fixed my problem, I'm not sure I feel completely protected from intrusion - isn't that pretty much the least amount of security I can have (without forgoing it altogether)??
    Finally, I've read a few threads suggesting possibly changing to static IP from DHCP; however, my satellite receiver installation documentation specifically advises against this for the receivers.
    Anyway, I would very much appreciate some help.
    Message Edited by CKdoubleU on 10-01-2008 08:43 AM

    First, please break that long text into separate paragraphs. No one wants to read one long run-on sentence.
    With Knowledge… Impossible means nothing.
    Credentials
    Computer experience: 11 years
    Cisco networks experience: 8 years
    Network administrator: 6 years
    N.E.T. networks experience: 6 years
    Linksys networks experience: 6 years
    Recent additions: A+ | Networks+ | Linux+ | Security+

  • Too many connections - even after closing ResultSets and PreparedStatements

    I'm getting a "Too many connections" error with MySQL when I run my Java program.
    2007-08-06 15:07:26,650 main/CLIRuntime [FATAL]: Too many connections
    com.mysql.jdbc.exceptions.MySQLNonTransientConnectionException: Too many connections
            at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:921)
            at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
            at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:812)
            at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3269)
            at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1182)
            at com.mysql.jdbc.Connection.createNewIO(Connection.java:2670)
    I researched this and found out that I wasn't closing the ResultSet and the PreparedStatement.
    The JDBC connection is closed by a central program that handles connections (custom connection pooling).
    I added the code to close all ResultSets and PreparedStatements, and re-started MySQL as per the instructions here
    but still get "Too many connections" error.
    A few other things come to mind, as to what I may be doing wrong, so I have a few questions:
    1) A few PreparedStatements are created in one method, and they are used and closed in a 2nd method; does this cause the "Too many connections" error?
    2) I have 2 different ResultSets, in nested while loops where the outer loop iterates over the first ResultSet and
    the inner loop iterates over the second ResultSet.
    I have a try-finally block that wraps the inner while loop, and I'm closing the second ResultSet and PreparedStatement
    in the inner while loop.
    I also have a try-finally block that wraps the outer while loop, and I'm closing the first ResultSet and PreparedStatement
    in the outer while loop as soon as the inner while loop completes.
    So, in the above case the outer while loop's ResultSet and PreparedStatements remain open until the inner while loop completes.
    Does the above cause "Too many connections" error?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The following are the relevant sections of my code (it is partially pseudo-code) showing the above 2 cases:
    // ps1 and ps2 are instance fields so they can be used later in run()
    PreparedStatement ps1;
    PreparedStatement ps2;

    void init( Connection jdbcConnection ) throws SQLException {
       String firstSQLStatement = "....";
       ps1 = jdbcConnection.prepareStatement( firstSQLStatement );
       String secondSQLStatement = "....";
       ps2 = jdbcConnection.prepareStatement( secondSQLStatement );
       String thirdSQLStatement = "....";
       PreparedStatement ps3 = null;
       ResultSet rsA = null;
       try {
            ps3 = jdbcConnection.prepareStatement( thirdSQLStatement );
            rsA = ps3.executeQuery();
            if( rsA.next() ){
                 rsA.getString( 1 );
            }
       } finally {
            if( rsA != null )
                 rsA.close();
            if( ps3 != null )
                 ps3.close();
       }
       // Notice how ps1 and ps2 are created here but not used immediately; only ps3 is
       // used immediately. ps1 and ps2 are used in another method.
    }

    void run( Connection jdbcConnection ) throws SQLException {
       ResultSet rs1 = ps1.executeQuery();
       try {
            while( rs1.next() ){
                 String s = rs1.getString( 1 );
                 ps2.setString( 1, s );
                 ResultSet rs2 = ps2.executeQuery();
                 try {
                      while( rs2.next() ){
                           String s2 = rs2.getString( 1 );
                      }
                 } finally {
                      if( rs2 != null )
                           rs2.close();
                      if( ps2 != null )
                           ps2.close();
                 }
            }
       } catch( Exception e ){
            e.printStackTrace();
       } finally {
            if( rs1 != null )
                 rs1.close();
            if( ps1 != null )
                 ps1.close();
       }
       // Notice in the above case rs1 and ps1 are closed only after the inner
       // while loop completes.
    }

    I appreciate any help.

    Thanks for your reply.
    I will look at the central connection pooling mechanism (which was written by someone else), but it is being used by many other Java programs others have written.
    They are not getting this error.
    An addendum to my previous note, I followed the instructions here.
    http://dev.mysql.com/doc/refman/5.0/en/too-many-connections.html
    There's probably something else in my code that is not closing the connection.
    But I just wanted to rule out whether opening a PreparedStatement in one method and closing it in another is a problem,
    or whether nested ResultSet loops cause the problem.
    I've read in a few threads that "Too many connections" can occur for unclosed ResultSets and PreparedStatements, and not just JDBC connections (see the sketch below).
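    For completeness, a minimal sketch of per-query cleanup using try-with-resources (Java 7 and later), which closes ResultSets and Statements automatically even when exceptions are thrown and removes most of this bookkeeping; the table and column names are hypothetical, for illustration only.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class OrderLookup {

        // Hypothetical "orders" table with "id", "total" and "customer_id" columns.
        static void printOrders(Connection conn, long customerId) throws SQLException {
            String sql = "SELECT id, total FROM orders WHERE customer_id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // ps and rs are closed automatically, even if an exception is thrown.
                        System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("total"));
                    }
                }
            }
            // The Connection itself still has to be closed / returned to the pool by the caller.
        }
    }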

  • An embarrassment of riches - too many tablets! Should I consolidate and get an iPad Air?

    Currently I have the following tablets:
    - Nexus 7 2013 wifi only
    - iPad 2 32 GB wifi + cellular, but no longer in contract
    - iPad 1 wifi + cellular, no longer in contract
    - an extra iPad 1, also wifi + cellular, not in contract that I got for free for helping somebody with some computer stuff
    That same friend who gave me the iPad 1 wants to upgrade from his iPad 4th generation to an iPad Air, and he wants to give me his 4th G iPad for free. It is a 64 GB wifi only model.
    The reason I never upgraded beyond the iPad 2 was because all the later models got thicker and heavier, and I thought my iPad 2 was already too heavy, which is why I got my Nexus 7. I hardly touch my iPad 2 these days. They are both within arm's reach, but the Nexus 7 is easier to hold, and allows voice input, which I often find convenient, and most of the time I reach for it. I like the voice input on my iPhone 5 too, and I feel like something is "missing" when I pick up my iPad 2. Plus it feels like it weighs a ton next to the Nexus 7.
    Well, anyway I have too many tablets. And soon I'll have the iPad 4th G. So I was thinking of selling the iPad 2, the two iPad 1s and the iPad 4 and getting an iPad Air, wifi + cellular with a data sharing plan with my iPhone 5. After all, compared to the iPad 2, the iPad Air is (1) lighter, (2) has Siri, (3) has a retina display, (4) has much better specs and (5) now the cellular model is universal, so if I take a trip to the U.S. it would be easier to use there.
    Yet there are things I now prefer about my Nexus 7 with Android 4.4 over iOS 7. Like it has better sharing features, the ability to send all kinds of attachments, more customization, better "hooks" into the UI from 3rd party apps, the ability to choose your own default mail apps and browsers. It seems just more flexible overall. And the Facebook app runs "smoother" than on my iPad 2.
    But I like iOS 7 too.
    I wonder if I would use an iPad Air any more than I do my iPad 2 since it is lighter than the recent iPads. The screen being larger might make reading magazines and news sites easier. And Siri is still better than the Android voice dictation in my opinion. Or will the weight and size actually keep driving me back to the Nexus 7?
    I guess I still haven't found the right place in my life for using a tablet for anything other than casual use. I don't want to get into a religious argument about whether tablets are production devices versus consumption devices, but basically I am in the camp that does not see them as true productivity devices. My iMac is for that.
    Conundrums, conundrums. I do have too many "devices" right now though. Within an arm's reach I can count 12 computers, tablets and phones all on wifi! Who needs that much stuff?!
    And if I do get the iPad Air, do I need 128 GB? The way they sell here in Japan, you don't put any cash down at all, and basically get an interest-free loan for 2 years. So "for just a few hundred yen more a month" you can go from the 32 GB model to the 64 GB model. And then "for just a few hundred yen more a month" you can go to the 128 GB model. It seems a waste not to get the 128 GB model. But in my current iPad 2 I only use a little more than half the current 32 GB.
    Decisions, decisions. Any thoughts?
    doug

    The weight is somewhat of an issue for me. I find the iPad 2 irritatingly heavy whenever I pick it up, and the iPad 4th generation is heavier even than my iPad 2.
    But for the amount I use it I could get used to that. That option is certainly the least headache, contract-wise, and I would avoid the ever-present possibility of dealing with AppleCare if something is "not perfect" (which it never is).
    I guess the only "practical" reason would be that I would be getting rid of all my iPads with a cellular option. So if I did make a trip to the U.S. in the spring, it would be harder to use the iPad 4. Here, in Japan, of course I could use the free tethering option on my iPhone 5.
    So I guess it is just the cellular option I'm giving up that is in the back of my mind.
    doug
