Why such horrible performance, and why is Verizon unwilling to help?

I have FiOS with Internet, TV, and five cell phones, and terrible performance. I can barely connect from one room to another over wireless. Occasionally it works, often at substandard strength. When I visit other people's houses the connections are great, even when the connection is coming from the neighbor next door! I have complained several times. I have BEGGED to have a modern router put in. The router is the original antiquated machine they originally put in; I think it runs on vacuum tubes. Tech support will not swap it out. Meanwhile my neighbor across the street mentions a little problem and, boom, they get a new router. Mine is still an 802.11g. They want you to pay for an 802.11n router. I pay $600 a MONTH for crummy service with no willingness to help out.
Can anyone tell me if Cablevision has better service? I just got back from my cousin's, and one of their service guys came over, spent two hours checking things out, and went out of his way to verify the service was up to or better than standard. I was very impressed. The Verizon guys seem not the slightest bit interested in helping, but they did want to prove they were smarter than everybody else.
Any advice or recommendations would be very welcome. Please help. I give up on these corporate thieves. Please don't hesitate to advise. Thanks.
Bill
Email info removed as required by the Terms of Service.
Message was edited by: Admin Moderator


Similar Messages

  • My App Store won't update my apps because my old iTunes account keeps coming up and asks for billing info. HELP!!!

    My App Store won't update my apps because my old iTunes account keeps coming up and asks for billing info. HELP!!!

    Apps are always associated with the Apple ID that was used to purchase them. This cannot be changed. To avoid getting the request you are seeing, delete all apps that request the former Apple ID and download them again.
    Any apps for which you paid a fee will require that you purchase them again.

  • TS4036 My iPhone 5 was replaced today, and when I did a restore I could only access a backup older than the one I made yesterday. Hence, I do not have access to some apps such as ebooks and games. Can you help resolve this issue?

    My iPhone 5 was replaced today, and when I did a restore I could only access a backup older than the one I made yesterday. Hence, I do not have access to some apps such as ebooks and games. Can you help resolve this issue?

    My girlfriend had a similar issue because her new device was not updated to iOS 7 when she turned it on, but her backup was created using iOS 7. Check to be sure the backup was created, and then factory reset the phone after you have updated to iOS 7.

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks: server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
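The 50% licensing reduction claimed above follows directly from per-core pricing: halving the server count halves the cores that must be licensed. A minimal sketch of that arithmetic, where the per-core price and core count are illustrative assumptions, not figures from this article:

```python
# Rough sketch of per-core SQL Server licensing math.
# PRICE_PER_CORE and CORES_PER_SERVER are illustrative assumptions,
# not NetApp's or Microsoft's published numbers.

PRICE_PER_CORE = 7_000        # assumed per-core license price, in dollars
CORES_PER_SERVER = 12         # e.g. two 6-core sockets per server

def licensing_cost(num_servers: int) -> int:
    """Total cost when every core in every server must be licensed."""
    return num_servers * CORES_PER_SERVER * PRICE_PER_CORE

before = licensing_cost(10)   # HDD-bound: 10 servers needed
after = licensing_cost(5)     # flash removes the I/O wait: 5 servers suffice

print(before, after, before - after)
```

With these assumed figures the savings equal the remaining license bill, i.e. a 50% reduction, regardless of the actual per-core price.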
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: Two 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSDs
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through Fibre Channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
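The payback-period arithmetic behind an analysis like this is straightforward. A sketch using the article's headline figure of more than $1 million saved over the 3-year period; the up-front investment amount is a placeholder assumption, since the article does not state it:

```python
# Sketch of payback-period arithmetic behind an ROI analysis like Table 2.
# total_savings comes from the article's "more than $1 million over 3 years"
# claim; the investment figure is a hypothetical placeholder.

total_savings = 1_000_000          # dollars over the analysis period
months = 36                        # 3-year analysis period
monthly_savings = total_savings / months

investment = 150_000               # assumed up-front cost of the new storage

payback_months = investment / monthly_savings
print(round(payback_months, 1))
```

A real analysis such as NetApp Realize would also discount future savings to compute NPV, but the payback period is simply investment divided by the monthly savings rate.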
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over the 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
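A compression ratio of R:1 means R bytes of logical data occupy one byte of physical storage, so effective capacity scales linearly with the ratio. A trivial sketch, with the 10 TB physical capacity chosen purely for illustration:

```python
# A compression ratio of R:1 means R bytes of logical data fit in 1 byte
# of physical storage. The 10 TB figure below is illustrative only.

def effective_capacity_tb(physical_tb: float, ratio: float) -> float:
    """Logical capacity achievable at a given compression ratio."""
    return physical_tb * ratio

print(effective_capacity_tb(10, 1.5))   # test-data ratio from this article
print(effective_capacity_tb(10, 1.8))   # production-data ratio
```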
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.


  • iPad set up and syncing for my mom--help!

    I just bought an iPad for my mom. I want to set it up for her (she's not too tech savvy). A couple of things to know about my situation:
    - she is a PC user
    So, I can set up her iPad at work where I have access to a PC. I have her iTunes log-in information to do so; however, when she goes to sync it on her own PC at her house, will it be okay (i.e., everything will be intact and accounted for)? I won't be able to help her hands-on if it does not... She's in WI and I'm in CA.
    I'm thinking the initial activation on my work PC should be fine (just as the Apple sales staff activates iPads or iPhones at the store, and then we go home to sync).
    Please provide me with some guidance and answers on this... I really want her to be able to use this seamlessly.
    -D

    If you purchase anything she'll have to transfer purchases via iTunes, but I've reformatted and then synced fine, so all should go well.
    If all else fails you can always use a free trial of the likes of GoToMeeting or even easier for her GoToAssist to take control of her pc and fix things.
    Note: I don't work for Citrix; I just like their products and know they have 30-day trial accounts.

  • Forgotten username and password for G4 notebook help

    Hi there, I have forgotten my username and password for a G4, and I also do not have a disk.
    Any ideas?
    Steve

    See Sigs post in this thread: https://discussions.apple.com/thread/2318925?threadID=2318925&tstart=0

  • I just used a FireWire connection to transfer files from an older Mac to a newer Mac. My newer Mac has lost all that was on it, and all the applications are opening like they have never been used before and asking for Rosetta. Help!!!

    I just used a FireWire connection to transfer files from a PowerBook G4 to a MacBook Pro. The MacBook Pro has lost all that was on it, and all the applications are opening like they have never been used before and asking for Rosetta. How do I find all my files on the MacBook Pro, or undo what I have done???

    IGNORE the last post: he thought you were using Lion or Mountain Lion.
    Since you are using Snow Leopard, you need to install Rosetta. It can be found on your Snow Leopard install DVD in the Options section.
    Once installed, all should be well again.
    Also, if your computer is connected to the internet when the dialog box about PowerPC applications comes up, it MIGHT allow you to download Rosetta from the internet. There has been some discussion that Apple has discontinued internet download of Rosetta into Snow Leopard.
    Someone else will have to address your lost files problem.

  • Happy New Year everyone, and thanks for all your help & insight this year

    Just wanted to thank everyone for all the help & consideration you have given to people who have come to the Forum this year for help & assistance. And the help you have given me, too, when you may not have thought that your answers were helping anyone other than the person who made the initial post.
    Tom, Ian, Piero, Alchroma, Michel, Victor, Mitchell, David and others too numerous for me to recall at the moment - thanks so much for your contributions. You make this Forum rock !

    Hi(Bonjour)!
    +J'utiliserai le français pour vous transmettre mes meilleurs voeux de Nouvel An 2009.+
    +Le forum Final Cut Express est fort et amical.+
    +Bonne année à notre bon ami fishfillet également....+
    I will use the French language to send you my best wishes for the year 2009.
    The Final Cut Express forum is strong and friendly.
    Best wishes to our good friend fishfillet too...
    Michel Boissonneault

  • Why such poor performance?

    Hi.  New Arch Linux user here, but not new to Linux (I'm no expert though, I've merely tinkered with it here and there over the past 7 years or so)
    I have this old laptop that I've installed Arch on in order to give it some new life. While it's old, it's not exactly ancient either, and it's a system I thought Linux would run well on, at least for modest web surfing, paper writing, and the like.
    Dell Inspiron 8200
    Pentium 4, 1.6 GHz
    1 GB RAM
    Nvidia GeForce4 Go 440 graphics (64MB) -- using Nvidia 96xx driver, because based on what I read it seems open-source 3D drivers aren't usable/stable yet
    Using XFCE4 for a desktop environment.
    I'm having an issue with performance on the system. I check with top and it shows Xorg using about 20% of the processor at idle. That goes up when I'm moving windows or if I make top refresh faster than the default. It seems like pretty much anything having to do with X redrawing hammers the CPU. The biggest offender would be AbiWord: I was working on a document and scrolling takes the CPU usage all the way to 100%. I thought it might be compositing, so I turned it off, and performance is even worse. Xorg uses about 30% CPU at idle and there is about a 2-second delay when I try to minimize/maximize a window. I have no idea why it would actually run better with compositing on. Of course, the performance with compositing isn't impressive either.
    Also, although this may be expected since general 2D performance seems bad... 3D/OpenGL performance is terrible.  I installed a few games and they run like crap.  For instance, SuperTux runs at about half speed at best.  I am almost certain that it isn't that the computer's too weak for it, because I remember running it a few years ago on my Pentium II/450.  I even tried some games in WINE and they lag as well (I know WINE imposes a bit of a performance penalty but these games never really pushed the CPU too badly under Windows XP)
    I'm not sure if I misconfigured something or if this laptop just hates me.  Any suggestions?
    EDIT 4/1/2011 - I'm not bumping this thread because of its age, but if any searchers in the future come across this, I'd like to note that I discovered this issue is not Linux-related. My laptop has a faulty heat sensor that underclocks the processor to about the equivalent of a 233 MHz Pentium when the temperature rises just a tiny bit. However, when it does this the operating system will still report that it is running at the full 1.6 GHz, as it's basically just overheating protection kicking in. The system thinks it is going to catch fire or something and freaks out. It is apparently a common flaw with these machines as they age. I am able to maintain decent performance by keeping the temperature low (of course this means running the fans all the time on low as a minimum, but I'm finally replacing this in a few months so I am not concerned about its well-being). If it does underclock the processor for some reason, pressing Fn+Z will temporarily return it to the original speed, and it should stay there providing the temperature doesn't rise again.
    Last edited by MrKsoft (2011-04-01 18:00:05)
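    For anyone hitting similar throttling, the currently reported clock can be cross-checked from user space by parsing /proc/cpuinfo on Linux. A minimal sketch (the helper name is mine, not from the thread; on many machines /proc/cpuinfo reflects the throttled frequency, though as the edit above notes, some firmware-level throttling is invisible to the OS):

    ```python
    def parse_cpu_mhz(cpuinfo_text):
        """Extract the 'cpu MHz' values (one per core) from /proc/cpuinfo text."""
        speeds = []
        for line in cpuinfo_text.splitlines():
            if line.lower().startswith("cpu mhz"):
                # lines look like: "cpu MHz         : 1600.000"
                speeds.append(float(line.split(":")[1].strip()))
        return speeds

    if __name__ == "__main__":
        try:
            with open("/proc/cpuinfo") as f:
                print(parse_cpu_mhz(f.read()))
        except OSError:
            print("no /proc/cpuinfo on this platform")
    ```

    If the values printed sit far below the nominal clock while the machine is lightly loaded, throttling (or a faulty sensor, as here) is a likely cause.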

    lagagnon wrote:
    MrKsoft wrote:I've run a very usable Ubuntu/GNOME/Compiz based system on my P2/450, 320MB RAM, with a Radeon 7500 before, and that's even older hardware, with bulkier software on top of it.
    I'm sorry but I find that very hard to believe. I work with older computers all the time - I volunteer with a charity that gets donated computers and we install either Puppy or Ubuntu on them, depending on vintage. On a P450 with only 320MB almost any machine of that vintage will run like a dog with Ubuntu and Compiz would be a no go. It would be using the swap partition all the time and the graphics will be pretty slow going.
    Hey, believe it: http://www.youtube.com/watch?v=vXwGMf141VQ
    Of course, this was three years ago.  Probably wouldn't go so well now.
    To start helping you diagnose your problems, please reboot your computer and, before you start loading software, show us the output of "ps aux", "free", "df -h", "lspci" and "lsmod" so we can check your system basics. You could paste all of those over to pastebin.ca if you wish.
    Here's everything over at pastebin: http://pastebin.ca/2005110

  • Performance and HA for HttpClusterServlet

              Hi,
              I didn't see much information in the documentation about the HttpClusterServlet:
              - can it be (easily) set up in a HA configuration (to avoid it being a SPOF)?
              - how does it perform?
              - is it possible to cluster it?
              Regards,
              Frank Olsen
              Stonesoft
              

              "Cameron Purdy" <[email protected]> wrote:
              >You can run HttpClusterServlet on a whole slew of Weblogic Express servers
              >with a h/w load balancer in front and a cluster in back, for example.
              > That
              >gives you no SPOF (assuming secondary h/w load balancer etc.) and some
              >scale.
              >
              OK.
              >I don't know how the software load balancer fits in there ...
              >
              The answer is... well, sorry for "plugging" our product -- selling it would
              of course be nice, but getting feedback on what we can do better is also a good
              reason to tell you about a possible alternative.
              As I see it, it could be an alternative to (i.e., it replaces) the dispatchers:
              - you run a cluster of WLS instances with in-memory replication to ensure failover
              of session state (or, JDBC persistence for a less performant alternative)
              - our StoneBeat WebCluster product can do this:
              . as I've explained in a thread on the in-memory replication group, this works
              fine in the major cases
              . I've been able to detect some scenarios that cause problems with sessions
              being lost, but only in cases where the Dynamic TCP feature of the WebCluster
              was not used (or keepalives were disabled)
              . I'm contacting BEA to see if they'd be willing to consider (certify) this
              as an alternative to the HW/SW dispatcher solutions
              . of course, each has pros and cons, but if the choice is there...
              . one advantage of WebCluster would be that it is simple to set up and manage;
              it is distributed and has no inherent SPOF; it has a very good test subsystem
              to allow for dynamic load balancing
              . we also have a whole range of products from load balancing for firewalls (and
              soon our own firewall), to load balancing of web servers, ..., to a HA solution
              for databases and other applications based on shared storage
              Regards,
              Frank Olsen
              Stonesoft
              >--
              >Cameron Purdy
              >Tangosol, Inc.
              >http://www.tangosol.com
              >+1.617.623.5782
              >WebLogic Consulting Available
              >
              >
              >"Frank Olsen" <[email protected]> wrote in message
              >news:[email protected]...
              >>
              >> Hi,
              >>
              >> I didn't see much information in the documentation about the
              >HttpClusterServlet:
              >> - can it be (easily) set up in a HA configuration (to avoid it being
              >a
              >SPOF)?
              >> - how does it perform?
              >> - is it possible to cluster it?
              >>
              >> Regards,
              >> Frank Olsen
              >> Stonesoft
              >>
              >
              >
              

  • Why not both trash and archive for gmail? & Save draft?

    1. For Gmail messages that I'm through with, I understand I can go to Settings and choose to have them either trashed (moved to the Trash folder) or archived (moved to the All Mail folder). But why an either/or choice? Why can't there be icons for both trash and archive? They are pretty different functions, each useful.
    If Mail is incapable of this, is there another email app that can do it?
    2. How do you save a draft of an email?

    Setting it to archive is a one-time step that is applied to the account:
    Settings > Mail, Contacts, Calendars
    Select the Gmail account you want, then slide "On" the Archive Messages feature on that page.
    From the inbox you do have the two options you're after, and both take two taps. One is the archive icon (an arrow pointing into a box); this sends mail to the All Mail folder. The second is an arrow label on a folder; that is the move function, and with two taps a message is trashed from your inbox.
    Drafts Mailbox set to Drafts (On the Server)
    Deleted Mailbox set to Trash

  • Mac Lifer using MacBook Pro - why is the performance and network SO slow now?

    I'm a designer with 15+ years of experience who lives on my machine. I'm in Adobe apps, Safari, Mail, Skype and GoToMeeting every day. Skype audio is spotty, Safari lags, Mail lags, and Adobe Photoshop CS6 fonts seem to give it a ton of grief. I would take it in to a store, but don't have the time.

    See ds store's excellent user tip - Why is my computer slow?
    You don't say which model you have, how much RAM, how much free space on your internal hard drive, etc. Information like that can be useful.
    Clinton

  • Horrible performance and millions of rows to compress.

    Hello. I wrote a script to compress a table that holds readings from a computer terminal. I read the original table, then insert a row into a new table at the beginning of a new reading, and then update a column with the end time and the count of duplicate readings. The compression works fine, but the more rows I test with, the longer it takes to run. There are 11,000,000 rows to compress, and it hangs when I try anything over 20,000. I don't know if it's hanging or just taking a long time to start. I put in checks to see when it's running and committing; with a small number of records it takes a minute to start and then runs through fast. When I add more records it just sits there.
    Any suggestions?
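    For what it's worth, the compression the script below performs is essentially a per-station run-length encoding of consecutive zero readings. A minimal Python model of that intent (column names borrowed from the script; this is a sketch of the logic, not the Oracle code itself):

    ```python
    def compress_readings(rows):
        """Collapse consecutive rel_conc == 0 readings per station into one row.

        rows: list of (station, begin_time, rel_conc) in reading order.
        Returns (station, begin_time, end_time, rel_conc, num_readings) tuples.
        """
        out = []
        for station, t, conc in rows:
            last = out[-1] if out else None
            # extend the current zero-run only if the station matches and
            # both the new and previous readings are zero
            if last and last[0] == station and conc == 0 and last[3] == 0:
                out[-1] = (station, last[1], t, 0, last[4] + 1)
            else:
                out.append((station, t, t, conc, 1))
        return out

    readings = [(1, "12:00", 0), (1, "12:01", 0), (1, "12:02", 0),
                (1, "12:03", 3.3), (1, "12:04", 0), (2, "12:00", 0)]
    print(compress_readings(readings))
    ```

    Doing this in one pass over already-ordered input, with no per-row lookups back into the source table, is the shape a fast solution should have.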
    set serveroutput on;
    set serveroutput on size 1000000;
    truncate table dms_am_nrt_test;
    declare
        rel_counter   number := 1;
        row_counter   number := 0;
        curr_station  number;
        curr_rel_conc number := 0;
        begin_date    date;
        station       number;
        test          varchar2(5);
        curr_index    number;
        trip          number := 5000;
        counter       number := 1;
        cursor c1 is
            select *
              from gilbert_r.loop_test;
    begin
        select station_num into station
          from gilbert_r.loop_test
         where rownum = 1;
        curr_station := station;
        for c1_rec in c1 loop
            -- note: this per-row lookup is redundant (the cursor already
            -- fetched station_num) and is one likely cause of the slowdown
            select station_num into station
              from gilbert_r.loop_test
             where nrt_temp_id = c1_rec.nrt_temp_id;
            if curr_station != c1_rec.station_num then
                row_counter  := 0;
                curr_station := station;
            end if;
            if c1_rec.rel_conc > 0
               or row_counter = 0
               or (curr_rel_conc > 0 and c1_rec.rel_conc = 0) then
                -- start a new compressed row
                curr_rel_conc := c1_rec.rel_conc;
                rel_counter   := 1;
                begin_date    := c1_rec.begin_datetime;
                select dms_am_nrt_seq.nextval into curr_index from dual;
                insert into site.dms_am_nrt_test
                       (nrt_id, struc_record_id, begin_datetime, end_date_time,
                        num_readings, station_num, port_num, port_loc, agent,
                        abs_conc, rel_conc, units, height, area, rt, peak_width,
                        station_status, alarm, error, error_code1, error_code2,
                        error_code3, error_code4, error_code5, flow_rate)
                values (curr_index, c1_rec.struc_record_id, begin_date, begin_date,
                        rel_counter, c1_rec.station_num, c1_rec.port_num,
                        c1_rec.port_loc, c1_rec.agent, c1_rec.abs_conc,
                        c1_rec.rel_conc, c1_rec.units, c1_rec.height, c1_rec.area,
                        c1_rec.rt, c1_rec.peak_width, trim(c1_rec.station_status),
                        c1_rec.alarm, trim(c1_rec.error), trim(c1_rec.error_code1),
                        trim(c1_rec.error_code2), trim(c1_rec.error_code3),
                        trim(c1_rec.error_code4), trim(c1_rec.error_code5),
                        c1_rec.flow_rate);
            elsif c1_rec.rel_conc = 0 and curr_rel_conc = 0 then
                -- extend the current run of zero readings
                rel_counter   := rel_counter + 1;
                curr_rel_conc := c1_rec.rel_conc;
                begin_date    := c1_rec.begin_datetime;
                update site.dms_am_nrt_test
                   set end_date_time = begin_date,
                       num_readings  = rel_counter
                 where nrt_id = curr_index;
            end if;
            if counter = trip then
                commit;
                trip := trip + 5000;
                dbms_output.put_line('Commit');
            end if;
            row_counter := row_counter + 1;
            counter     := counter + 1;
        end loop;
        commit;
    end;
    Message was edited by:
    SightSeeker1

    Hello
    Well, after a bit of playing around, and looking at this Ask Tom article, I've come up with this:
    create table dt_test_loop (station number, begin_time date, rel_conc number(3,2))
    insert into dt_test_loop values(1, to_date('12:00','hh24:mi'), 0 );
    insert into dt_test_loop values(1, to_date('12:01','hh24:mi'), 0);
    insert into dt_test_loop values(1, to_date('12:02','hh24:mi'), 0);
    insert into dt_test_loop values(1, to_date('12:03','hh24:mi'), 3.3);
    insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0);
    insert into dt_test_loop values(2, to_date('12:00','hh24:mi'), 0 );
    insert into dt_test_loop values(2, to_date('12:01','hh24:mi'), 0);
    insert into dt_test_loop values(2, to_date('12:02','hh24:mi'), 0);
    insert into dt_test_loop values(2, to_date('12:03','hh24:mi'), 4.2);
    insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0);
    insert into dt_test_loop values(1, to_date('12:05','hh24:mi'), 0);
    select
         station,
         begin_time,
         end_time,
         rel_conc,
         num_readings
    FROM
         (SELECT
               station,
               min(begin_time) over(partition by max_rn order by max_rn) begin_time,
               max(begin_time) over(partition by max_rn order by max_rn) end_time,
               rel_conc,
               count(*) over(partition by max_rn order by max_rn) num_readings,
               row_number() over(partition by max_rn order by max_rn) rn,
               max_rn
          FROM
               (SELECT
                     station,
                     rel_conc,
                     begin_time,
                     max(rn) over(order by station, begin_time) max_rn
                FROM
                     (select
                          rel_conc,
                          station,
                          begin_time,
                          case
                               when rel_conc <> lag(rel_conc) over (order by station, begin_time) OR
                                    station <> lag(station) over (order by station, begin_time) then
                                    row_number() over (order by station, begin_time)
                               when row_number() over (order by station, begin_time) = 1 then 1
                               else
                                    null
                          end rn
                     from
                          dt_test_loop)))
    WHERE
         rn = 1;
    Which gives:
      STATION BEGIN END_T  REL_CONC NUM_READINGS
            1 12:00 12:02         0            3
            1 12:03 12:03       3.3            1
            1 12:04 12:05         0            3
            2 12:00 12:02         0            3
            2 12:03 12:03       4.2            1
    As you can see, it's slightly wrong in that the readings for station 1 have all been grouped together from 12:04->12:05. The reason for this is the
    over (order by station, begin_time)
    part. What you really need is another column (which hopefully you have) that records the sequence of readings, i.e. an insert timestamp or sequence number. If you can use that instead of station and begin_time, you are rocking! :-)...
    create table dt_test_loop (station number, begin_time date, rel_conc number(3,2), ins_seq number)
    insert into dt_test_loop values(1, to_date('12:00','hh24:mi'), 0 ,1);
    insert into dt_test_loop values(1, to_date('12:01','hh24:mi'), 0,2);
    insert into dt_test_loop values(1, to_date('12:02','hh24:mi'), 0,3);
    insert into dt_test_loop values(1, to_date('12:03','hh24:mi'), 3.3,4);
    insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0,5);
    insert into dt_test_loop values(2, to_date('12:00','hh24:mi'), 0,6 );
    insert into dt_test_loop values(2, to_date('12:01','hh24:mi'), 0,7);
    insert into dt_test_loop values(2, to_date('12:02','hh24:mi'), 0,8);
    insert into dt_test_loop values(2, to_date('12:03','hh24:mi'), 4.2,9);
    insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0,10);
    insert into dt_test_loop values(1, to_date('12:05','hh24:mi'), 0,11);
    select
         station,
         begin_time,
         end_time,
         rel_conc,
         num_readings
    FROM
         (SELECT
               station,
               min(begin_time) over(partition by max_rn order by max_rn) begin_time,
               max(begin_time) over(partition by max_rn order by max_rn) end_time,
               rel_conc,
               count(*) over(partition by max_rn order by max_rn) num_readings,
               row_number() over(partition by max_rn order by max_rn) rn,
               max_rn
          FROM
               (SELECT
                     station,
                     rel_conc,
                     begin_time,
                     max(rn) over(order by ins_seq) max_rn
                FROM
                     (select
                          rel_conc,
                          station,
                          begin_time,
                          ins_seq,
                          case
                               when rel_conc <> lag(rel_conc) over (order by ins_seq) OR
                                    station <> lag(station) over (order by ins_seq) then
                                    row_number() over (order by ins_seq)
                               when row_number() over (order by ins_seq) = 1 then 1
                               else
                                    null
                          end rn
                     from
                          dt_test_loop)))
    WHERE
         rn = 1;
    Which gives:
      STATION BEGIN END_T  REL_CONC NUM_READINGS
            1 12:00 12:02         0            3
            1 12:03 12:03       3.3            1
            1 12:04 12:04         0            1
            2 12:00 12:02         0            3
            2 12:03 12:03       4.2            1
            1 12:04 12:05         0            2
    Also, I'm sure it can be simplified a bit more, but that's what I got....:-)
    HTH
    David
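    David's approach is a classic "gaps and islands" grouping, and the same idea can be sanity-checked on any engine with window functions. A self-contained sketch using Python's sqlite3 (SQLite 3.25+ is assumed; table and column names follow the example above, though the SUM-over-flags island numbering here is a variation on the MAX(rn) trick):

    ```python
    import sqlite3

    # In-memory demo of the gaps-and-islands grouping from the reply above.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dt_test_loop (station INT, begin_time TEXT, rel_conc REAL, ins_seq INT);
    INSERT INTO dt_test_loop VALUES
     (1,'12:00',0,1),(1,'12:01',0,2),(1,'12:02',0,3),(1,'12:03',3.3,4),
     (1,'12:04',0,5),(2,'12:00',0,6),(2,'12:01',0,7),(2,'12:02',0,8),
     (2,'12:03',4.2,9),(1,'12:04',0,10),(1,'12:05',0,11);
    """)
    rows = conn.execute("""
    SELECT station, MIN(begin_time) AS begin_t, MAX(begin_time) AS end_t,
           rel_conc, COUNT(*) AS num_readings
    FROM (
      SELECT station, begin_time, rel_conc, ins_seq,
             -- running count of run boundaries = island id
             SUM(is_start) OVER (ORDER BY ins_seq) AS grp
      FROM (
        SELECT station, begin_time, rel_conc, ins_seq,
               -- flag rows where station or rel_conc changes (NULL-safe)
               CASE WHEN station  IS NOT LAG(station)  OVER (ORDER BY ins_seq)
                      OR rel_conc IS NOT LAG(rel_conc) OVER (ORDER BY ins_seq)
                    THEN 1 ELSE 0 END AS is_start
        FROM dt_test_loop
      )
    )
    GROUP BY grp
    ORDER BY MIN(ins_seq)
    """).fetchall()
    for r in rows:
        print(r)
    ```

    This reproduces the corrected output above: each unbroken run of identical (station, rel_conc) values collapses to one row with its begin time, end time and reading count.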

  • How do I get a more modern look and feel for a compiled help file

    I gather from reading various posts that there is no easy way
    to generate a .chm file with a more modern look and feel (am I
    right about this?). Are there no plug-ins available from Adobe that
    will enable us to generate a help file that looks more modern?
    I hesitate to move to flash help or something like that
    because, of course, then the developers would have to hookup the
    hundreds of dialog boxes we have in our product to the new files. I
    also think that having one file is way easier to manage in terms of
    getting updated help to the customers. What are online Help
    developers generally delivering for apps where users are not
    connected to the Internet except perhaps periodically while using
    the software? I'm a writer (not a developer) so I need something that
    is easy to implement :-)
    Thanks for your help. It is much appreciated.

    To add to the advice already offered, you can apply a skin to
    a CHM file which allows you to customise it. However as with most
    things there are issues with this approach. You can read about this
    by downloading Rick Stone's excellent
    tips
    'n tricks file. Just look in the index under "skins". Something
    else you may want to look at is the beta of Adobe's
    AIR.

  • Message being sent and wait for event - BPM help.

    Hi Experts,
    I am doing a BPM interface in which I receive two files. Between these two files I have set up a correlation in the BPM and included it in the BPM's receive step 1 and receive step 2. After execution, when I look into SXMB_MONI, the message status shows processed successfully, but the Process Engine status shows "Message Being Sent", and in BPM monitoring the message waits for an event on the correlation object.
    I have gone through the following forum thread:
    Link: [https://forums.sdn.sap.com/click.jspa?searchID=27600139&messageID=7561954]
    but everything there is green, and my messages are still set to "Message being sent" status.
    Experts kindly advise me.
    Regards
    Mani.

    Hi Abhishek,
    Here i have given my BPM design,
    Start
    Fork - two branches
    Branch1 - Receive File1
    Branch2 - Receive File2
    Transformation
    Send
    End.
    In the correlation editor, I have specified the container, involved messages and properties clearly.
    I have gone through the SAP Basis contents as per your previous reply.
    One thing is, I have used the same design for my other interface and it goes through, but for this specific interface I am getting the error. If you could send me your mail address, I will send you the exact screenshots, so that you can advise me.
    Please, I need to complete this in a short period; kindly help me out.
    Regards
    Mani..
    Edited by: mani_sg on Jun 18, 2009 10:21 AM
