Glxgears run as root increases performance on Intel 945G [SOLVED]

I ran glxgears as normal user and fullscreen fps was 16.4.
I ran it as root and fullscreen fps was 126.2.
Here are the exact commands in order:
$ glxgears
$ su
# glxgears
I did nothing between the commands.  Does anybody know why?
Last edited by Lexion (2009-06-13 11:31:02)

You can also add your user account to the video group, as the devices are created as root:video, 660. Without DRI, all your rendering operations have to go through AIGLX, which is another abstraction layer to pass through before you get to the hardware.
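For example, a minimal sketch, assuming your username is lexion (replace it with your own, and log out and back in so the new group membership takes effect):
# gpasswd -a lexion video
$ groups lexion
$ ls -l /dev/dri/
Once the user can open the /dev/dri/ nodes directly, glxgears should get direct rendering without needing root.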

Similar Messages

  • Intel "Save Power / Increase Performance" popup

    Currently starting to replace the older X series laptops with X220s here at work.  The problem we are running into is that annoying green popup window from Intel HD Graphics asking whether to save power or increase performance.  The window can't be moved, and the users can't do much of anything for the couple of minutes it stays on the screen.  It's becoming a real hassle for us helpdesk guys.
    How do we get rid of this?
    Removing the drivers doesn't work, they just reinstall after reboot.

    From an unrelated thread out there on the web:
    When unplugging the mains AC adapter in Speed mode, I had an annoying green icon in the middle of my screen for 2 minutes that said "Save power" -> "Increased performance" and stayed for some time on top of other windows. I renamed the file C:\Windows\System32\nvvsvc.exe. This, however inelegant, appears to have eliminated the annoying popup; if anyone has a better solution for this, do let me know.
    Untested, unlikely, and a little scary.  It's the only thing I could find that even suggested a solution.  Otherwise, there are just a few other people complaining about the same thing on a variety of platforms.
    Z.

  • Error while running the root.sh script during Grid installation on a pre-installed 11g database.

    Hi Oracle Experts,
    I am trying to set up a new standalone Grid Infrastructure on a server with a previously installed Oracle 11g database.
    It all runs fine, but when it prompts me to run the root.sh script, it will not proceed because it asks whether to overwrite the existing files in /usr/local/bin.
    Well, I googled and answered Y to the overwrite prompts. It let the script run, but then it failed...
    Could you please help me with this?
    [root@asm ~]# /u01/app/11.2.0/grid/root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
        ORACLE_OWNER= oracle
        ORACLE_HOME=  /u01/app/11.2.0/grid
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
    [n]: y
       Copying dbhome to /usr/local/bin ...
    The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
    [n]: y
       Copying oraenv to /usr/local/bin ...
    The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
    [n]: y
       Copying coraenv to /usr/local/bin ...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl
    [root@asm ~]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl
    2015-03-18 01:42:25: Checking for super user privileges
    2015-03-18 01:42:25: User has super user privileges
    2015-03-18 01:42:25: Parsing the host name
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'oracle', privgrp 'oinstall'..
    Operation successful.
    CRS-4664: Node asm successfully pinned.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    acfsroot: ACFS-9320: Missing advmutil.
    acfsroot: ACFS-9320: Missing advmutil.bin.
    acfsroot: ACFS-9320: Missing fsck.acfs.
    acfsroot: ACFS-9320: Missing fsck.acfs.bin.
    acfsroot: ACFS-9320: Missing mkfs.acfs.
    acfsroot: ACFS-9320: Missing mkfs.acfs.bin.
    acfsroot: ACFS-9320: Missing mount.acfs.
    acfsroot: ACFS-9320: Missing mount.acfs.bin.
    acfsroot: ACFS-9320: Missing acfsdbg.
    acfsroot: ACFS-9320: Missing acfsdbg.bin.
    acfsroot: ACFS-9320: Missing acfsutil.
    acfsroot: ACFS-9320: Missing acfsutil.bin.
    acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:
    acfsroot: ACFS-9302: No installation files found at /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5xen-x86_64/bin.
    asm     2015/03/18 01:43:06     /u01/app/11.2.0/grid/cdata/asm/backup_20150318_014306.olr
    Successfully configured Oracle Grid Infrastructure for a Standalone Server
    When I checked for the ASM instance, it is not running ... just the ohasd service and nothing else:
    [root@asm grid]# ps -ef | grep pmon
    oracle    5831     1  0 01:15 ?        00:00:01 ora_pmon_db11g1
    root     12625  8794  0 02:30 pts/2    00:00:00 grep pmon
    [root@asm grid]#
    [root@asm grid]#
    [root@asm grid]# ps -ef | grep d.bin
    oracle   12643     1  5 02:30 ?        00:00:00 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
    root     12715  8794  0 02:30 pts/2    00:00:00 grep d.bin
    [root@asm grid]#
    Could you please help?

    hi,
    The issue is not with /usr/local/bin. When you execute root.sh, it tries to configure ASM with the new GRID home. The problem started when it tried to start the stack:
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting <<=================================================================
    acfsroot: ACFS-9320: Missing advmutil.
    Please let us know the details below:
    ==> Is ACFS configured on the servers?
         ==> acfsutil registry
                   acfsutil info fs output.
    ==> Without an ASM instance, how did the database and CRS start?
    ==> Please try stopping and starting the CRS.
    ==> crsctl query crs activeversion output
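    A minimal sketch of those checks as root, assuming the Grid home from the transcript (/u01/app/11.2.0/grid); note that the ACFS-9320 messages above suggest the acfsutil binaries may be missing, so the last two commands may fail:
    # export GRID_HOME=/u01/app/11.2.0/grid
    # $GRID_HOME/bin/crsctl stop has
    # $GRID_HOME/bin/crsctl start has
    # $GRID_HOME/bin/crsctl query crs activeversion
    # $GRID_HOME/bin/acfsutil registry
    # $GRID_HOME/bin/acfsutil info fs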
    Regards
    Krishnan

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low, often on the order of just 20%. This is usually the result of I/O bottlenecks: server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. The NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: Two 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSDs
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.
    Quick Links
    Tech OnTap Community
    Archive
    PDF


  • How can you increase performance?

    Hello,
    I am building a site, but its frame rate is very slow.
    I read some articles saying that Sprites are much faster than MovieClips.
    For example, I have 5 long movieclips (4300 pixels wide). These clips scroll horizontally to create a parallax effect.
    One of these movieclips contains other movieclips (none of the movieclips are animating; they are just graphics).
    So, to increase performance, I thought I would convert all the movieclips to Sprites, since I read that Sprites are better for performance.
    I came up with this method:
    function castMovieClipToSprite(source:MovieClip, recursive:Boolean = true):void {
         for (var i:int = 0; i < source.numChildren; i++) {
              var child:DisplayObject = source.getChildAt(i) as DisplayObject;
              if (child is MovieClip && recursive) {
                   castMovieClipToSprite(MovieClip(child), recursive);
                   // This cast only changes the compile-time type of the local variable;
                   // the object on the display list is still the same MovieClip instance.
                   child = Sprite(child);
              }
         }
    }
    The performance only slightly improved, and if I run the debugger I see that the child objects are still of type MovieClip.
    Does anyone know another way to increase performance?
    Thanks,
    Chris.

    I also ran some more tests and noticed some strange behaviour.
    Test links:
    - Without the gradient ( http://www.rhbmprogress.nl/temp/cs/performanceTest1/ )
    - With the gradient ( http://www.rhbmprogress.nl/temp/cs/performanceTest2/ )
    PS: I did not see any difference when I disabled the SWFProfiler, so I left it on for the tests.
    Desktop test,
    Firefox version:  4.0.1.
    Flash version: 10.2.152.32  (debugger version)
    Test1 - > 60 fps
    Test2 -> 55 fps
    Internet Explorer version: 9.0811.1642
    Flash version: 10.3.181.23 (debugger version)
    Test1 - >60 fps
    Test2 ->55 fps
    Other destop computer test:
    Firefox version 3.6.13
    Flash version: 10.1.52.14 (no debugger)
    Test 1-> 58 fps
    Test 2-> 35 fps
    Internet Explorer version 9.0.8112.1642
    Flash version: 10.3.181.23 (no debugger)
    Test1-> 60 fps
    Test2-> 37 fps
    As you can see, there are some major differences between Flash versions and browser types and versions.
    The solid fill seems to have a stable fps, but the alpha gradient fill does not.
    I wonder what Flash version and what browser type and version you used.

  • How can I increase performance of an interface

    When I run the interface in ODI it takes 5 days, so how can I increase the performance of the interface?
    Source contains: 30 crore (300 million) records
    I want to copy the 30 crore records to the target.
    Source: Oracle
    Target: Oracle
    I am using LKM: LKM SQL to SQL
    IKM: IKM Control Append
    Edited by: 967609 on 25 Oct, 2012 2:55 AM
    Edited by: 967609 on 25-Oct-2012 10:13

    It created a view and a synonym.
    My server name is REPA; the other server name is MISREPL.
    create or replace view REPA.C$_0XX_TR (
         C1_TJD,
         C2_CID,
         C3_BOO,
         C4_TYPE,
         C5_GRP,
         C6_POAM,
         C7_BALINT,
         C8_DUIN2,
         C9_CRLMT,
         C10_IRN,
         C11_TDUE,
         C12_CHKHLD,
         C13_WDLMT,
         C14_ZSBU,
         C15_BAL,
         C16_MCHG,
         C17_LCHG,
         C18_ACR,
         C19_CR,
         C20_DR,
         C21_CRCD
    ) as
    select * from (
    select     
         XX.TJD     C1_TJD,
         XX.CID     C2_CID,
         XX.BOO     C3_BOO,
         XX.TYPE     C4_TYPE,
         XX.GRP     C5_GRP,
         XX.POAM     C6_POAM,
         XX.BALINT     C7_BALINT,
         XX.DUIN2     C8_DUIN2,
         XX.CRLMT     C9_CRLMT,
         XX.IRN     C10_IRN,
         XX.TDUE     C11_TDUE,
         XX.CHKHLD     C12_CHKHLD,
         XX.WDLMT     C13_WDLMT,
         XX.ZSBU     C14_ZSBU,
         XX.BAL     C15_BAL,
         XX.MCHG     C16_MCHG,
         XX.LCHG     C17_LCHG,
         XX.ACR     C18_ACR,
         XX.CR     C19_CR,
         XX.DR     C20_DR,
         XX.CRCD     C21_CRCD
    from     REPA.XX@REMOTE XX
    where     (1=1)
    )
    create synonym     STG.C$_0XX_TR
    for           REPA.C$_0XX_TR@remote
    insert into     STG.XX_TR (
         TJD,
         CID,
         BOO,
         TYPE,
         GRP,
         POAM,
         BALINT,
         DUIN2,
         CRLMT,
         IRN,
         TDUE,
         CHKHLD,
         WDLMT,
         ZSBU,
         BAL,
         MCHG,
         LCHG,
         ACR,
         CR,
         DR,
         CRCD
    )
    select
    TJD,     CID,
         BOO,
         TYPE,
         GRP,
         POAM,
         BALINT,
         DUIN2,
         CRLMT,
         IRN,
         TDUE,
         CHKHLD,
         WDLMT,
         ZSBU,
         BAL,
         MCHG,
         LCHG,
         ACR,
         CR,
         DR,
         CRCD
    FROM (
    select      
         C1_TJD TJD,
         C2_CID CID,
         C3_BOO BOO,
         C4_TYPE TYPE,
         C5_GRP GRP,
         C6_POAM POAM,
         C7_BALINT BALINT,
         C8_DUIN2 DUIN2,
         C9_CRLMT CRLMT,
         C10_IRN IRN,
         C11_TDUE TDUE,
         C12_CHKHLD CHKHLD,
         C13_WDLMT WDLMT,
         C14_ZSBU ZSBU,
         C15_BAL BAL,
         C16_MCHG MCHG,
         C17_LCHG LCHG,
         C18_ACR ACR,
         C19_CR CR,
         C20_DR DR,
         C21_CRCD CRCD
    from     STG.C$_0XX_TR
    where          (1=1)     
    ) ODI_GET_FROM
    I am getting the following error:
    ODI-1228: Task INT_DBLINK (Integration) fails on the target ORACLE connection STG.
    Caused By: java.sql.SQLException: ORA-12154: TNS:could not resolve the connect identifier specified
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
         at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
         at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
         at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
         at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
         at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
         at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
         at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)
         at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
         at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)
         at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3954)
         at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)
         at oracle.odi.runtime.agent.execution.sql.SQLCommand.execute(SQLCommand.java:163)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:102)
         at oracle.odi.runtime.agent.execution.sql.SQLExecutor.execute(SQLExecutor.java:1)
         at oracle.odi.runtime.agent.execution.TaskExecutionHandler.handleTask(TaskExecutionHandler.java:50)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.processTask(SnpSessTaskSql.java:2913)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java:2625)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatAttachedTasks(SnpSessStep.java:558)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java:464)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java:2093)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$2.doAction(StartSessRequestProcessor.java:366)
         at oracle.odi.core.persistence.dwgobject.DwgObjectTemplate.execute(DwgObjectTemplate.java:216)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.doProcessStartSessTask(StartSessRequestProcessor.java:300)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor.access$0(StartSessRequestProcessor.java:292)
         at oracle.odi.runtime.agent.processor.impl.StartSessRequestProcessor$StartSessTask.doExecute(StartSessRequestProcessor.java:855)
         at oracle.odi.runtime.agent.processor.task.AgentTask.execute(AgentTask.java:126)
         at oracle.odi.runtime.agent.support.DefaultAgentTaskExecutor$2.run(DefaultAgentTaskExecutor.java:82)
         at java.lang.Thread.run(Thread.java:662)
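    A sketch of how one might chase down ORA-12154 here, assuming the REMOTE database link shown in the generated SQL (the link is resolved by the target database, which is where the STG connection failed):
    SQL> select db_link, host from all_db_links where db_link like 'REMOTE%';
    $ tnsping <the HOST value returned above>
    If tnsping cannot resolve that connect identifier on the target database server, add it to that server's tnsnames.ora (or point TNS_ADMIN at the right directory) and rerun the interface.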

  • What is the process behind the root.sh and orainstRoot.sh scripts run by root

    What is the process behind the root.sh and orainstRoot.sh scripts run by root? Please explain in detail what they do.

    http://sites.google.com/site/catchdba/Home/what-orainstroot-sh-and-root-sh-scripts-will-do
    $ sudo /local/mnt/oraInventory/orainstRoot.sh
    AFS Password:
    Changing permissions of /local/mnt/oraInventory to 770.
    Changing groupname of /local/mnt/oraInventory to dba.
    The execution of the script is complete
    $ sudo /local/mnt/oracle/product/11.1.0.6/root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
       ORACLE_OWNER= oracle
       ORACLE_HOME= /local/mnt/oracle/product/11.1.0.6
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
       Copying dbhome to /usr/local/bin ...
       Copying oraenv to /usr/local/bin ...
       Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    Finished product-specific root actions.
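    Both scripts are plain shell scripts, so one straightforward way to see what they do is simply to read them; a sketch using the paths from the transcript above:
    $ less /local/mnt/oraInventory/orainstRoot.sh
    $ less /local/mnt/oracle/product/11.1.0.6/root.sh
    As the output above suggests, orainstRoot.sh mainly fixes the ownership and permissions of the oraInventory directory, while root.sh copies dbhome, oraenv, and coraenv to the local bin directory and creates/updates /etc/oratab.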

  • Mail has 16k  messages, and performance is very slow, with loading times taking up to 5 seconds every time I open Mail. How can I increase performance?

    Mail has 16k  messages, and performance is very slow, with loading times taking up to 5 seconds every time I open Mail.
    How can I increase performance?
    I'm running a MacBook Air 4GB 1.7GHz  10.7.2.
    Graham

    One possible solution would be to organise your inbox into folders.
    It's never really good on any system to have one folder that holds everything.
    Try going to the web GUI for that mail account, organise your folders, and move mail from your inbox into corresponding folders for better organisation.
    Several folders that together hold the contents of one big folder will usually load a little quicker, because a folder's contents may not be downloaded until it is viewed.
    So having 10 folders with organised content, and an inbox that holds only new email, would work much, much quicker with IMAP.
    Most IMAP servers will only update the contents of a folder when it is viewed.

  • Managed Client running under root

    Hi
    Can any one of you here tell me what Managed Client is?
    I saw it in Activity Monitor and it is running under the root account.
    Is it part of Apple Remote Desktop?
    Or is some other application running it?

    Try a google search for *managed client site:apple.com* and peruse the hits.

  • How do I run iCal 1.5 on an intel mac or fix iCal 2.0 webdav publishing?

    I've been using iCal 1.5 (any version: 1.5.2, 1.5.5, etc.) to publish to a Zope server using webdav for a few years now. I just bought a new Mac Pro and discovered that while iCal 2.0 still publishes using webdav, the Zope server is no longer able to recognize the .ics file (it still works perfectly with any .ics published via iCal 1.5).
    If I export an iCal 2.0 calendar as an .ics file to my desktop and then manually ftp that up, the server recognizes that, so the problem lies somewhere in 2.0's webdav publishing process.
    So, in short - is there a way to run iCal 1.5 on an intel mac, or to edit how iCal 2.0 publishes via webdav?
    Any ideas at all would be welcome.

    Hi DaddyPaycheck,
    Thanks for your helpful reply.
    However, I'm still wondering whether the SMC reset includes all the other possible resets like PRAM and NVRAM, or should I still try Option + Command + P + R? In other words, is it the ultimate, complete reset you can do on an Intel Mac?
    It is just to know the right way to do it, because since the last SMC reset I described, everything has been stable and fast without any trouble whatsoever, so I prefer to leave things unaltered and just be sure about the procedure when it is needed.
    Thanks in advance for replying,
    Beliarus

  • When we run the $CRS_HOME/root.sh script, it hangs for a very long time

    Hi,
    At the time of oracle cluster ware installation, when we run $CRS_HOME/root.sh scripts…
    bash-3.00# /export/home/oracle/product/10.2.0/crs/root.sh
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    WARNING: directory '/export/home/oracle/product/10.2.0' is not owned by root
    WARNING: directory '/export/home/oracle/product' is not owned by root
    WARNING: directory '/export/home/oracle' is not owned by root
    WARNING: directory '/export/home' is not owned by root
    WARNING: directory '/export' is not owned by root
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: rac1 rac1-priv rac1
    node 2: rac2 rac2-priv rac2
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Now formatting voting device: /dev/rdsk/c0d1s1
    Format of 1 voting devices complete.
    Startup will be queued to init within 30 seconds.
    This hangs for a very long time; I Ctrl-C'd out of it and re-ran it. Same result: it stops at the last line, and it has been sitting there for hours now. Any idea?
    Thanks
    Anup
    Edited by: user485641 on 25 April 2009, 7:05 PM

    What OS? What Oracle version?
    What do you find in the cluster log?
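    For the cluster log, a sketch assuming the CRS home from the transcript (/export/home/oracle/product/10.2.0/crs) and node name rac1; the exact log layout can vary by version:
    $ tail -100 /export/home/oracle/product/10.2.0/crs/log/rac1/alertrac1.log
    $ ls /export/home/oracle/product/10.2.0/crs/log/rac1/cssd/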

  • I downloaded Firefox 5.0. Firefox will not open. States running offline. I have an Intel Pentium/Windows XP system.

    I downloaded Firefox 5.0. Firefox will not open; it states it is running offline. I have an Intel Pentium/Windows XP system.
    This also happened when I tried to download Firefox 4.
    Firefox 3.6.18 works fine

    You may need to delete the Firefox program folder before (re)installing the latest Firefox.
    If you still have problems after installing, check your security software.
    A possible cause is security software (firewall) that blocks or restricts Firefox or the plugin-container process without informing you, possibly after detecting changes (update) to the Firefox program.
    Remove all rules for Firefox from the permissions list in the firewall and let your firewall ask again for permission to get full unrestricted access to internet for Firefox and the plugin-container process and the updater process.
    See:
    *https://support.mozilla.org/kb/Server+not+found
    *https://support.mozilla.org/kb/Firewalls
    *http://kb.mozillazine.org/Browser_will_not_start_up

  • Best way to use models & increase performance

    Hi all,
    I have some doubts about creating model objects.
    1. How many RFCs can a model object contain?
    2. I have a business scenario where I have to use 4 function modules to perform a task. If I create a single model object for these 4 function modules, will that increase performance, or will creating a model object for each function module increase performance?
    3. Are there any good docs on SDN about best practices or increasing performance when creating and using model objects? Please paste the links, or if anyone has any docs, please send them to me.
    Thanks & Regards,
    Lokesh

    Hi...
    1. How many RFCs can a model object contain?
    SAP recommends the following:
    RFC connection pools are specific to a JCo destination.
    Therefore, all deployed applications using the same model object pointing to the same JCo destination will share the SAME CONNECTION POOL.
    This fact defines both the scope of the connection management and determines the number of concurrent applications that may use the JCo destination.
    A model object should contain those RFMs that supply the functionality of either a discrete business task or some atomic subset of the business task:
    -> Having one RFM per model is inefficient from a connection management point of view.
    -> Having all your RFMs in one big model object is inefficient from a reuse point of view.
    2. I have a business scenario where I have to use 4 function modules to perform a task. If I create a single model object for these 4 function modules, will that increase performance, or will creating a model object for each function module increase performance?
    As described above, if the RFMs supply the functionality for a single task, then put them in one model.
    3. Are there any good docs on SDN about best practices or increasing performance when creating and using model objects?
    This is described in the JA310 (Web Dynpro Java) book. You can download it from the marketplace.
    PradeeP

  • How to increase performance of Adobe forms in the MSS Business Package

    Hi
    We have implemented the MSS Business Package with PCR Adobe forms.
    Portal NW04 SP18, ERP 2004, ADS is NW04 SP16, and Adobe Reader 7.0.7.
    We have developed our own PCR using the existing ISR framework.
    Everything is working fine, but users are facing performance problems; for example, sometimes while opening a PCR form the browser hangs.
    Is there any way to increase the performance of the Adobe forms for PCR?
    Thanks in advance
    Gopal

    Hi!
    Interactive Forms need a lot of performance on the client side. If the client hangs, I think this is related to client issues.
    Also, I would update the forms server (ADS) to the same version as the other NW components (Portal).
    Sigi

  • Getting root's crontab to run with root's privs

    I am trying to schedule a cron job to run some serveradmin scripts. If I edit /etc/crontab, the script runs as scheduled, but while it appears to run as user root, it doesn't appear to run with root's privileges.
    Here is /etc/crontab and a simplified script:
    headquarters:~ admin$ cat /etc/crontab
    # The periodic and atrun jobs have moved to launchd jobs
    # See /System/Library/LaunchDaemons
    # mi hr md mo wd who command
    0-59/15 * * * * root /Users/admin/log_afs >> /Users/admin/afs.log
    headquarters:~ admin$ cat log_afs
    echo ----
    echo 'Timestamp: ' `date`
    CONNS=`serveradmin command afp:command = getConnectedUsers | grep ipAddress | wc -l`
    echo 'Number of Connections: ' $CONNS
    Once I install this crontab, I get an afs.log created in the correct location with a hh:15 timestamp, owned by root. To me, this should mean two things:
    a) The job is running as scheduled
    b) The job ran as root
    Yet I'm getting output identical to that I would get if I'd run the script as admin (without the sudo prefix). Below is an interactive session:
    headquarters:~ admin$ ./log_afs
    Timestamp: Wed Nov 14 19:33:05 MST 2007
    serveradmin must be run as root
    Number of Connections: 0
    headquarters:~ admin$ sudo ./log_afs
    Timestamp: Wed Nov 14 19:33:29 MST 2007
    Number of Connections: 10
    headquarters:~ admin$
    And below is the output from the cron job:
    headquarters:~ admin$ ls -al afs.log
    -rw-r--r-- 1 root staff 287 Nov 14 19:30 afs.log
    headquarters:~ admin$ cat afs.log
    Timestamp: Wed Nov 14 19:30:00 MST 2007
    Number of Connections: 0
    The script I want to run does a lot more than just count connections; I've simplified it here for discussion. My main question now is "How do I schedule jobs that require root access?" ... not "How do I report X about afs?"

    I don't understand ... if the script is running as root (as evidenced by root owning the output file, and /etc/crontab sets root as the user the script runs as) ... then why doesn't root have privs? Essentially, this is my original question.
    Besides, my original solution was sudo crontab -u root -e with the same results. Editing /etc/crontab and setting the user to root was an alternate approach.
    headquarters:~ admin$ sudo crontab -u root -l
    Password:
    # mi hr md mo wd command
    0-59/15 * * * * /Users/admin/log_afs >> /Users/admin/afs.log
    headquarters:~ admin$ cat afs.log
    Timestamp: Thu Nov 15 08:45:00 MST 2007
    Number of Connections: 0
    headquarters:~ admin$ sudo ./log_afs
    Timestamp: Thu Nov 15 08:48:40 MST 2007
    Number of Connections: 10
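    One thing worth ruling out is cron's minimal environment: cron's default PATH usually does not include /usr/sbin, where serveradmin normally lives, and the script never captures stderr, so a "command not found" message would be invisible in the log. A sketch of the script with an absolute path and stderr captured (assuming serveradmin is at /usr/sbin/serveradmin on your server):
    #!/bin/sh
    echo ----
    echo 'Timestamp: ' `date`
    CONNS=`/usr/sbin/serveradmin command afp:command = getConnectedUsers 2>&1 | grep ipAddress | wc -l`
    echo 'Number of Connections: ' $CONNS
    And in the /etc/crontab entry, capture stderr too:
    0-59/15 * * * * root /Users/admin/log_afs >> /Users/admin/afs.log 2>&1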
