Oracle 9i Transaction Performance (Real World vs. TPC)?

Hello:
I was curious if anybody reading this forum has a server/disk architecture that has achieved over 10,000 Oracle Database transactions (with logging enabled) per second?
If so, can you please supply me with the clustered or standalone server and disk farm (NAS, SAN, DAS) details.
Thanks,
Robert
[email protected]
310-536-0018 x124
http://www.imperialtech.com

Hi,
Have a look at http://publications.uu.se/theses/.
This site has now been up for nearly a year. We're using Oracle 8.1.7, Apache 1.3.24 and Tomcat 4.0.3. We get about 4,500 hits per day, with peaks of up to 9,000.
Regards, Uwe

Hi there Uwe,
Given that you are running this in a production environment, are you using the XSQLRequest object programmatically, or are you running the servlet directly? Running the servlet directly, in particular, is just not production-ready, unless I am missing something.
1. There are no direct filesystem logging capabilities in the servlet. Doing inserts into a database table for logging purposes is not adequate, because the critical points in the servlet's operation are either connecting to the database or the post-query phase during the XML transform.
2. Errors such as invalid database connections, timeouts, and invalid XSQL/XSL files (i.e. all xsql-xxxx errors other than query errors) cannot be trapped from the servlet, either via XML or otherwise. They get written directly to the output stream as text.
Using the XSQLRequest object programmatically, we can work around some of these limitations, but that defeats the purpose and the otherwise ease of use of this servlet in an XML environment.
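For anyone weighing the programmatic route, here is a minimal sketch of driving an XSQL page through XSQLRequest so that timing and errors can be logged to the filesystem outside the servlet. It assumes the oracle.xml.xsql.XSQLRequest class roughly as shown in the Oracle XDK examples (a constructor taking the page URL and a process() method taking a parameter table plus output and error writers); the page location, parameter name and logging details below are made up for illustration, so verify the exact signatures against your XDK release.

import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.URL;
import java.util.Hashtable;

import oracle.xml.xsql.XSQLRequest; // Oracle XDK class (package/signature assumed from the XDK docs)

public class XsqlPageRunner {
    public static void main(String[] args) throws Exception {
        // Hypothetical page location and bind parameter
        URL page = new URL("file:///opt/app/xsql/orders.xsql");
        Hashtable params = new Hashtable();
        params.put("status", "OPEN");

        StringWriter out = new StringWriter();
        StringWriter err = new StringWriter();

        long start = System.currentTimeMillis();
        XSQLRequest req = new XSQLRequest(page);
        // process() renders the page; errors go to the error writer instead of straight into the response
        req.process(params, new PrintWriter(out), new PrintWriter(err));
        long elapsed = System.currentTimeMillis() - start;

        // Connection failures, bad XSQL/XSL files, timeouts, etc. can now be logged wherever you like
        System.out.println("Rendered in " + elapsed + " ms, output length " + out.toString().length());
        if (err.toString().length() > 0) {
            System.err.println("XSQL errors: " + err.toString());
        }
    }
}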
- Manish

Similar Messages

  • SAP Screen Personas Performance in the real world

    I've read the blogs: Personas is amazing, the user does much less clicking, etc., but in the real world, is anyone using this in production for a wide-scale deployment?
    I've tried this in my system and although it's cute, and at first people have fun clearing all the fields they never used, I felt the performance was severely lacking, and worst of all it inherits all the problems of SAP GUI for HTML, specifically bad formatting. I don't use SAP GUI for HTML because some transactions don't work well in it, namely FBCJ, CV01N, etc. Do you feel this problem is fixed by Personas?
    It's the same as NWBC, awesome, great, but when I try to use it in production... the consistency just isn't there. I need a reliable GUI and both NWBC and Personas crash one too many times.
    Am I seeing this wrong? Are you successfully using this in production?
    Best regards

    If you compare Personas to native SAPgui in unaltered transactions, then for sure Personas comes off worse. Its interface into the backend system is the HTMLgui, which isn't as responsive as a native SAPgui, and then it adds another layer on top. That layer is getting more efficient all the time, but still in most cases Personas is a little slower than HTMLgui.
    But that's missing the point of Personas. Look at this blog - Simplifying a multi-screen transaction with Personas. Here I simplify a transaction that requires a user to do a lot of clicking around to find all the information they need. This can take 30-60 seconds to get to everything. My Personas flavour takes a little while to render - maybe 5-6 seconds. That feels a little sluggish at first sight, until you realise it has everything the user needs in one screen with no further clicking or scrolling. Slow to render? Yes. Slower than the equivalent in SAPgui? Absolutely not. Being conservative, this saves our users 20-25 seconds each time they run the transaction. Automation is the key.
    Yes we are running this in production, and our users love it. Performance is not an issue for us or them. We have only 80-90 users using this live at the moment. There are customers with 10x that many Personas live in production.
    Steve.

  • Real World Item Level Permission Performance?

    I am considering implementing item-level permissions on a list we use. I've seen all the articles online cautioning not to do this with lists of more than 1,000 items, but the articles seem to have little detailed information about the actual impact and what causes the performance issues. Additionally, they seem to refer to document libraries more than lists. I'd like some feedback about what might occur if we were to use item-level security in our situation.
    Our situation is this: a SharePoint list of currently ~700 items, expected to grow by around 700 items per year. The list has about 75 fields on it. We have 8 Active Directory groups that have access to the list, based upon company department. Each item in the list can apply to one or more departments. The groups represent around 100-150 unique users.
    We would like item-level security to be set via workflow, to give particular groups access to an item based upon their group membership. For example, if the list item is for the HR department, then the HR group has access. If the item is for IT, then the IT group has access (and HR wouldn't).
    That's it. There would be no nesting of items with multiple permission levels, no use of user-level ACLs on the items, etc.
    Thoughts about this configuration and expected performance issues? Thanks for any feedback!
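    If it helps anyone evaluating the same approach, below is a rough sketch of what the per-item permissioning looks like when scripted against the SharePoint REST API instead of being set by a workflow action: break role inheritance on the item, then grant the department group a role. It's written in Java only to stay consistent with the rest of this collection; the site URL, list name, principal ID, role definition ID and bearer token are all placeholders, and the endpoints and authentication should be double-checked against your SharePoint version before relying on it.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ItemPermissionSketch {
        // Placeholder values - substitute your own site, list, IDs and authentication
        static final String SITE = "https://sharepoint.example.com/sites/dept";
        static final String LIST = "Requests";
        static final String TOKEN = "<access-token>";

        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();
            int itemId = 42;            // the list item to secure
            int groupPrincipalId = 7;   // principal ID of the department (e.g. HR) group in this site
            int roleDefId = 1073741827; // role definition ID for the desired permission level (look up the real value)

            // 1) Stop inheriting permissions from the list (assumed SharePoint 2013+ REST endpoint)
            post(http, SITE + "/_api/web/lists/getbytitle('" + LIST + "')/items(" + itemId
                    + ")/breakroleinheritance(copyRoleAssignments=false,clearSubscopes=true)");

            // 2) Grant the department group its role on just this item
            post(http, SITE + "/_api/web/lists/getbytitle('" + LIST + "')/items(" + itemId
                    + ")/roleassignments/addroleassignment(principalid=" + groupPrincipalId
                    + ",roledefid=" + roleDefId + ")");
        }

        static void post(HttpClient http, String url) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer " + TOKEN)
                    .header("Accept", "application/json;odata=verbose")
                    .POST(HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> HTTP " + resp.statusCode());
        }
    }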

    Just an update for anyone who finds this thread:
    I converted our data into a test SharePoint list with 1,500 rows. I then enabled full item-level security, with restrictions to hide data not created by the person.
    I then set individual permissions for each item that included 2-3 AD groups with different permissions (contribute, full ownership, etc.) and 2-3 individuals with varying permissions. The individuals represented around 50 total people.
    After the permissions were set, I did a comparison of loading individual views and the full data set in Standard and Datasheet views, both as an administrator with full list access and as several of the individuals who only had access to their designated items (typically 75-100 of the total list).
    The result was that I found no discernible difference in performance at the user-interface level when loading list views after item-level security was configured in this way. I understand this will vary based upon hardware configuration and exact permission configuration, but in our situation the impact of item-level security on a list of 1,500 items had very little, if any, negative performance impact. Note that I didn't check performance at the database server level, but I'm assuming the impact there was minimal since the front-end user experience was unaffected.
    I expect we'll put this solution into place and if we do I'll update this post when we have additional real-world usage information.

  • T61p 200GB at 7200RPM buffer size and real-world performance?

    Hi, I wanted to confirm what buffer size the 200GB 7200RPM drive option of the T61p has... there seems to be conflicting information regarding this model. I'd also appreciate any information regarding the real-world performance of this particular drive...
    Thanks!

    Both the Hitachi and Seagate 200GB/7200RPM drives used in the T61p have 16MB of onboard cache. Performance tests of these two drives are scattered across the internet; search Google for "hitachi 200GB 7K200" and "seagate 200GB 7200.3" respectively.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • Real world performance/speed difference between 3ghz and 3.2

    Hello,
    I have a 2008 Harpertown (not Clovertown) 3.0GHz Mac Pro. I wanted to know if it's feasible to upgrade the CPUs to the 3.2GHz ones? Also, what is the REAL WORLD speed/performance difference between the 3.0 and the 3.2? I am so stressed out over this that I really need to have an answer.
    Not that I am going to buy the 3.2 processors; I just want to see what I am missing, in terms of overall percentage, between my Mac Pro and a 3.2GHz Mac Pro from 2008.
    Thank you,

    You realize that by now you can probably guess what some of our answers might be?
    Real world... well, in the real world you drive to work stuck in traffic most of the time, too.
    Spend your money on a couple new solid state drives.
    You want this for intellectual curiosity, so look at your Geekbench versus others.

  • WebLogic-Oracle max transactions

    Hi,
    I have to design a system for process ordering in my company, in which I am planning to use WebLogic. I have a big concern about database performance.
    The fact I face is that the system concentrates its workload during half an hour each day; during this time the database has to perform 77 tps (both reads and inserts). I have enough experience with WebLogic to know that the pages served and the logical transaction throughput can be handled by the systems we have, BUT I don't think we'll be able to push 77 tps to Oracle using JDBC...
    I have thought about putting Tuxedo in the middle to "enqueue" the database work... but I am afraid we'll need a pretty big machine.
    Another possibility could be using JTA... but the problem is in the database, not in WebLogic's capacity to manage queues; it will be in Oracle eating rows etc.
    Another way could be using an object-relational mapping tool like Versant's enJin... but I don't have any experience with how it performs, or works.
    And... that's all I have. Can anybody give me hints???
    THANKS a lot in advance
    Pedro

    Hi Pedro,
    Rob gave you as precise an answer as possible in this situation. The only way to know how an application scales is to run adequate load tests; otherwise you'll just be guessing. For example, for the same hardware configuration you may get completely different results for different data - you may be inserting 20-byte records or 2MB records, you may have a single client or a bunch of clients concurrently hitting the server, etc. There are too many variables, and this is what a good load test should take care of.
    Regards,
    Slava Imeshev
    "Pedro Trujillano" <[email protected]> wrote in message
    news:[email protected]...
    >
    Well, of course we'll do such testing before writing the app.
    The question was more open, in the sense that, given a highly transactional environment in which around 50% of those txs are going to be inserts, will a Sun 4500 (to give more data) with 8 CPUs eat them?
    Will a product like Versant's enJin help? (or TopLink)
    Are there any other ways to do it?
    Thanks again
    Pedro
    Rob Woollen <[email protected]> wrote:
    I would suggest starting by writing a simple JDBC prototype that runs the reads/inserts and measures the performance. You should be able to tune this and get a good idea of your real configuration needs.
    It's hard for anyone to tell you whether 77 tps against Oracle is reasonable or not, because it totally depends on what your transaction is doing, your hardware, etc.
    -- Rob
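    Along the lines of Rob's suggestion, a bare-bones JDBC prototype for measuring insert throughput might look like the sketch below. The table name, column layout, connection URL and credentials are invented for illustration (and the Oracle JDBC driver must be on the classpath); point it at a representative schema and transaction mix before drawing any conclusions about the 77 tps target.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class TpsProbe {
        public static void main(String[] args) throws Exception {
            // Illustrative connection details - substitute your own host, SID and credentials
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";
            int transactions = 5000;

            try (Connection con = DriverManager.getConnection(url, "scott", "tiger")) {
                con.setAutoCommit(false); // commit per logical transaction, like the real application
                String sql = "INSERT INTO order_events (order_id, event_type, payload) VALUES (?, ?, ?)";
                long start = System.currentTimeMillis();

                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < transactions; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "CREATED");
                        ps.setString(3, "sample payload of realistic size");
                        ps.executeUpdate();
                        con.commit(); // one commit per transaction, so redo logging is included in the cost
                    }
                }

                long elapsedMs = System.currentTimeMillis() - start;
                double tps = transactions / (elapsedMs / 1000.0);
                System.out.printf("%d transactions in %d ms = %.1f tps%n", transactions, elapsedMs, tps);
            }
        }
    }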

  • Anyone have some real-world experience with clustering and RMI?

    We are getting ready to do some stress testing on a clustered WLS RMI environment. Does anyone have any real-world experience with this type of environment? I'm looking for information related to performance (transactions per second), how it scales as you add new instances (does it get to a point where another instance actually degrades performance?), and how easy it is to administer the cluster.
    Any input would be greatly appreciated,
    thanks,
    Edwin

    Edwin:
    You might also want to refer to my earlier posting in this newsgroup re: setting an optimal value for executeThreadCount and percentSocketReaders on the client side in a cluster setup. Look for the subject "variable performance in a cluster".
    Srikant.

  • Making Effective Use of the Hybrid Cloud: Real-World Examples

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, and it was clear that NetApp's approach to hybrid cloud and Data Fabric resonated with the crowd. NetApp solutions such as NetApp Private Storage for Cloud are solving real customer problems.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that allows you to move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    Check out the following blogs for more perspectives:
    Microsoft Ignite Sparks More Innovation from NetApp
    ASR Now Supports NetApp Private Storage for Microsoft Azure
    Four Ways Disaster Recovery is Simplified with Storage Management Standards
    Introducing OnCommand Shift
    SHIFT VMs between Hypervisors
    Infront Consulting + NetApp = Success
    Richard Treadway
    Senior Director of Cloud Marketing, NetApp
    Tom Shields
    Senior Manager, Cloud Service Provider Solution Marketing, NetApp
    Enterprises are increasingly turning to cloud to drive agility and closely align IT resources to business needs. New or short-term projects and unexpected spikes in demand can be satisfied quickly and elastically with cloud resources, spurring more creativity and productivity while reducing the waste associated with over- or under-provisioning.
    Figure 1) Cloud lets you closely align resources to demand.
    Source: NetApp, 2015
    While the benefits are attractive for many workloads, customer input suggests that even more can be achieved by moving beyond cloud silos and better managing data across cloud and on-premises infrastructure, with the ability to move data between clouds as needs and prices change. Hybrid cloud models are emerging where data can flow fluidly to the right location at the right time to optimize business outcomes while providing enhanced control and stewardship.
    These models fall into two general categories based on data location. In the first, data moves as needed between on-premises data centers and the cloud. In the second, data is located strategically near, but not in, the cloud.
    Let's look at what some customers are doing with hybrid cloud in the real world, their goals, and the outcomes.
    Data in the Cloud
    At NetApp, we see a variety of hybrid cloud deployments sharing data between on-premises data centers and the cloud, providing greater control and flexibility. These deployments utilize both cloud service providers (CSPs) and hyperscale public clouds such as Amazon Web Services (AWS).
    Use Case 1: BlackLine Partners with Verizon for Software-as-a-Service Colocation and Integrated Disaster Recovery in the Cloud
    For financial services company BlackLine, availability, security, and compliance with financial standards is paramount. But with the company growing at 50% per year, and periodic throughput and capacity bursts of up to 20 times baseline, the company knew it couldn't sustain its business model with on-premises IT alone.
    Stringent requirements often lead to innovation. BlackLine deployed its private cloud infrastructure at a Verizon colocation facility. The Verizon location gives them a data center that is purpose-built for security and compliance. It enables the company to retain full control over sensitive data while delivering the network speed and reliability it needs. The colocation facility gives BlackLine access to Verizon cloud services with maximum bandwidth and minimum latency. The company currently uses Verizon Cloud for disaster recovery and backup. Verizon cloud services are built on NetApp® technology, so they work seamlessly with BlackLine's existing NetApp storage.
    To learn more about BlackLine's hybrid cloud deployment, read the executive summary and technical case study, or watch this customer video.
    Use Case 2: Private, Nonprofit University Eliminates Tape with Cloud Integrated Storage
    A private university was just beginning its cloud initiative and wanted to eliminate tape—and offsite tape storage. The university had been using Data Domain as a backup target in its environment, but capacity and expense had become a significant issue, and it didn't provide a backup-to-cloud option.
    The director of Backup turned to a NetApp SteelStore cloud-integrated storage appliance to address the university's needs. A proof of concept showed that SteelStore™ was perfect. The on-site appliance has built-in disk capacity to store the most recent backups so that the majority of restores still happen locally. Data is also replicated to AWS, providing cheap and deep storage for long-term retention. SteelStore features deduplication, compression, and encryption, so it efficiently uses both storage capacity (both in the appliance and in the cloud) and network bandwidth. Encryption keys are managed on-premises, ensuring that data in the cloud is secure.
    The university is already adding a second SteelStore appliance to support another location, and—recognizing which way the wind is blowing—the director of Backup has become the director of Backup and Cloud.
    Use Case 3: Consumer Finance Company Chooses Cloud ONTAP to Move Data Back On-Premises
    A leading provider of online payment services needed a way to move data generated by customer applications running in AWS to its on-premises data warehouse. NetApp Cloud ONTAP® running in AWS proved to be the least expensive way to accomplish this.
    Cloud ONTAP provides the full suite of NetApp enterprise data management tools for use with Amazon Elastic Block Storage, including storage efficiency, replication, and integrated data protection. Cloud ONTAP makes it simple to efficiently replicate the data from AWS to NetApp FAS storage in the company's own data centers. The company can now use existing extract, transform and load (ETL) tools for its data warehouse and run analytics on data generated in AWS.
    Regular replication not only facilitates analytics, it also ensures that a copy of important data is stored on-premises, protecting data from possible cloud outages. Read the success story to learn more.
    Data Near the Cloud
    For many organizations, deploying data near the hyperscale public cloud is a great choice because they can retain physical control of their data while taking advantage of elastic cloud compute resources on an as-needed basis. This hybrid cloud architecture can deliver better IOPS performance than native public cloud storage services, enterprise-class data management, and flexible access to multiple public cloud providers without moving data. Read the recent white paper from the Enterprise Strategy Group, “NetApp Multi-cloud Private Storage: Take Charge of Your Cloud Data,” to learn more about this approach.
    Use Case 1: Municipality Opts for Hybrid Cloud with NetApp Private Storage for AWS
    The IT budgets of many local governments are stretched tight, making it difficult to keep up with the growing expectations of citizens. One small municipality found itself in this exact situation, with aging infrastructure and a data center that not only was nearing capacity, but was also located in a flood plain.
    Rather than continue to invest in its own data center infrastructure, the municipality chose a hybrid cloud using NetApp Private Storage (NPS) for AWS. Because NPS stores personal, identifiable information and data that's subject to strict privacy laws, the municipality needed to retain control of its data. NPS does just that, while opening the door to better citizen services, improving availability and data protection, and saving $250,000 in taxpayer dollars. Read the success story to find out more.
    Use Case 2: IT Consulting Firm Expands Business Model with NetApp Private Storage for Azure
    A Japanese IT consulting firm specializing in SAP recognized the hybrid cloud as a way to expand its service offerings and grow revenue. By choosing NetApp Private Storage for Microsoft Azure, the firm can now offer a cloud service with greater flexibility and control over data versus services that store data in the cloud.
    The new service is being rolled out first to support the development work of the firm's internal systems integration engineering teams, and will later provide SAP development and testing, and disaster recovery services for mid-market customers in financial services, retail, and pharmaceutical industries.
    Use Case 3: Financial Services Leader Partners with NetApp for Major Cloud Initiative
    In the heavily regulated financial services industry, the journey to cloud must be orchestrated to address security, data privacy, and compliance. A leading Australian company recognized that cloud would enable new business opportunities and convert capital expenditures to monthly operating costs. However, with nine million customers, the company must know exactly where its data is stored. Using native cloud storage is not an option for certain data, and regulations require that the company maintain a tertiary copy of data and retain the ability to restore data under any circumstances. The company also needed to vacate one of its disaster-recovery data centers by the end of 2014.
    To address these requirements, the company opted for NetApp Private Storage for Cloud. The firm placed NetApp storage systems in two separate locations: an Equinix cloud access facility and a Global Switch colocation facility both located in Sydney. This satisfies the requirement for three copies of critical data and allows them to take advantage of AWS EC2 compute instances as needed, with the option to use Microsoft Azure or IBM SoftLayer as an alternative to AWS without migrating data. For performance, the company extended its corporate network to the two facilities.
    The firm vacated the data center on schedule, a multimillion-dollar cost avoidance. Cloud services are being rolled out in three phases. In the first phase, NPS will provide disaster recovery for the company's 12,000 virtual desktops. In phase two, NPS will provide disaster recovery for enterprise-wide applications. In the final phase, the company will move all enterprise applications to NPS and AWS. NPS gives the company a proven methodology for moving production workloads to the cloud, enabling it to offer new services faster. Because the on-premises storage is the same as the cloud storage, making application architecture changes will also be faster and easier than it would be with other options. Read the success story to learn more.
    NetApp on NetApp: nCloud
    When NetApp IT needed to provide cloud services to its internal customers, the team naturally turned to NetApp hybrid cloud solutions, with a Data Fabric joining the pieces. The result is nCloud, a self-service portal that gives NetApp employees fast access to hybrid cloud resources. nCloud is architected using NetApp Private Storage for AWS, FlexPod®, clustered Data ONTAP and other NetApp technologies. NetApp IT has documented details of its efforts to help other companies on the path to hybrid cloud. Check out the following links to learn more:
    Hybrid Cloud: Changing How We Deliver IT Services [blog and video]
    NetApp IT Approach to NetApp Private Storage and Amazon Web Services in Enterprise IT Environment [white paper]
    NetApp Reaches New Heights with Cloud [infographic]
    Cloud Decision Framework [slideshare]
    Hybrid Cloud Decision Framework [infographic]
    See other NetApp on NetApp resources.
    Data Fabric: NetApp Services for Hybrid Cloud
    As the examples in this article demonstrate, NetApp is developing solutions to help organizations of all sizes move beyond cloud silos and unlock the power of hybrid cloud. A Data Fabric enabled by NetApp helps you more easily move and manage data in and near the cloud; it's the common thread that makes the use cases in this article possible. Read Realize the Full Potential of Cloud with the Data Fabric to learn more about the Data Fabric and the NetApp technologies that make it possible.
    Richard Treadway is responsible for NetApp Hybrid Cloud solutions including SteelStore, Cloud ONTAP, NetApp Private Storage, StorageGRID Webscale, and OnCommand Insight. He has held executive roles in marketing and engineering at KnowNow, AvantGo, and BEA Systems, where he led efforts in developing the BEA WebLogic Portal.
    Tom Shields leads the Cloud Service Provider Solution Marketing group at NetApp, working with alliance partners and open source communities to design integrated solution stacks for CSPs. Tom designed and launched the marketing elements of the storage industry's first Cloud Service Provider Partner Program—growing it to 275 partners with a portfolio of more than 400 NetApp-based services.

    Dave:
    "David Scarani" <[email protected]> wrote in message
    news:3ecfc046$[email protected]..
    >
    I was looking for some real-world "Best Practices" for deploying J2EE applications into a production WebLogic environment.
    We are new at deploying applications to J2EE application servers and are currently debating 2 methods.
    1) Store all configuration (application as well as domain configuration) in properties files and use Ant to rebuild the domain every time the application is deployed.
    I am just a WLS engineer, not a customer, so my opinions have in some regards little relative weight. However, I think you'll get more mileage out of creating your config.xml once, checking it into source control, and versioning it. I would imagine that application changes are more frequent than server/domain configuration changes, so it seems a little heavyweight to regenerate the entire configuration every time an application is deployed/redeployed. Either way, you should check out the wlconfig Ant task.
    Cheers
    mbg
    2) Have a production domain built one time, configured as required and always up and available, then use Ant to deploy only the J2EE application into the existing, running production domain.
    I would be interested in hearing how people are doing this in their production environments, and any pros and cons of one way over the other.
    Thanks.
    Dave Scarani

  • RAID test on 8-core with real world tasks gives 9% gain?

    Here are my results from testing the software RAID setup on my new (July 2009) Mac Pro. As you will see, although my 8-core (Octo) tested twice as fast as my new (March 2009) MacBook 2.4 GHz, the software RAID setup only gave me a 9% increase at best.
    Specs:
    Mac Pro 2x 2.26 GHz Quad-core Intel Xeon, 8 GB 1066 MHz DDR3, 4x 1TB 7200 Apple Drives.
    MacBook 2.4 GHz Intel Core 2 Duo, 4 GB 1067 MHz DDR3
    Both running OS X 10.5.7
    Canon Vixia HG20 HD video camera shooting in 1440 x 1080 resolution at “XP+” AVCHD format, 16:9 (wonderful camera)
    The tests. (These are close to my real world “work flow” jobs that I would have to wait on when using my G5.)
    Test A: import 5:00 of video into iMovie at 960x540 with thumbnails
    Test B: render and export with Sepia applied to MPEG-4 at 960x540 (a 140 MB file) in iMovie
    Test C: in QuickTime resize this MPEG-4 file to iPod size .m4v at 640x360 resolution
    Results:
    Control: MacBook as shipped
    Test A: 4:16 (four minutes, sixteen seconds)
    Test B: 13:28
    Test C: 4:21
    Control: Mac Pro as shipped (no RAID)
    Test A: 1:50
    Test B: 7:14
    Test C: 2:22
    Mac Pro config 1
    RAID 0 (no RAID on the boot drive, three 1TB drives striped)
    Test A: 1:44
    Test B: 7:02
    Test C: 2:23
    Mac Pro config 2
    RAID 10 (drives 1 and 2 mirrored, drives 3 and 4 mirrored, then both mirrors striped)
    Test A: 1:40
    Test B: 7:09
    Test C: 2:23
    My question: Why am I not seeing an increase in speed on these tasks? Any ideas?
    David
    Notes:
    I took this to the Apple store and they were expecting 30 to 50 per cent increase with the software RAID. They don’t know why I didn’t see it on my tests.
    I am using iMovie and QuickTime because I just got the Adobe CS4 and ran out of cash. And it is fine for my live music videos. Soon I will get Final Cut Studio.
    I set up the RAID with Disk Utility without trouble. (It crashed once but reopened and set up just fine.) If I check back it shows the RAID set up working.
    Activity Monitor reported “disk activity” peaks at about 8 MB/sec on both QuickTime and iMovie tasks. The CPU number (percent?) on QT was 470 (5 cores involved?) and iMovie was 294 (3 cores involved?).
    Console reported the same error for iMovie and QT:
    7/27/09 11:05:35 AM iMovie[1715] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHD Audio, 262): Symbol not found: _keymgr_get_per_threaddata
    Referenced from: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio
    Expected in: /usr/lib/libSystem.B.dylib

    The memory controllers, one for each CPU, mean that you need at least 2 x 2GB on each bank. If that is how Apple set it up, that is minimal, and the only thing I would do now with RAM is add another 2 x 2GB. That's all. And that gets you into triple-channel bandwidth.
    It could be the make and model of your hard drives. If they are Seagate, then more info would help. And not all drives are equal when it comes to RAID.
    Are you new to RAID, or is this something you've been doing? It seems you had enough drives to build 0+1 and do some testing. Though it's not encouraging, even if it works now, that it didn't take the first time.
    Drives - and RAIDs - improve over the first week or two, which, before committing good data to them, is the best time to torture them: run them ragged, use SpeedTools to break them in, loosen up the heads, scan for media errors, and run ZoneBench (and with 1TB, partition each drive into 1/4ths).
    If drive A is not identical to B, then they may behave even worse in an array. No two drives are purely identical, some vary more than others, and some are best used in hardware RAID controller environments.
    Memory: buying in groups of three is okay. But then adding 4 x 4GB? So bank A with 4 x 2GB and bank B with twice as much memory? On a Mac Pro, with 4 DIMMs on a bank you get about 70% of the bandwidth; it drops down from tri-channel to dual-channel mode.
    I studied how to build or put together a PC for over six months, but then learned more in the month (or two) after I bought all the parts, found what didn't work, learned my own shortcomings, and ended up building TWO - one for testing, the other as a backup system. And three motherboards (the best 'rated' one also had more trouble with BIOS and fans, the cheap one was great, and the Intel board that reviewers didn't seem to "grok" has actually been the best and easiest to use and update the BIOS on). Hands-on wins 3:1 versus trying to learn by reading, for me; hands-on is what I need to learn. Or take a car or sailboat out for a drive or a spin and see how it fares in rough weather.
    I buy an Apple system bare bones, stock, or less, then do all the upgrades on my own, when I can afford to, gradually over months or a year.
    Each CPU needs to be fed. So they each need at least 3 x 1GB RAM. And they need raw data fed to RAM and CPU from the disk drives. And your mix of programs will each behave differently, which is why you see Barefeats test with Pro Apps, CINEBENCH, and other apps or tools.
    What did you read or do in the past that led you to think you need RAID setup, and for how it would affect performance?
    Photoshop Guides to Performance:
    http://homepage.mac.com/boots911/.Public/PhotoshopAccelerationBasics2.4W.pdf
    http://kb2.adobe.com/cps/401/kb401089.html
    http://www.macgurus.com/guides/storageaccelguide.php
    4-core vs 8-core
    http://www.barefeats.com/nehal08.html
    http://www.barefeats.com/nehal03.html

  • New Enterprise Manager Book on Advanced EM Techniques for the Real World

    Dear Friends,
    I am pleased to say my first EM book can be ordered now.
    Oracle Enterprise Manager Grid Control: Advanced Techniques for the Real World
    [http://www.rampant-books.com/book_1001_advanced_techniques_oem_grid_control.htm]
    Please let your colleagues, friends and clients know – it is the first book in the world to include EM 11g Grid Control. It is a great way for people to understand the capabilities of EM.
    Oracle’s Enterprise Manager Grid Control is recognized as the IT Industry’s leading Oracle database administration and management tool. It is unrivalled in its ability to monitor, manage, maintain and report on entire enterprise grids that comprise hundreds (if not thousands) of Oracle databases and servers following an approach that is consistent and repeatable.
    However, Enterprise Manager Grid Control may seem daunting even to the most advanced Oracle Administrator. The problem is you know about the power of Enterprise Manager but how do you unleash that power amongst what initially appears to be a maze of GUI-based screens that feature a myriad of links to reports and management tasks that in turn lead you to even more reports and management tasks?
    This book shows you how to unleash that power.
    Based on the author's considerable practical Oracle database and Enterprise Manager Grid Control experience, you will learn through illustrated examples how to create and schedule RMAN backups, generate Data Guard standbys, clone databases and Oracle Homes, and patch databases across hundreds or thousands of databases. You will learn how to unlock the power of the Enterprise Manager Grid Control Packs, Plug-Ins and Connectors to simplify your database administration across your company's database network, as well as the management and monitoring of important Service Level Agreements (SLAs), and the nuances of all-important real-time change control using Enterprise Manager.
    There are other books on the market that describe how to install and configure Enterprise Manager but until now they haven’t explained using a simple and illustrated approach how to get the most out of your Enterprise Manager. This book does just that.
    Covers the NEW Enterprise Manager Grid Control 11g.
    Regards,
    Porus.


  • What is Oracle 10g Release 2 Real Application Cluster?

    Hello Guys,
    I have been working with Oracle Database 9i for the last 2 years, just using the simple features of Oracle. Now our data has grown and the company wants to upgrade the database version and also wants to perform load balancing at Oracle's end.
    I have heard a lot about Oracle RAC, but I am not sure what Oracle RAC actually is or how it is installed.
    What is the difference between an Oracle 10g Release 2 Enterprise installation and an Oracle 10g Release 2 Real Application Clusters installation? Are these 2 different pieces of software available on different CDs, or is RAC only a different way to install the Oracle 10g database in RAC mode?
    Please clarify and explain. Sorry, this is a very beginner-level question, but I want to learn it and implement it.
    Regards,
    Imran Baig

    > What is the difference between Oracle 10g Release 2 Enterprise Installation and Oracle 10g Release 2 Real Application Cluster installation. Are these 2...
    Enterprise Edition is the database management software provided by Oracle.
    RAC is an extension to the Standard and Enterprise Editions that permits the same database to be managed from two or more computers. (With Enterprise Edition, it is an extra-cost option. With Standard Edition, it is included in the price, since SE has other limitations.)
    Conceptually, RAC is simple - IF you understand the difference between Instance and Database. RAC = 1 database managed by many instances
    Physically, RAC requires a healthy investment in infrastructure - the infrastructure and the RAC software addresses the question: how do you get several computers able to read and write to the same disk and the same block without data corruption?
    If you are serious about RAC, take a course. If you are in the US, consider http://www.psoug.org as an 'Intro to RAC' course provider.

  • Infocubes - TRANSACTIONAL - STANDARD - REAL TIME

    Some clarity has to be brought to these three types of CUBES.
    I am using SALES ANALYSIS & SALES BUDGET here as an example.
    Assumptions & Scenario:
    1. The Sales Budget is prepared once a year, Division / Group / Category / Material wise (maybe customer-wise, but that does not really matter). Once the budget is approved, it more or less remains static, with very few changes. Budgeting is done for the quantity to be sold as well as an approximate unit cost.
    2. Sales Analysis captures the actual sales in the same order, Division / Group / Category / Material wise, but now the quantity and the unit price are actual values.
    3. Assume that the accounting period for this organization is January to December of a calendar year, and the Sales Budget is finalized and approved in November for the next accounting year. For example, the sales budget for Jan 2010 - Dec 2010 is finalized and approved in November 2009.
    4. Review, fine tuning and updates to the Sales Budget are carried out once every three months for the next quarter - in this case in mid-March, mid-June and mid-September.
    BI Requirement:
    1. Actual sales figures are always compared with the Sales Budget and also compared with last year's actuals.
    Solution Scenario I:
    Maintain Sales Budget figures in a separate INFOCUBE, the FACTS being quantity and value, the Dimensions being Division / Group / Category / Material. (Since there is very little update to this cube, SHOULD WE CALL IT STANDARD or REALTIME or TRANSACTIONAL?)
    Maintain Actual sales in another INFOCUBE, again with FACTS being quantity and value, Dimensions being Division / Group / Category / Material. Because of the data volume, we want to update this cube on a nightly basis, using DELTA. Should we call this STANDARD or TRANSACTIONAL?
    All the reports and queries will JOIN (don't know how) both cubes and display BUDGET and ACTUALS in adjacent columns.
    Solution Scenario II:
    Since the BUDGET figures and ACTUAL figures are always viewed together, create a single INFOCUBE for this department.
    FACTS being BudgetQuantity, BudgetAmount, ActualQuantity and ActualAmount (include, if you want, some computed fields to show the percentage of variance, which can also be done at the query level). DIMENSIONS remain the same as in SCENARIO I.
    You will note immediately that TWO OF THE FACTS are almost STATIC while two of the FACTS are updated every day.
    What would you call this type of CUBE? TRANSACTIONAL / STANDARD / REALTIME?
    The difference between STANDARD and REALTIME cubes is the way the INDICES are maintained. (Hope my understanding is correct: one is designed for better RETRIEVAL performance and the other for UPDATE.)
    In all the cases (Transactional / Standard / Real time) the data is moved from R/3 to the PSA and then to the INFOCUBE, so why use the term REAL TIME? And cubes are always built based on business transactions, so why use the term TRANSACTIONAL?
    The term "Real Time" somehow conveys the idea that the CUBE gets updated when a transaction is committed in the R/3 database. The Help documentation is also ambiguous. If the data is moved from R/3 to the DataSource and then to the PSA and finally reaches the Infocube, then what is REAL TIME?
    I think it would benefit everyone in the SAP BI domain if these things were explained in plain English.
    I apologize for writing such a long QUESTION, but I hope it will help many of you.
    Thanks,
    Gold


  • How do I send Firefox information about how it behaves in the real world? (Telemetry)

    It's helpful for Mozilla's engineers to be able to measure how Firefox behaves in the real world. The Telemetry feature provides this capability by sending performance and usage info to us. As you use Firefox, Telemetry measures and collects non-personal information, such as memory consumption, responsiveness timing and feature usage. It then sends this information to Mozilla on a daily basis and we use it to make Firefox better for you.
    Telemetry is an opt-in feature.

    Hi Terrancecallins,
    I understand that you would like to opt-in for the Telemetry feature in your Firefox browser. Thank you for your interest in helping to make Firefox better!
    Here is the help article explaining how to turn on this feature:
    * [[Send performance data to Mozilla to help improve Firefox]]
    Please let us know if you have any other questions.
    Thanks,
    - Ralph

  • Real World X99 5960X 4.4GHz, 36MP files & Lightroom 5 - ~1.09sec JPG Export

    Hi All,
    I've been trawling the net for the longest time since the X99 was launched. There weren't really any real-world LR performance results I could find, so I am sharing this info with the community here.
    I am a professional photographer, formerly trained as a systems analyst, so I am keen on PC/Mac hardware etc. I was torn between a new Mac Pro or setting up a new PC. I did some research and ended up with a PC. Here are the specs for the new PC: Intel 5960X 4.4GHz (8-core), Rampage V, MSI 980, Corsair H110, Corsair AX1200, Corsair Carbide 540, running Intel SSDs (old ones). The old PC was an i7 960 3.9GHz (4-core).
    I kept my old SSDs (they were still working) and transplanted them over to the new system. Hence, SSD speed was the constant in the experiment.
    36MP file export to JPG using the 3-process stacking method: ~1.09 sec (about 100-160% faster than the 3.9GHz 4-core)
    1:1 preview generation: ~1-2 sec (maybe about 10-20% faster)
    Browsing 1:1 previews: <0.2 sec (about 100% faster)
    Other stuff like sliders etc.: about the same speed as the old 3.9GHz machine.
    The CPU was maxed out during the file export; all CPU threads were firing. However, the 1:1 preview generation was, as expected, likely not optimised for multithreading. Browsing was surprisingly fast, likely due to the X99 architecture. That said, I was not very disappointed with the results, as I had read that LR isn't heavily multi-threaded (I saw the CPU % usage on my old PC), but found it interesting that most operations weren't affected much.
    I hope this short review will be helpful to folks in their search for 4/6/8-core machines.
    Best
    Wes

    Yep, useful and informative, Wes - thanks.

  • Any real-world e-commerce application using HTMLDB?

    Hi,
    Any real-world e-commerce application using HTMLDB?
    If yes, can you please provide the web links?
    Thanks!
    Steve

    That's why I said "depends on your definition"
    According to Wikipedia, the definition of e-commerce is:
    "Electronic commerce, e-commerce or ecommerce consists primarily of the distributing, buying, selling, marketing, and servicing of products or services over electronic systems such as the Internet and other computer networks."
    So nothing mentioned there about the size/number of transactions ;)
    So, is your question "Has anybody built a site in HTMLDB that handles the number of transactions that Amazon handles?"
    I'd be surprised if the answer were Yes, however HTMLDB (depending on your architecture) is capable of scaling to a huge number of users.
    Do you have a particular application/project in mind, or is it just a hypothetical question?
