Hardware compare in CVI

Dear all,
I want to use the hardware compare function in CVI. I have checked the CVI examples; their pattern files and channels are separate (generation & acquisition). I found some information in the Developer Zone (http://zone.ni.com/devzone/cda/tut/p/id/7281), and I want to do the same in CVI, because its pattern combines generation & acquisition data (figure 3). Can anyone tell me how to do that, or give me a suggestion? Thanks!

Similar Messages

  • HSDIO conditionally fetch hardware compare sample errors (script trigger to flag whether or not to wait for software trigger)

    I am moderately new to LabVIEW and definitely new to the HSDIO platform, so my apologies if this is either impossible or silly!
    I am working on a system that consists of multiple PXI-6548 modules that are synchronized using T-CLK and I am using hardware compare.  The issue I have is that I need to be able to capture ALL the failing sample error locations from the hardware compare fetch VI... By ALL I mean potentially many, many more fails than the 4094 sample error depth present on the modules.
    My strategy has been to break up a large waveform into several subsets that are no larger than 4094 samples (to guarantee that I can't overflow the error FIFO) and then fetch the errors for each block.  After the fetch is complete I send a software reference trigger that is subsequently exported to a scriptTrigger that tells the hardware it is OK to proceed (I do this because my fetch routine is in a while loop and LabVIEW says that the "repeated capability has not yet been defined" if I try to use a software script trigger in a loop).
    This works fine, but it is also conceivable that I could have 0 errors in 4094 samples.  In such a case what I would like to do is to skip the fetching of the hardware compare errors (since there aren't any) and immediately begin the generation of the next block of the waveform.  That is, skip the time where I have to wait for a software trigger.
    I tried to do this by exporting the sample error event to a PFI and looping that PFI back in to generate a script trigger.  What I thought would happen was that the script trigger would get asserted (and stay asserted) if there was ever a sample error in a block, then I could clear the script trigger in my script.  However, in debug I ended up exporting this script trigger back out again and saw that it was only lasting for a few hundred nanoseconds (in a case where there was only 1 injected sample error)... The sample error event shows up as a 1-sample wide pulse.
    So, my question is this:  is there a way to set a flag to indicate that at least one sample error occurred in a given block  that will persist until I clear it in my script?  What I want to do is below...
    generate wfmA subset (0, 4094)
    if scriptTrigger1
      clear scriptTrigger1
      wait until scriptTrigger0
    end 
    clear scriptTrigger0
    generate wfmA subset (4094, 4094)
    I want scriptTrigger1 to be asserted only if there was a sample error in any block of 4094 and it needs to stay asserted until it is cleared in the script.  scriptTrigger0 is the software trigger that will be sent only if a fetch is performed.  Again, the goal being that if there were no sample errors in a block, the waiting for scriptTrigger0 will not occur.
    I am probably going about it all wrong (obviously since it doesn't work), so any help would be much appreciated!

    Please disregard most of my previous post... after some more debug work today I have been able to achieve the desired effect at slower frequencies.  I did straighten out my script too:
    generate wfmA
    if scriptTrigger1
      clear scriptTrigger0
      wait until scriptTrigger0
    end if
    generate wfmA
    scriptTrigger1 = sample error event flag
    scriptTrigger0 = software trigger (finished fetching error backlog in SW)
    However, I am still having a related issue.
    I am exporting the Sample Error Event to a PFI line, looping that back in on another PFI line, and having the incoming version of the Sample Error Event generate a script trigger.  My stimulus has a single injected sample error for debug. For additional debug I am exporting the script trigger to yet another PFI; I have the sample error event PFI and the script trigger PFI hooked up to a scope.
    If I run the sample clock rate less than ~133MHz everything works... I can see the sample error event pulse high for one clock period and the script trigger stays around until it is consumed by my script's if statement.
    Once I go faster than that, I see that the script trigger only occasionally catches the sample error event.  The faster I go, the less often it is caught.  If I widen the error to be 2 samples wide, then it works every time, even at 200MHz.
    I have tried PFI0-3 and the PXI lines as the output terminal for the sample error event and they all have the same result (this implies the load from the scope isn't the cause).
    I don't know what else to try.  I can't oversample my waveform because I need to run a true 200MHz. I don't see anything that would give me any other control over the sample error event in terms of its pulse width, or any way to export it directly to a script trigger instead of how I'm doing it.
    Any other ideas?

  • How can I use the hardware compare feature of the 6551 card to trigger scripts

    I can dynamically and seamlessly generate different waveforms by triggering different scriptTriggers that drive one or more of the 4 PFI lines. However, I need to evaluate a channel at a specific location or locations set by a script marker in real time and generate a different waveform stored in on-board memory based on the result of the evaluation. I have attempted to use the hardware compare feature without any success. I am trying to dynamically respond to an I2C device based on the ACK or NACK response of the device under test. Can the 6551 card accomplish this? Has anyone successfully tested an I2C or SMBus communication stream with the 6551 card?

    Hello,
    I can understand why hardware compare did not work out for your application.  Hardware compare uses a generated signal to make a digital pattern and then waits a few clock cycles before acquiring the signal that needs to be compared to that pattern. 
    I would look into using a script trigger to evaluate the channel value, where the script structure controls when it is evaluated.  Please note that script triggers need to be cleared after they are detected before they can be reasserted.  Some scripts actually clear the script trigger and then use a wait-until-script-trigger structure.  Please refer to the NI-HSDIO Help documentation on Common Scripting Use Cases. 
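    The clear-then-wait idiom mentioned above looks roughly like this in NI-HSDIO script syntax (a sketch; myScript, wfmA, and wfmB are placeholder names):

```
script myScript
  generate wfmA
  clear scriptTrigger0
  wait until scriptTrigger0
  generate wfmB
end script
```

    Clearing scriptTrigger0 immediately before the wait ensures that a trigger detected earlier cannot satisfy the wait by mistake.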
    Please provide us with further details about your application.  The more information, the better!  I was not too clear on what you were wanting to evaluate and where it is coming from. 
    I would also like to mention that National Instruments has the NI USB-8451, which is capable of I2C communication.
    NI USB 8451
    Samantha
    National Instruments
    Applications Engineer

  • Hardware compare issue - PXI-6552

    Hi!
    I'm using a PXI-6552 DIO module to test an SDRAM chip.
    I'm not sure how hardware compare operates. I'm using a sequence of 0s and 1s to drive the tester during the write command, and H/L as the expected response for the read command.
    Does the H/L response data put the driver into high impedance?
    The problem is that even when I skip the write command, I still retrieve (fetch) the response data correctly.
    Thank you!
    MM

    Thank you for the answer!
    My configuration is a little bit different because I use the D0..D7 lines as bidirectional lines.
    I noticed that the system works while the clock frequency is below 25 MHz, but I don't know why.
    Has anyone tried to use this module above 25 MHz?
    Thanks a lot!
    MM

  • Can I do hardware compare using HSDIO when the generation and acquisition clock rates are different?

    Hi there
    My application should generate a digital stream at clock rate A, with a bus width (number of bits) varying from 1 to 4 bits.
    The DUT will always respond on one bit, but at clock rate B.
    1. Can I use the hardware compare mechanism of the HSDIO boards?
    2. If the answer is yes, is there a sample that can help me get started?
    Thanks in advance
    Gabel Daniel

    Hi,
    One good example to use as a starting point can be found in the NI Example Finder.  Navigate to "Hardware Input and Output" >> "Modular Instruments" >> NI-HSDIO >> Dynamic Acquisition and Generation >> Hardware Compare - Error Locations.vi.  You'll need to explicitly wire in your new rate into the "niHSDIO Configure Sample Clock.vi" on your response task.
    There is also a portion of this code that warns if the stimulus and response waveforms are not the same size.  This is not necessary, and could be deleted from the program. You are allowed to have different size stimulus and response waveforms.
    Jon S
    Applications Engineer
    National Instruments

  • Hardware comparing...

    Hey guys,
    I have a little "lack of information" I'm hoping to get some clarifications on:
    What would the performance difference be between my current Duron 700/256MBRAM and a Dual-Celeron 433/384MB RAM?
    I know a little about SMP, so I already have an idea regarding applications that don't take advantage of SMP... but I'm pretty sure the ones I use would...

    What's up, Spongebob?  I really enjoy your cartoons, so I must preface my reply with a nagging question first.  In the episode "Bubble Buddy", how did that one fish on the beach die from "high tide"?  I don't understand.  I thought you lived underwater to begin with.  Your cartoon is an enigma wrapped inside a puzzle to me.  It's a good thing I have a beer in hand when I watch you on television.  Taking a sip is like hitting the reset switch on my computer case when I start to analyze your underwater environment...
    Anyway, to answer your question, Mr. Spongebob, I'd go with the "dualy" rig.  There were a few motherboards, like the Abit BP6, which supported "dualy" Celerons and SMP without hardware modification.  I had one of those rigs and it worked quite well.  The HPT IDE controller is nice, but I only used 4 IDE devices on it.  Either way, the BP6 was a sweet mobo which you could overclock.  Furthermore, you could always drop in 2 new Celeron CPUs (from eBay at minimal cost, ~$15 for 2) up to what the latest BIOS would support, overclock 'em, and still outperform the Duron 700.  I think the latest BIOS would support 533s.  However, you will need to "uncheck" something in the BIOS so it will not reset your CPU speeds if you overclock past 533.  I forget what that option was.
    My stock (non-OC'd) "dualy" 400's in the BP6 was on par with the P3-600 I had before it.  So, you have a lot of potential with your rig, and most importantly, with very little financial investment on your part, you can "stretch" your dollar even further...

  • Can Someone (with base model 2.3 i5 Mac mini) Confirm/Compare RAM...

    I post this on another forum, but no specifics.
    Can someone please confirm or compare your RAM usage for me.
    With 2 GB, examining the pie chart in Activity Monitor with NO applications in use, I have only about 45% (green) free.
    I am just wondering: why am I left with ONLY less than 1 GB free to use?
    Thank you.
    Could this be the culprit: I have Mail 5 set up for three e-mail accounts, but I closed it out completely -- no light at the bottom... of any app.
    Could the e-mail accounts be used in "Wired memory"? A BIG red slice of the pie-chart. (Wired memory--Information in this memory can't be moved to the hard disk, so it must stay in RAM. The amount of Wired memory depends on the applications you are using.)

    Mine is about the same in terms of hardware.
    Compared to Mountain Lion, it takes much, much longer to run OK from a shutdown. If you just use sleep, it doesn't suffer; if you shut it down, it takes a while to compress the memory and the like, and then it runs at an OK speed.
    I'm not sure if adding memory will help, but maybe.
    Other than that, use sleep, or wait and see if the next update speeds it up after a shutdown.

  • TDMS functions much slower in CVI 2010

    Hello everyone.
    Today I noticed that at least some TDMS functions are much slower in CVI 2010 compared to CVI 2009 SP1 and prior. I have created and attached a simple sample project that creates a TDMS file with about 3000 file-level properties and tries to read it back in afterwards. On all releases prior to 2010, this needs less than 10 seconds. On 2010 it's around an hour, if not more! Unfortunately this is pretty much a show-stopper for me. Any comments?
    Thanks, Marcel 
    Attachments:
    tdmsTimingTest.zip (3 KB)

    Hello Marcel -
    What you've reported is actually a known issue, and is unfortunately considered to be expected behavior.  Let me try to explain:
    There was a relatively large refactoring of the underlying TDMS code in LabWindows/CVI 2010.  This refactoring was intended to more closely align our internal implementation to that of LabVIEW.  As a result of this refactoring, we were able to address some internal issues we had previously been unable to address, as well as more correctly handle the data stored in the TDMS file.  Unfortunately, this refactoring unmasked a performance issue that had always been present when reading a large number of properties one at a time.
    This performance issue was not uncovered for LabWindows/CVI 2010 because we had previously focused our performance testing on reading and writing data to a file, not metadata.  We considered it unlikely that a customer would have more than dozens of properties for any one channel or group or file, and as a result, the performance issues you've reported were overlooked.
    However, we did recently find the performance issues you've reported.  As a result, there will be a handful of undocumented functions for returning all (or a subset of) properties on a channel, group, or file in LabWindows/CVI 2010 SP1.  This will allow for performance in line with what you'd seen in LabWindows/CVI 2009 SP1 and earlier, as long as you are OK with grabbing all the properties at once.  These functions are undocumented because, in general, we don't release new features with service packs.  Also, the functions are a little more difficult to use than normal CVI APIs, so we have not yet determined how or when they will be publicly documented.  When LabWindows/CVI 2010 SP1 releases (later this summer), feel free to reply back to this post or send me a private message, and I'll work with you on the details of calling these undocumented functions.
    Out of curiosity, we'd like to know your use case for creating that many properties.  You're the first customer we've encountered using such a large number of properties, and we'd like to ensure that we are able to satisfy your use case in future versions of the API.
    Thanks for the report, and I'm sorry for any inconvenience this has caused,
    NickB
    National Instruments

  • Return of void functions inconsistency between CVI 2013 and older

    Hello,
    I have discovered an inconsistency using CVI 2013 (SP2 or not) compared to CVI 2010 and CVI 8.5 (the versions I have).
    I have written, by mistake, code in which I'm returning a value from a void function. CVI 2013 does not complain (but should), while other CVI versions complain (and that's OK).
    Here is the code:
    static void pouet(void)
    {
        return;
    }

    static void hop(void)
    {
        return pouet();
    }

    int main(void)
    {
        hop();
        return 0;
    }
    Could this behaviour be fixed for the next CVI update?
    Thanks.
    Frédéric Lochon.

    Well, it's technically not causing wrong behaviour, but it may cause less-than-best performance depending on the compiler's code generation, as the register usually used for function return values might be assigned some garbage value that never gets used.
    Causing an error is likely a bit strict; issuing a warning would be the preferred behaviour. Not generating either a warning or an error is a bit lazy. It would be problematic if you could write this without getting an error:
    static void pouet(void)
    {
        return;
    }

    static int hop(void)
    {
        return pouet();
    }

    int main(void)
    {
        hop();
        return 0;
    }
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • The Verizon "Experience"

    Hello,
    I ordered a phone this afternoon off the Verizon Wireless page and though it was close to the end of the business day, I chose to pick it up at "my store" (mostly because Verizon stores open generally at 10am, WAY too late to do any business on my way to work).  I drove to the store to see if I could get it or at least wait the "about an hour" it takes to prepare my order.
    Incidentally, "about an hour"?  If I walked into a store and asked to buy a phone, they would pick it up off the shelf and hand it to me, charging my card and signing me up for some byzantine contract faster than I could pull out my wallet.  But because I'm taking advantage of your online store -- where I have taken care of ALL THE TROUBLE of dealing with a customer by validating I'm a user, picking my phone, picking my plan options, picking every service option -- I have to wait an extra _hour_?  For what?  Does someone write these down by hand and send them by carrier pigeon?
    Anyway, I arrive at the Verizon store (on Fields Ertel Road in Deerfield, Ohio) and am greeted by a man I'm going to call... Disinterested Employee #1.  This man is in his thirties, overweight, and wearing a football jersey.  I confused him for a customer because I didn't think people went to work dressed so sloppily.  Well, DE#1 asked me why I'd come in; I told him I was there to pick up an order and gave him my name.  I have misgivings about the "kiosk" style service centers, but that's beside the point.  I was the only person in the store, and there were 5 employees walking around or doing something behind one of the desks.  I waited a few minutes.
    At this point Disinterested Employee #2 arrived, who was dressed better (he had a collar) but couldn't bear to look me in the face.  Maybe my shirt was distracting to him, it was blue and had no logos or distinguishing features of any kind.  Perhaps he believed I was a walking Blue Screen of Death and was afraid for his home computer.  I'm not sure.  Anyway, he asked me AGAIN why I was here, and what my name was.  He wandered away to talk with the other 4 employees. 
    Five minutes passed.  DE#2 returned and asked me what kind of phone I ordered. Then he wandered into the stock room.
    DE#2 came back a third time and told me my order had come through not long ago and they were still working on it.  I asked if I could wait until they were finished and he said yes, without question.  He wandered away again.
    Ten minutes later, DE#1 decided to come back around.  He told me, waving his iPad around for emphasis, that "you know, it takes an hour and a half at least for those orders to be processed and we're gonna close, so, so..." then he mugged at me like this was a sitcom.  I PRESUMED he was asking me to leave the store and pick it up tomorrow.  I asked if there was any way to get my order moved to a Verizon store closer to my workplace, due to the aforementioned poor opening hours.  He said no way.  So I left.
    As I got into my car, I got an e-mail.  It was from Verizon.  It was telling me that my order was ready for pick up now and I could come any time.  It was 5:57.  By the time I walked back to the store, they had locked the doors.
    I've been a Verizon customer since 2001, when a blown alternator left me stranded on the highway at 11pm and I had to hitch my way to a payphone. I picked Verizon because the coverage in upstate NY was good. And while I've not really had problems with signal, the last satisfying transaction with this company was when I got the original Droid on launch day in 2009.  Ever since then my experience has not been satisfactory. I wasn't even going to purchase the phone I ordered today, as I had committed myself to letting my contracts run out and leaving.  But with LTE-A and VoLTE coming next year, I thought I might wait it out a little longer to see these new technologies, so I took a chance and extended my contract.
    Or tried to anyway.
    At this point, I'm not even sure Verizon is interested in customers who have been with them as long as I have.  They do nothing to court me or offer me any incentives over somebody who was on AT&T yesterday.  They seem to ACTIVELY resent customers who actually know how to use the phones they have, with a 'certification' process that guarantees Verizon always seems second-string on hardware compared to the other carriers.  There are also their logically conflicting statements saying that people can't have unlocked phones on their network... and then "certifying" Developer Editions of their hardware anyway.
    So tell me, Verizon -- twelve years and $20,000 later -- why shouldn't I leave you behind?  Republic Wireless has the excellent Moto X and they charge as little as $10/month as long as I spend most of my time covered by Wi-Fi signals.  Net10 Wireless somehow uses the Verizon network and offers better pricing.  And AT&T just launched Aio Wireless, who let me use any GSM phone on their network and offer unlimited data for $50/month with some reasonable throttling at 8 Mbps.  Meanwhile, Verizon has launched Viva Movil, an unsettling 'brand' that seems to imply that only Latinos value family.  Oh, and you keep trying to get me to agree to be charged $60 more _per month_ for the same service I've had for the last 4 years.
    This is just depressing.  My grocery store cares more about my experience with its services and stores than Verizon does.
    Meanwhile, I guess I'm going to have to wait until next weekend to get my phone from a store that keeps hours I'm not actually working, since I can no longer change my order to be shipped to my office.

    I agree with your post. However I would just take the time on the weekend to go early to a corporate store and do the whole transaction there.
    Like anyone, especially low-paid staff, they are looking to get out the door at quitting time faster than the Road Runner in the cartoons. Higher-paid professional workers normally would stay to finish their task. I know I always did.
    Having the work done by you online and then transferred to the store of your choice for pickup should work fairly quickly. What takes an hour and a half after submitting online is, I assume, simply doing all the order paperwork, picking and activating the phone, and then having it sitting waiting for you. From what I read, this was not done.
    I don't like using online ordering for my phones. Accessory purchases I don't care about, since they're not a big purchase.
    Always go to the store for your devices. You will be much happier.

  • I don't understand why Symbian has been killed off...

    I recently posted my impressions of the 808 (http://forums.guru3d.com/showthread.php?t=374416) and it has its problems, which means I will not be keeping the phone for very long. However, if Nokia had continued with Symbian, which has far more potential than Windows Phone, I'd have been happy with it and would have stuck with the phone knowing there would be more updates. 
    Now, I've never used Symbian before, unless the 3310 ran it, though that was a vastly different experience that you cannot compare. I decided just to give it a chance, as the 808 was mega cheap on eBay and I thought, what the hell: my upgrade comes in April and I can sell it then if I don't like it. I broke my phone so I needed to buy a new one, and it's a bit of fun trying something other than Android or iOS for once. 
    I have to say I'm not a fan of Windows Phone and iOS being closed systems; I think you can remain open like Android while offering a great selection of apps on the store. Developers really need to get over piracy, especially platform owners and the makers of operating systems. I feel like there is a trend towards a closed OS as an easy way to combat piracy, rather than offering a great open experience and having a service worth paying for over piracy. For example, Steam has really defeated piracy on the PC. People still do it, they always will, and it isn't always a bad thing anyway. A lot of the time people will torrent stuff and then later buy it on Steam when they know it is a product they like. If that person goes on to tell other people about it, then it creates more sales. 
    From what I've read, Symbian has this bad image of piracy, but I really think that is due to a lackluster service on Nokia's end more than anything. The store needs a lot of work; while it is functioning, it does need a bit of a UI tweak, which obviously won't be happening. I also feel like you have to get your apps elsewhere to test them first, as there is no protection from Nokia like you get with Android, where you can get a refund within a certain time in case the app doesn't work. I've bought a few apps now which just plain don't work, and it has put me off buying anything else.
    Back to the topic though: I've never used Symbian before or had a Nokia since the 3310, and I'm really surprised at how good it is. I said in that thread it feels like a mix between iOS and Android, and it even has good functionality of its own. Belle FP2 is either a massive step up far too late, or Symbian has gotten a lot of hate for no reason; I don't know, as it's the only version I've used. The core functionality is good: it is fast, it doesn't drain much battery, and it does everything the other platforms do. Yeah, it looks a bit drab, but it's so easy to just change the theme, unlike iOS, and while you can do it on Android it isn't anywhere near as easy as on Symbian. I really respect Nokia for a lot of things with Symbian, and while I said the store needs work, it does function. I feel like it probably lacks a lot of options, like more categories, due to the lack of apps on it. 
    I've only really got two problems with Symbian. The first is that the stock browser is bad, which wouldn't be a problem if Nokia were continuing development of Symbian, because they could easily fix it in the future. I have switched to Opera, which fixes most of those issues, though I do wish Symbian supported Flash 11 because of my next point. The second issue is the lack of apps; really this is what has led me to selling the phone come my upgrade in April. You cannot do much on the phone compared to the others because of the lack of apps. If there were Flash 11 it would be less of an issue, but not being able to watch stuff like ITV Player or TVCatchup is a big downer. The lack of games is also depressing, and really the lack of development on Symbian is depressing. 
    From what I've seen of Belle FP2, Symbian is a great starting point for Nokia's smartphone future; sadly it looks like it has come too late and they've already ditched it for Windows Phone, which I think is a shame. The other big issue Nokia had is that they never really made good hardware compared to the rest in the past. They only started making competitive smartphone hardware just as they killed off Symbian. If the Lumia 920 had come in 2010, for example, I feel like more people would have taken notice of Symbian; sadly it came to Windows Phone and not Symbian. Like I said, it looks like it took too long for Symbian to get good... though I'm only guessing on that, as I've never used it before Belle FP2. 
    I do like it, though, and really wish Nokia had stuck with it.

    See I don't think Symbian is the problem, Belle is really nice and what Nokia have done is make it function like IOS and Android. I've used both in the past, I left IOS because I found it too restrictive and used Android ever since. Making the jump to Symbian Belle, it's pretty much the same experience but with no apps, though the core Belle experience feels more refined than Android. It probably just came too late for Nokia....
    I also think Nokia hasn't made a Smart Phone to get excited for until the 920. Everything else they made was sub par, poor specs, poor screen and just very clunky and I'm guessing from reviews, Symbian wasn't good before Belle. Like I said though if they released the 920 last year on Symbian Belle instead of Windows Phone 8, I think Symbian could have survived. Really all Belle needed was the hardware and it just never got it, the 808 is great for a camera but the hardware isn't mainstream like the 920 is. 
    I just find it weird how Nokia handled the whole situation, it took years for them to get going and once they did they retired Symbian and never released it on a relevant Smart Phone to give it a chance.
    The 808 has opened my eyes to Nokia hardware though; it is good and well made. I'll wait to see if they release a refreshed 808 with better specs and screen this year before I upgrade, as I've just seen rumours about it. I'm just not convinced about Microsoft making good on Windows Phone; I don't like the experience it currently offers, and Microsoft has never delivered anything worthwhile in the past apart from Windows 7. If I look at all their software, I don't remember any of it being good, and now with Windows Phone being a closed experience, you're stuck with Microsoft's software. 
    I just think it is a shame, I really like Belle, it just needs apps and Nokia to revive development for it. 

  • How to do performance tuning in EXadata X4 environment?

    Hi, I am pretty new to Exadata X4, and we had a database (OLTP/load mixed) created and data loaded. 
    Now the application is being tested against this database on Exadata.
    However, they claimed the test results were slower than the current production environment, and they sent out the explain plans, etc.
    I would like advice from the pros here on what specific Exadata tuning techniques I can use to find out why this is happening.
    Thanks a bunch.
    db version is 11.2.0.4

    Hi 9233598 -
    Database tuning on Exadata is still much the same as on any Oracle database - you should just make sure you are incorporating the Exadata-specific features and best practices as applicable. Reference MOS note: Oracle Exadata Best Practices (Doc ID 757552.1) for help configuring Exadata according to the Oracle documented best practices.
    When comparing test results with your current production system, drill down into specific test cases running specific SQL that is identified as running slower on Exadata than in the non-Exadata environment. You need to determine what specifically is running slower in the Exadata environment and why. This may also turn into a review of the Exadata and non-Exadata architectures. How is the application connected to the database in the non-Exadata vs. Exadata environment - what are the differences, if any, in the network architecture in between and in the application layer?
    You mention they sent the explain plans. Looking at the actual execution plans, not just the explain plans, is a good place to start... to identify what the difference is in the database execution between the environments. Make sure you have the execution plans of both environments to compare. I recommend using the Real Time SQL Monitor tool - access it through EM GC/CC from the performance page or using the dbms_sql_tune package. Execute the comparison SQL and use the RSM reports on both environments to help verify you have accurate statistics, where the bottlenecks are and help to understand why you are getting the performance you are and what can be done to improve it. Depending on the SQL being performed and what type of workload any specific statement is doing (OLTP vs Batch/DW) you may need to look into tuning to encourage Exadata smart scans and using parallelism to help.
    The SGA and PGA need to be sized appropriately. Depending on your environment and workload, and on how these were sized previously, your SGA may be too big. SGA sizes usually do not need to be as big on Exadata - this is especially true for DW-type workloads; a DW workload should rarely need an SGA over 16 GB. Conversely, PGA sizes may need to be increased. This all depends on evaluating your environment. Use AWR to understand what's going on; however, be aware that the memory advisors in AWR - specifically for SGA and buffer cache size - are not Exadata-specific and can be misleading as to the recommended size. Too large an SGA will discourage direct path reads and thus smart scans - and depending on the statement and the data being returned, it may be better to smart scan than to serve a mix of data from the buffer cache and disk.
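    One way to sanity-check whether smart scans are actually happening is to look at the cell statistics (a sketch - these statistics only exist on Exadata; values are cumulative since instance startup):

    ```sql
    -- If "returned by smart scan" stays near zero for a scan-heavy workload,
    -- offloading is likely being discouraged (e.g. buffered reads from an
    -- oversized SGA instead of direct path reads).
    SELECT name, ROUND(value/1024/1024) AS mb
    FROM   v$sysstat
    WHERE  name IN ('cell physical IO bytes eligible for predicate offload',
                    'cell physical IO interconnect bytes returned by smart scan',
                    'cell physical IO bytes saved by storage index');
    ```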
    You also likely need to evaluate your indexes and indexing strategy on Exadata. You still need indexes on Exadata, but many may no longer be needed and should be removed. For the most part you only need PK/FK indexes and true "OLTP" indexes on Exadata. Others may be slowing you down, because they prevent the Exadata storage offloading features from being used.
    You may also want to evaluate whether to enable other features that can help performance, including configuring huge pages at the OS and DB levels (see MOS notes 401749.1, 361323.1, and 1392497.1) and write-back caching (see MOS note 1500257.1).
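    As a sketch of the huge pages setup those notes walk through (the page count and memlock values below are placeholders - they must be derived from your actual SGA size; on x86-64 Linux one huge page is 2 MB):

    ```
    # /etc/sysctl.conf - reserve enough 2 MB huge pages to hold the whole SGA
    # (e.g. a 16 GB SGA needs a bit over 8192 pages)
    vm.nr_hugepages = 8200

    # /etc/security/limits.conf - memlock (in KB) must be >= the SGA size
    oracle soft memlock 16777216
    oracle hard memlock 16777216
    ```

    and on the database side (11.2.0.2 onwards), to force the SGA into huge pages or fail at startup:

    ```sql
    ALTER SYSTEM SET use_large_pages = 'ONLY' SCOPE = SPFILE;
    ```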
    I would also recommend installing the Exadata plug-ins into your EM CC/GC environment. These help you drill into the Exadata storage cells and see how things are performing at that layer. You can also learn the cellcli interface to do this from the command line, but the EM interface does make things easier and more visible. Are you consolidating databases on Exadata? If so, you should look into enabling IORM. Even with just one database on the Exadata, you probably want to at least enable and set an IORM objective matching your workload.
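    For example, from the command line on a storage cell (a sketch - run as celladmin on the cell node; cellcli -e executes a single command):

    ```
    cellcli -e "list metriccurrent CL_CPUT"        # cell CPU utilisation
    cellcli -e "list iormplan detail"              # current IORM plan and objective
    cellcli -e "alter iormplan objective = 'auto'" # set an IORM objective
    ```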
    I don't know your current production environment infrastructure, but I will say that if things are configured correctly, OLTP transactions on Exadata should usually be faster or at least comparable - though there are systems that can match and exceed Exadata's performance for OLTP operations just by "out-powering" it from a hardware perspective. For DW operations, Exadata should outperform any relatively hardware-comparable non-Exadata system. The Exadata storage offloading features should allow you to run these types of workloads faster - usually significantly so.
    Hope this helps.
    -Kasey

  • Data / Hosting Center design advice…

    Need advice on how to build a data/hosting center infrastructure (best practices).
    I need to deliver customer access on Ethernet, where customers can get access at variable rates (CAR ingress/egress). Some customers are connected with no redundancy, and others need redundancy (HSRP) between two routers.
    Should I build on a number of 7507 routers with GEIP+ interfaces to connect to the backbone routers, and a number of PA-2FE-TX interfaces to provide customer access, with each customer getting his own FE interface that has been CAR'ed down to the access rate the customer needs? The big issue here is that some customers don't need more than 4-10 Mbit/s (full duplex), and using a 100 Mbit interface for a customer who only needs under 10 Mbit is overkill. Any ideas on how to solve this? Is the solution to connect one 100 Mbit port to a switch, run trunk (dot1q) interfaces out on the switch, and then connect the customers to switch ports?
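    A sketch of the trunk-to-switch idea in IOS syntax (VLAN numbers, addresses, and rates below are placeholders): one 100 Mbit port carries a dot1q trunk, each customer gets a subinterface, and CAR limits each subinterface to the purchased rate:

    ```
    interface FastEthernet1/0
     no ip address
    !
    interface FastEthernet1/0.101
     description Customer-A (8 Mbit/s)
     encapsulation dot1Q 101
     ip address 192.0.2.1 255.255.255.252
     ! CAR: 8 Mbit/s average rate, conforming traffic transmitted, excess dropped
     rate-limit input 8000000 1500000 3000000 conform-action transmit exceed-action drop
     rate-limit output 8000000 1500000 3000000 conform-action transmit exceed-action drop
    ```

    This way only customers who actually need a dedicated FE port consume one; the sub-10 Mbit customers share the trunk and are capped per VLAN.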
    Or is the best solution to set up a number of 7606 routers?
    I also need to deliver the L2 infrastructure, so I need to build an L2 infrastructure that can support the customers' equipment (firewalls/servers), segmented into VLANs. Here I need to secure my infrastructure so that a customer error in the L2 network doesn't affect other customers. Is the answer to set up a number of 6500 switches connected to a number of smaller switches, using MST so that each customer has his own MST instance?
    Thanks in advance
    /Peter

    Hello,
    1. The PIX is the precursor to the ASA, so at this point the ASA is probably the better choice since it will be around longer, plus the base hardware has been beefed up compared to the PIX.
    2. Your external router depends on how much traffic you're going to be dropping into your hosting site. A 7200 series router is fairly beefy and should be able to handle what you need.
    3. One of the nice things about the 6500 is that you can put in an FWSM and segment all your different hosting servers for more granular network control.
    I don't have any case studies but will look around and post them if I find some.
    Patrick

  • Layer 2 connect - data center web hosting

    Hi, I need your help!
    I have a data center with Nexus 7000 switches, and servers with web servers connecting to them. My company does hosting for customers.
    The point is that we have shared resources like VMware on blades and so on, meaning the blade ports connect physically to the Nexus 7000 with trunks and VLANs for each customer.
    My Nexus connects to a firewall, then to WAN switches, then to routers connecting to the Internet, so hosting from the Internet is easy.
    The problem is that I now have a customer who wants to connect his switch over the WAN directly to his area in my data center. We built servers for him that are the same as his own, on the same subnet, and he runs replication to them.
    He doesn't have a router; he connects his switch to me at layer 2 over a WAN provider.
    Should I connect him directly to my Nexus with his VLAN? Or do I need another solution, like EoMPLS? What is the safest way to connect him at layer 2? And I repeat: our servers are shared between many customers on the same Nexus ports. Please help!


  • SNC Configuration

    Hi Gurus,
    I am working on the Contract Manufacturing Scenario for SNC.  The system landscape is ECC-XI-APO-ICH.
    When a purchase order is created, the IDoc is already being sent out to XI, but I am having problems sending from XI to APO.
    Do you know if there are SNC configurations to be done in SCM as the receiver of the XML message?
    Thanks a lot for your answers.
    Regards,
    Armi

    Hi Armi,
    SAProuter/SNC via Internet
    • SNC-secured SAProuter - SAProuter connections are established between SAP and the customer's SAProuter to provide data confidentiality and integrity services. These SNC connections complement the leased lines in the current SAPNet R/3 Frontend environment. State-of-the-art encryption, authentication, and access control technology will be employed. No additional hardware compared to a leased-line setup is required at either end of the connection. (See diagram below.)
    • Customers are required to install, in a Demilitarized Zone at their end of the connection, a SAProuter with an official, static IP address (DHCP addresses will not work) running SNC inbound and outbound connections to SAP. This SAProuter must be accessible from the Internet. All service connections between SAP and the customer must be made over the respective SAProuters.
    • The certificates needed are available on the SAP Service Marketplace.
    Requirements:
    • Internet connection: recommended minimum bandwidth = 64 kbps
    • SAProuter machine with an official (static) IP address for the SAProuter host
    • SAProuter installation package, SAP SNC libraries and executables (these may be downloaded from the SAP Service Marketplace)
    • A Demilitarized Zone at the customer site with a minimal setup as described in the networking section at http://service.sap.com/SYSTEMMANAGEMENT (choose: Security > Technical Track)
    • SAP Security Guide
    • Other networking equipment (routers and hubs) needed to form the network at the customer's premises
    More information on SNC connections is also available in the SAP Service Marketplace.
    Since the host running the SAProuter software is a full computer with an operating system, security at the operating system level must be hardened in order to minimise the risk of the machine being hacked from the Internet. One recommendation is, for example, to run a C2 security level compliant operating system. SAP takes no liability if the security of the company's network is compromised.
    Comparison - SAProuter/SNC via Internet:
    • Hardware requirements: firewall + SAProuter host in a DMZ
    • Software: SAProuter starting from NI version 35; SAPSECULIB can be obtained from the Service Marketplace
    • Network addresses (besides the addresses of the Internet router, firewall, …): 1 official static IP address for the SAProuter
    • Configuration issues: careful setup of the saprouttab is necessary for security; the saprouttab influences security strongly, as access is controlled via the saprouttab and the firewall
    • Encryption: by software
    • Encrypted data: TCP packets; only the data stream between SAProuters is encrypted, and encryption is handled at the application layer (OSI layer 7)
    • Minimum required free bandwidth: 64 kbit/s, but may also work with 32 kbit/s
    • Supported services on the SAP side: all except FTP (file download)
    • Key management: digital certificates requested via the Service Marketplace Public Key Infrastructure (PKI)
    • Key storage: in the file system
    • Operating system: the SAProuter resides on a computer, so security must be hardened at the operating system level (for example, a C2-level OS) to minimize the risk of the machine being hacked from the Internet
    • Additional expertise: SAProuter knowledge is usually available; SNC configuration requires additional knowledge
    • Standards: based on SNC, an SAP proprietary standard
    • Contributing to costs: firewall hardware and software; firewall administration costs; no additional license fee for the security library based on SECUDE
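    As an illustration of why the saprouttab deserves care (hosts, ports, and the distinguished name below are placeholders, not SAP's actual values): SNC-protected connections are permitted with KT entries keyed on the partner's SNC name, and an explicit deny-all should close the list:

    ```
    # saprouttab sketch: permit SNC-protected connections from the partner
    # SAProuter to one internal SAP system, deny everything else
    KT "p:CN=sapserv, OU=SAProuter, O=SAP, C=DE" 10.0.0.10 3200
    # plain (non-SNC) route for local administration only
    P 127.0.0.1 10.0.0.10 3299
    # explicit deny-all - must come last
    D * * *
    ```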
    Hope this helps.
    Thanks,
    Naga

Maybe you are looking for

  • Saving BLOB to Unix folder...

    I have an emp table with a BLOB column storing PDFs. How do I save all those BLOBs into a Unix folder in <empno>.pdf format? Does anyone have example procedures? I tried the PROC given at Re: Extract blob to file but it is giving Erro

  • Validation error in BDC

    Hi friends, I am doing a BDC for F-03 and everything is working fine except when the BDC is executed in background mode, i.e. 'N'. It works well in modes 'A' and 'E', but it gives a custom validation error in background processing. I am searching SDN since

  • CS3 PPixHandleUtilities.cpp-114 error

    I am running Windows Vista 32 bit and Adobe Premiere Pro CS3.  I have been working in HD video for over 6 months now and haven't had any troubles exporting footage from Canon T2i or T3i. However, now when I export it will render out about 30-45% and

  • Opening URL generated in script.

    I'm a novice and could use a bit of help. I'm trying to write a script that will capture the default gateway of the network I'm connected to and then open it in a browser. I've managed to get the IP of the default gateway, but every time I try to op

  • Printing problem in Protected Mode - Adobe Reader X

    Hello, I have a PDF printer driver and Adobe Reader X. If I right click  on my PDF file and select the print option, the generated PDF file will  become "gibberish". I have checked the settings of the Adobe Reader X  and I have found the "protected"