Benchmark, scorecard, criteria, or measure the quality of requirements

Hello,
A customer asked me a question about the quality of requirements and what the criteria are for business sign-off.
Besides the criteria we have in ASAP, which are:
• Uniqueness
• Traceability
• Completeness
• Unambiguousness
• Conciseness / Consistency
• Correctness
• Usability / ability to realize / verifiability
Do we have other measures for requirements quality in the methodology, or other approaches?
Thank you in advance.
Best regards
Mariya

Hi Mariya,
Let me provide some input pointing to the SAP ASAP roadmap and to an external link that can complement the templates I have suggested.
**Confined to the Blueprint sign-off criteria**
1. Quality of the to-be process documentation - Completeness will depend on the level of detail available as to whether the process is supported by the SAP standard or whether a WRICEF is required. If a WRICEF is required, the functional specification needs to be complete in terms of the information captured. Clarity on the measure of success and the scope of the process - what is included and what is not - is essential. Supporting visualization tools like iRise can further ensure the completeness of the to-be process documentation.
The to-be process documentation templates will surely help you to define a tracker - see the Process Requirements Template and the Business Process Description Template under Detailed Design - Business Process #1 - n.
2. Ensure that the to-be process is as close as possible to SAP best practices, meaning the fewest development requirements. If the process needs to be fine-tuned or modified to bring it closer to the SAP standard, doing so will increase its ability to be realized. If there are development requirements, there should be clarity on the ability to realize them in terms of a complexity matrix.
3. Prioritization of WRICEF requirements in MoSCoW terms - must have, should have, could have, would have - will be helpful when you sign off on the blueprint. This allows flexibility to prioritize could/would requirements based on progress.
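As a rough illustration of the MoSCoW gate (the tracker entries and IDs below are hypothetical, not an ASAP artifact), the sign-off scope can be sketched in a few lines of Python:

```python
# Hypothetical WRICEF tracker entries; the IDs and fields are illustrative only.
wricef = [
    {"id": "R-001", "type": "Report",      "priority": "must"},
    {"id": "E-002", "type": "Enhancement", "priority": "could"},
    {"id": "I-003", "type": "Interface",   "priority": "must"},
    {"id": "F-004", "type": "Form",        "priority": "would"},
]

# Blueprint sign-off gates on must/should items; could/would stay negotiable.
signoff_scope = [r["id"] for r in wricef if r["priority"] in ("must", "should")]
print(signoff_scope)  # ['R-001', 'I-003']
```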
4. You may also refer to a reference website like the KPI Library for benchmarking industry performance indicators and use them to check the "correctness" of your realized process. http://kpilibrary.com/
BBP phase sign-off also requires the following: legacy data migration & archiving, technical solution design, the development environment, etc., as you can see in the ASAP breakdown of the phase deliverables to be completed and likewise signed off.
https://websmp107.sap-ag.de/~sapidb/011000358700000661042013E/Index.htm
Warm Regards,
Ranjith Raghunathan

Similar Messages

  • Measuring the performance of Networking code

    Lately I've had renewed interest in Java networking, and been doing some reading on various ways of optimizing networking code.
    But then it hit me.
    I don't know any way of benchmarking IO/networking code. To take a simple example, how exactly am I supposed to know if read(buf, i, len) is more efficient than read()? Or how do I know the performance difference between setting sendBufferSize to 8k versus 32k? And so on.
    1)
    When people say "this networking code is faster than that", I assume they are referring to latency. Correct? Obviously these claims need to be verifiable. How do they do that?
    2)
    I am aware of Java profilers (http://java-source.net/open-source/profilers), but most of them measure things like CPU, memory, heap, etc. - I can't seem to find any profiler that measures networking code. Should I be looking at OS/system-level tools? If so, which ones?
    I don't want to make the cardinal sin of blindly optimizing because "people say so". I want to measure the performance and see it with my own eyes.
    Appreciate the assistance.
    Edited by: GizmoC on Apr 23, 2008 11:53 PM

    If you're not prepared to assume they know what they're talking about, why do you assume that you know what they're talking about?
    Ok, so what criteria determine if a certain piece of "networking code" is better/faster than another? My guess is: latency, CPU usage, memory usage - that's all I can think of. Anyway, I think we are derailing here.
    The rest of your problem is trivial. All you have to do is time a large download under the various conditions of interest.
    1)
    Hmm... well, for my purpose I am mainly interested in latency. I am writing a SOCKS server which is currently encapsulating multiplayer game data. Currently I pay an approx 100 latency overhead - I don't understand why, considering both the SOCKS client (my game) and the SOCKS server are localhost. And I don't think merely reading a few bytes of SOCKS header information can cause such an overhead.
    2)
    Let's say I make certain changes to my networking code which result in a slightly faster download - however, can I assume that this will also mean lower latency while gaming? Game traffic is extremely sporadic, unlike a regular HTTP download, which is a continuous stream of bytes.
    3)
    "Timing a large download" implies that I am using some kind of external mechanism to test my networking performance. Though this sounds like a pragmatic solution, I think there ought to be a formal, finely grained test harness that tests networking performance in Java, no?
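    For what it's worth, the "see it with my own eyes" measurement is straightforward to sketch. The thread is about Java, but the approach is language-neutral; here it is in Python: run a loopback echo server, time round trips with a high-resolution clock, and take the median so scheduler and GC outliers don't dominate. Rerunning with different settings (e.g. buffer sizes) lets you compare configurations directly.

```python
import socket
import statistics
import threading
import time

def echo_server(listener):
    # Accept one connection and echo every byte back until the peer closes.
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

def measure_round_trips(n=200, payload=b"x" * 16):
    # Bind to an ephemeral localhost port so no well-known port is needed.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # avoid Nagle delays

    samples = []
    with client:
        for _ in range(n):
            t0 = time.perf_counter_ns()
            client.sendall(payload)
            received = 0
            while received < len(payload):
                received += len(client.recv(4096))
            samples.append(time.perf_counter_ns() - t0)
    server.close()
    # Median is more robust than mean against one-off outliers.
    return statistics.median(samples) / 1e6  # milliseconds

if __name__ == "__main__":
    print(f"median round trip: {measure_round_trips():.3f} ms")
```

    The same harness run with different sendBufferSize-style settings gives an apples-to-apples latency comparison instead of relying on what "people say".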

  • LabVIEW/SignalExpress: How can I automate measuring the time between two pulses?

    Hi everyone, bit of a newbie here so please bear with me.  
    I'm a student at a university conducting a muon decay experiment with an oscilloscope connected to some photomultipliers.  To summarize, if a muon enters the detector it will create a very small width pulse (a few ns).  Within a period of 10µs it may decay, creating a second pulse.  The oscilloscope triggers on the main pulse 5-15 times per second, and a decay event happens roughly 1-2 times per minute.  I am trying to collect 10 hours of data (roughly 1500-2000 decay events) and measure the time it takes for each decay.
    I've been able to set recording conditions in SignalExpress that start recording on the first pulse and stop recording on the last. The Tektronix TDS 1012 oscilloscope, however, feeds 2500 points of data from this snapshot into a text file (for use in Excel or other software). Even if I perfectly collected the data, I would have 100,000+ data points, and it would be too much to handle. I don't know how (or if it's possible) to reduce the sample size.
    To conclude, using Labview or SignalExpress, I would like to be able to have the software
    1.  Differentiate between the single pulse detections and double pulse decay events
    2.  Record only when two pulses appear on the oscilloscope
    3.  Measure the time between these two pulses and ONLY that to minimize the amount of data recorded.
    Any help would be GREATLY appreciated, thanks!

    Hi wdavis8,
    I am not that familiar with Tektronix, but there should be a place in the dialog that you go through when you create the action step to acquire data where you can specify a sampling rate. That would allow you to reduce the number of data points you are seeing, but it may reduce the quality of the data.
    If it’s just a matter of that much data being hard to dig through when you have that many points, you could do some analysis on the data after the fact, and then create a new file with only the data you want to look at. For example, you could identify the peaks in the data, and based on the distance between them or the difference in magnitude, selectively write data to a new file.  
    Here is some information about peak detection in LabVIEW:
    http://www.ni.com/white-paper/3770/en/
    You could also do some downsampling on the data to get fewer data points:
    https://decibel.ni.com/content/docs/DOC-23952
    https://decibel.ni.com/content/docs/DOC-28976
    Those are just a few quick ideas. 
    Kelsey J
    Applications Engineer
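    The after-the-fact analysis Kelsey describes (find the peaks, keep only double-pulse events, record just the interval) can be sketched outside LabVIEW too. This is only an illustration with a made-up threshold and trace, not code for the TDS 1012 file format:

```python
def pulse_times(trace, dt, threshold):
    """Times where the trace crosses the threshold upward (rising edges)."""
    return [i * dt for i in range(1, len(trace))
            if trace[i] >= threshold > trace[i - 1]]

def decay_interval(trace, dt, threshold):
    """Time between exactly two pulses, or None for single-pulse events."""
    t = pulse_times(trace, dt, threshold)
    return t[1] - t[0] if len(t) == 2 else None

dt = 4e-9                    # 2500 points across 10 us
trace = [0.0] * 2500         # illustrative trace, not real scope data
trace[100] = 1.0             # muon arrival
trace[600] = 1.0             # decay electron 2 us later
interval = decay_interval(trace, dt, threshold=0.5)
print(round(interval * 1e6, 3))  # 2.0 (microseconds)
```

    Writing only the non-None intervals to a file reduces each 2500-point snapshot to a single number, which keeps 10 hours of data manageable.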

  • In iPhoto how do I stop files from auto resizing as they import? I don't want my files reduced to 72. The option to choose size on the export menu not suitable for up sizing, as you can't restore the quality removed?


    Short answer: they are not resized, and there is no quality loss. The dpi is only set when you decide what size you're printing at.
    Longer answer:  Dpi means nothing in the digital world of your computer. There are no "inches" to have "dots per..." Size is measured in pixels. That's the same on your camera. It doesn't take 10 x 8 or 6 x 4 shots. It takes shots measured in megapixels. For instance 4,000 x 3,000 is a 12 megapixel camera.
    Using that example, the shot from that camera has 12 million pixels. So that's how many "dots" there are. To decide the ratio of dots per inch, you now need to decide the "inches" part - and that's printing. Print at 10 x 8 and the dpi will be 4,000/10, or about 400 dpi. At 6 x 4 it's 4,000/6, or about 667 dpi. Work the other way: print at 300 dpi and the resulting image will be about 13 inches on the longer side.
    So, your photo has a fixed number of pixels. Changing the dimensions of the print will vary the dpi; changing the dpi will vary the dimensions of the print.
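    TD's arithmetic is easy to check for any camera; a two-line sketch (the pixel counts are just the example's):

```python
def dpi_for_print(pixels_long_side, inches_long_side):
    """Resolution you get when a fixed pixel count is printed at a given size."""
    return pixels_long_side / inches_long_side

def inches_for_dpi(pixels_long_side, dpi):
    """Print size you get when you fix the resolution instead."""
    return pixels_long_side / dpi

# The 4,000 x 3,000 (12 megapixel) example:
print(dpi_for_print(4000, 10))              # 400.0 dpi on a 10-inch side
print(round(inches_for_dpi(4000, 300), 1))  # 13.3 inches at 300 dpi
```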
    For more see http://www.rideau-info.com/photos/mythdpi.html
    Regards
    TD

  • Help with reducing the size of an image but keeping the quality

    Hello,
    I am wondering how to resize an image but keep the quality. I am creating a folder design for work and I want to include six images on the back, each 156px x 118px.
    The images I have are slightly different in size, but for example's sake, one of them is 225px x 158px. The quality is very poor when I reduce the size.
    I have tried changing the ppi to 72 and to 300, I have tried resizing all in one go and in a number of goes, and I have tried the bicubic sharpener, but I have had no luck with it.
    I am wondering if anyone can help with this,
    Thanks
    Tracy

    Hi Tracy,
    A couple of questions:
    I am creating a folder design for work...
    So this is printed? How large? Pixels are not a unit-of-measure for print.
    If for print, you are looking at a ppi of 300 or higher AT PRINT SIZE.
    So, say you want a 3in x 4in picture. That would mean your original would have to be at least 900px x 1200px (i.e. 300ppi x width, 300ppi x height).
    If your original files are not at least that resolution... it is hard to add quality after the fact, unless you are working with vector art.
    I have tried changing the ppi to 72 and to 300, I have also tried resizing all in one go
    Be sure to do that on copies of the originals. Keep in mind that the PPI number by itself is completely meaningless; it only has use in describing a to-be-printed image when combined with the print size.
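    The "at least 900px x 1200px" check generalizes to any print size; a tiny sketch (the 300 ppi default is just the rule of thumb mentioned above):

```python
def required_pixels(width_in, height_in, ppi=300):
    """Minimum source pixels for a clean print at the given size and resolution."""
    return round(width_in * ppi), round(height_in * ppi)

print(required_pixels(3, 4))  # (900, 1200)
```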

  • Optimizing the Quality of S-video in FCP X

    I have captured some S-VHS footage taken several years ago for editing in FCP X.  The footage was captured using a Matrox MXO2 Mini fed by a JVC S-VHS deck through a DataVideo TBC-1000 TBC, and was captured  in the Pro res 422 format. The quality of this footage is (for S-VHS) generally good, but I feel that it could benefit from some sharpening, occasionally from noise removal, and perhaps other cleaning measures.  I have been searching online for plugin-ins for FCP X that would be useful in optimizing the quality of this S-VHS material.  Two examples of plugins that might be useful for this purpose are Digital Film Tool's Refine and PHYX Cleaner.  I realize that nothing is going to turn this S-VHS material into clips with the detail and quality of current HD video.  However, these S-VHS memories are important to me and I would like to optimize the quality as much as reasonably possible. 
    I would appreciate hearing from any of you that either have experience with Cleaner or Refine or could suggest other plugins for FCP X that would serve to optimize the quality of this S-VHS footage. Thank you.
    Tom

    Ian, Russ, and Warwick,
    I want to sincerely thank all of you for responding with some excellent advice and suggestions. I have been using the trial versions of several plugins on my S-VHS footage, as well as some of the filters supplied in FCP X. I have already learned three things: (1) Judicious and conservative use of any of these filters is required to avoid making the footage appear even worse. (2) It's important to judge the results after applying the filter at 100% image size rather than at either a smaller or an enlarged size. With the former the results are difficult to see, and enlarging the image beyond 100% (e.g. going to full-screen mode) on a 30 in. Apple Cinema Display gives an awful-looking image because of artifacts brought on by scaling the image beyond its native resolution. (3) These filters, even those that are GPU accelerated, really require a lot of rendering time. (I sure wish Apple would release a new Mac Pro, as this project would really benefit from more computing horsepower than my five-year-old Mac Pro affords.)
    Warwick, thank you so much for the suggestion about the DENOISER plugin from Neat Video and the CoreMelt lock-n-load stabilizer.  I am carefully evaluating these as well.  It is really going to take me quite some time to experiment with the judicious application of these filters to decide which ones to buy, but I do believe that either some of the third party plugins or even those available in FCP X (thanks Ian for calling my attention to those!) will significantly improve this S-VHS footage.
    This is a great community to turn to for advice, and I truly appreciate the fact that each of you took the time to share your experiences with me and to offer suggestions. Thank you all!
    Tom

  • Unable to load the data into Cube Using DTP in the quality system

    Hi,
    I am unable to load data from the PSA to a cube using a DTP in the quality system for the first time.
    I am getting errors like "Data package processing terminated" and "Source TRCS 2LIS_17_NOTIF is not allowed".
    Please suggest .
    Thanks,
    Satyaprasad

    Hi,
    Some InfoObjects were missing while collecting the transport.
    I collected those objects and transported them; now it's working fine.
    Many Thanks to all
    Regards,
    Satyaprasad

  • The quality of built-in camera on macbook pro with retina display is far worse than any other mac. Why?

    I just got the new MacBook Pro with Retina display about a week ago, and I've noticed that the quality of its built-in camera is quite bad.
    It's even worse than my 2009 iMac or my sister's 2012 13" MacBook Pro. I don't get why this is called a FaceTime HD built-in camera when I can clearly see that it is not even near HD. Or is it just my computer? Do I have to bring it to the Apple Store?

    It's advertised as 720p, so yes, it is the same as all the other MBPs. It may just be that the resolution of your monitor is stretching the pixels. Don't worry about bringing it to the Apple Store.

  • I am having trouble keeping the quality of my video when exporting to my external drive.  I change the kind to "original" but when I view the video from the external drive once exported, the sound and picture are distorted.

    I am having trouble keeping the quality of my video when exporting to my external hard drive. I have tried to export from iPhoto after changing the kind to "original", but when I open the video from the external drive the quality is affected. I have also tried copying the video to my desktop and then dragging and dropping it onto the drive - same problem. Any suggestions? I would like to save everything to my external drive so that I can delete the entire iPhoto library and then reimport it to correct a few things.

    The file sizes are identical. In the Finder I did a search for all .mov files. When they came up, I went through them and named each one so that they would be easier to find. I am a new Mac user and am used to Windows. I realized later that I should not have done this - with a Mac, files should be named within the iPhoto app, not the Finder. When I went to open the videos in iPhoto, it wouldn't allow me; I had to go into the Finder, copy them to the desktop, and reimport them into iPhoto. Then the videos worked again.
    However, I had the problem with poor video quality when copying to the external drive before that. Actually, the first problem I had was when I copied to the external drive: the videos were copying as images. I learned that you have to change "kind" to original before exporting, and that problem was solved - now they are videos, not images, but not of great quality. Skips, and the sound and images are not in sync, etc.

  • How to measure the time a pulse is high for?

    I am using Pulse measure.vi to measure the output of a comparator. My comparator output feeds to an LED. The duty cycle is 50% so the LED just flashes on and off. I want to measure the time the output is high ('ON time') and I have been getting this by just multiplying the pulse width measure by the no. of pulses but I want to modify it to measure the 'ON time' of random signals with different duty cycles...
    The aim is that I am taking in a signal and need the LED to turn on when the signal 'ON time' reaches a certain specified time.
    But before I continue with the LED I am wondering how to add up the time the signal is high for?

    Hi PinkLady4218,
    You should be able to use one of the shipping examples to do what you need, please open LabVIEW and go to Help >> Find Examples.
    From the Example Finder please go to Hardware Input and Output >> DAQmx >> Counter Measurements >> Period or Pulse Width >> Meas Buffered Semi-Period-Continuous.vi
    You will then need to deinterleave the output array, as it will show high time, then low time, then high time, and so on.
    You will need to confirm the order in which these values appear; then you can use the "Decimate 1D Array" function from the Array palette.
    Regards
    JamesC
    NIUK and Ireland
    It only takes a second to rate an answer
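    The deinterleaving step JamesC describes is simple once the semi-period array is in hand. A sketch of the idea, in plain Python rather than LabVIEW's Decimate 1D Array, with made-up durations:

```python
def total_high_time(semi_periods, high_first=True):
    """Sum the 'high' intervals from an interleaved [high, low, high, ...] array."""
    start = 0 if high_first else 1
    return sum(semi_periods[start::2])

# Alternating high/low durations in seconds (illustrative values):
durations = [0.010, 0.010, 0.012, 0.008, 0.011]
print(round(total_high_time(durations), 3))  # 0.033
```

    Comparing the accumulated high time against the specified limit is then a single threshold check, regardless of the duty cycle.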

  • How to measure the phase shift using AC analysis?

    Hi,
    I have a simple RLC circuit consisting of no more than 4 components. If I hook up the network analyser function to the circuit and read the S11 values, I can see a change in the input impedance and a phase shift when I run it across a range of frequencies.
    I am trying to do the same with the AC analysis tool. I've placed a probe on the positive input line. Using the AC analysis I've obtained the input impedance with the expression mag(V(Probe1))/mag(I(Probe1)). Everything looks fine, but I just can't seem to find the correct expression for the phase shift. All of the values I get out seem to flat-line.
    It would be of great help if someone could point me in the right direction, as I'm running out of ideas.
    Thanks for the help.
    Attachments:
    M1.png 3155 KB
    m2.png 618 KB

    Hi 08Ultrasound,
    You need to measure the voltage phase difference over a load, as the voltage source is ideal and so will always be in phase.
    Regards,
    Adam Brown
    Applications Engineer
    National Instruments UK

  • How to measure the phase of a sinusoidal signal?

    I need to measure the frequency and the phase of a sinusoidal signal. I tried to use the Advanced Single Tone Measurements.vi, but the phase measured in each iteration (each second) keeps changing. (The measured frequency is not an integral number of Hz, so the first point of the next iteration is at a location different from that of the first point of the current iteration... I guess this is the reason.) How can I really measure the phase of a signal so that it is not always changing with time? When I change the phase of the input sinusoidal signal, the measured phase should change, though.
    I'm using LabVIEW 7.1 and PCI-6110.
    Thank you very much!
    Marlon

    Marlon,
    Without DAQ hardware I cannot run your VI. DAQmx is not supported on my platform (Mac OS X), so I cannot examine your VI in detail.
    1. The AI VI will wait until it has the amount of data specified. So if you are collecting one second's worth of data at a time, it will wait one second before completing. The 50 ms Wait will run in parallel, so it has no effect on the timing in this case.
    2. Continuous AO is possible, depending on the hardware you have. However, I have no experience with implementing it. Since your frequency is such that you do not end the AO data segment at the end of a signal cycle, you need to be careful to avoid discontinuities in the signal sent to the AO.
    3. There is no data dependency between the AI and the AO. It is possible that the AO could run after the AI in any given iteration of the loop.
    4. Your phase reference should be the excitation signal. The best method of evaluating the response of the beam would be to use two sensors, one at the shaker and one at the tip. Then measure the phase shift of the signal at the tip with respect to the shaker signal. If two sensors cannot be used, either measure the voltage sent to the shaker or use the simulated signal that you send to the AO as the reference. In either case you would need to compensate for the response of the shaker.
    5. Consider the phase shift in the filter. The steady state phase shift is about 14 degrees. The initial transient lasts about 5 cycles of the input waveform.
    6. Your simulation sampling rate is 1000 samples/second. While this satisfies the Nyquist criterion for a 379 Hz signal, it does not give you much data to work with for the phase information. If the hardware will handle it, I would go to 10000 samples/second.
    Lynn
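    Point 4 (measure phase against the excitation, not against absolute time) is what makes the reading stable. A numeric sketch of the idea, shown in plain Python rather than LabVIEW, with invented sample numbers: each signal's absolute phase depends on where the acquisition happened to start, but the difference between response and excitation does not. Note the window holds an integer number of cycles, which is exactly the condition Marlon's non-integral-Hz tone violates.

```python
import math

def tone_phase_deg(samples, freq_hz, fs_hz):
    """Phase of one tone (relative to cosine at sample 0) via a single-bin DFT."""
    i = sum(s * math.cos(2 * math.pi * freq_hz * n / fs_hz)
            for n, s in enumerate(samples))
    q = -sum(s * math.sin(2 * math.pi * freq_hz * n / fs_hz)
             for n, s in enumerate(samples))
    return math.degrees(math.atan2(q, i))

fs, f, n_samp = 10_000.0, 100.0, 1_000   # window holds an integer number of cycles
excitation = [math.cos(2 * math.pi * f * n / fs) for n in range(n_samp)]
response = [math.cos(2 * math.pi * f * n / fs - math.radians(14))
            for n in range(n_samp)]       # e.g. the ~14 degree filter shift above

shift = tone_phase_deg(response, f, fs) - tone_phase_deg(excitation, f, fs)
print(round(shift, 1))  # -14.0
```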

  • How to measure the baseline of a noisy, pulsed signal

    Hi
    I am measuring the torque exerted by a large motor on a shaft using a load cell and lever arm. The shaft runs at approx 150 rpm. I have attached a drawing that shows the output I get. This is a test rig.
    I have written some code that measures the maximum peak out of a group of approx 5 peaks and writes it to a shift register. This gives me an idea of the maximum torque "spike".
    I also wish to measure the baseline torque (due to the bearings in the machine). Even when highly filtered (my noise filter is set to 49 Hz), the signal exhibits this noise, which is probably due to vibration in the system. The signal is zeroed when the motor is not running.
    Does anyone have any ideas on how to measure the "baseline" torque? The large spike in torque prevents me from doing a running average. Can anyone think of a way of averaging just the noisy part of the signal to get an average value? I aim to subtract the average baseline torque from the peak value to get an idea of the torque due to the event which causes the spike.
    Any help would be greatly appreciated.
    Many thanks.
    Attachments:
    drawing of torque signal.gif 26 KB

    Thanks for the reply. I understand what you are saying. However, I might have to modify my method for measuring the peaks if I choose to implement your idea. I have taken a screenshot of my "peak finder" code and attached it.
    Basically, the reset terminal is wired to a timer which outputs a pulse every few seconds. This resets the VI (a standard NI one, I think) and sets the peak magnitude back to zero. This way, I am windowing the signal and measuring the maximum peak in every window. This is what I need to do.
    So I could use a logical filter to feed data to the running average only if the amplitude of the signal is less than a certain threshold, and the current value has similar low peaks on either side of it.
    How would you construct the code to delay the evaluation so that the values in front of and behind the current data point can be analysed?
    thanks again
    Attachments:
    peak_find_screenshot.jpg 45 KB
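    The gating rule discussed above (only average samples whose neighbours are also quiet) can be sketched as offline post-processing; the threshold and data below are invented for illustration. The "delay" question answers itself offline, since the whole record is available and each point can look at the values on both sides of it:

```python
def baseline_mean(signal, threshold, guard=1):
    """Average only samples whose +/-guard neighbourhood stays below threshold,
    so torque spikes (and their shoulders) are excluded from the baseline."""
    quiet = [x for i, x in enumerate(signal)
             if all(abs(v) < threshold
                    for v in signal[max(0, i - guard): i + guard + 1])]
    return sum(quiet) / len(quiet) if quiet else None

sig = [1.0, 1.2, 0.9, 1.1, 9.0, 8.0, 1.0, 1.1, 0.9, 1.0]  # spike at indices 4-5
print(round(baseline_mean(sig, threshold=3.0), 2))  # 1.02
```

    The peak value minus this baseline then gives the spike torque attributable to the event itself.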

  • How to measure the performance of Extractor

    Hi,
    How do I measure the time taken by the extractor when executed from RSA3 for a given selection?
    Lots of threads mention ST05... but these transactions are too granular to analyse.
    How do I get the overall time taken? I need the overall time taken and the time taken by the individual SQL statements... please provide specific pointers.
    Thanks,
    Balaji

    Maybe SE30 can help you....
    Regards,
    Fred

  • How to measure the performance of sql query?

    Hi Experts,
    How do I measure the performance, efficiency and CPU cost of an SQL query?
    What are all the measures available for an SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    It'll be useful for me to write efficient queries...
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How to measure the performance, efficiency and cpu cost of a sql query?
    What are all the measures available for an sql query?
    How to identify i am writing optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, number of sorts etc.
    This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
    Note however that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is also covered only in a very limited way (number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk.
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for the further statistics available. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
